
Open Access 07-11-2024

Gabor Phase Retrieval via Semidefinite Programming

Authors: Philippe Jaming, Martin Rathmair

Published in: Foundations of Computational Mathematics


Abstract

We consider the problem of reconstructing a function \(f\in L^2({\mathbb R})\) given phase-less samples of its Gabor transform, which is defined by
$$\begin{aligned} {\mathcal {G}}f(x,y) :=2^{\frac{1}{4}} \int _{\mathbb R}f(t) e^{-\pi (t-x)^2} e^{-2\pi i y t}\,\text{ d }t,\quad (x,y)\in {\mathbb R}^2. \end{aligned}$$
More precisely, given sampling positions \(\Omega \subseteq {\mathbb R}^2\) the task is to reconstruct f (up to global phase) from measurements \(\{|{\mathcal {G}}f(\omega )|: \,\omega \in \Omega \}\). This non-linear inverse problem is known to suffer from severe ill-posedness. As for any other phase retrieval problem, constructive recovery is a notoriously delicate affair due to the lack of convexity. One of the fundamental insights in this line of research is that the connectivity of the measurements is both necessary and sufficient for reconstruction of phase information to be theoretically possible. In this article we propose a reconstruction algorithm which is based on solving two convex problems and, as such, amenable to numerical analysis. We show, empirically as well as analytically, that the scheme accurately reconstructs from noisy data within the connected regime. Moreover, to emphasize the practicability of the algorithm we argue that both convex problems can actually be reformulated as semi-definite programs for which efficient solvers are readily available. The approach is based on ideas from complex analysis, Gabor frame theory as well as matrix completion. As a byproduct, we also obtain improved truncation error estimates for Gabor expansions with Gaussian generators.
Notes
Communicated by Rachel Ward.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1 Introduction

Phase retrieval problems are an important and common class of problems in the physical sciences with applications ranging from diffraction imaging [38] through audio signal processing [21] to quantum mechanics [34]. In mathematical terms, phase retrieval is concerned with the reconstruction from phaseless linear measurements. More formally, given a linear operator \(T:X\rightarrow X'\) between two complex function spaces X and \(X'\), consider the map
$$ \mathcal {A}:f\mapsto |Tf|^2,\quad f\in X $$
where |Tf| denotes the pointwise modulus of Tf, i.e. \(|Tf|(\omega )=|Tf(\omega )|\). Solving the phase retrieval problem amounts to solving the inverse problem associated with the forward operator \(\mathcal {A}\). Clearly, \(\mathcal {A}f=\mathcal {A}h\) if \(h =cf\) with \(|c|=1\). To remove this obvious source of ambiguity one does not distinguish between functions which coincide up to a multiplicative factor of unit modulus,
$$ f \sim h \quad \Longleftrightarrow \quad \exists c\in {\mathbb C},\,|c|=1:\quad h=cf. $$
In a nutshell, the reconstruction task is to come up with the left inverse \(\mathfrak {L}\) of \(\mathcal {A}\), that is, a map satisfying
$$ \forall f\in X:\quad \mathfrak {L}(|Tf|^2) \sim f. $$
It is important to note that it is by no means clear that the left inverse exists. For many instances of phase retrieval problems it is already difficult to decide the question of uniqueness, i.e., whether \(\mathcal {A}\) is injective on \(X/\!\sim \) or not. Typically, there are three fundamental aspects studied within the scope of phase retrieval: uniqueness, stability and actual reconstruction. In terms of the left inverse, these categories correspond to existence of \(\mathfrak {L}\), continuity of \(\mathfrak {L}\) and construction of \(\mathfrak {L}\), respectively.
Ideally, given a concrete phase retrieval problem one would like to have a good understanding of all three aspects. Uniqueness and stability address the issue of well-posedness. Algorithmic methods to reconstruct are of fundamental importance for practical applications. In addition – as for any other numerical method – performance guarantees and a-priori estimates for the error are highly desirable, as is the capability of the method to deal with noisy input data.
Generally speaking, the algorithmic solution of phase retrieval problems is a notoriously delicate matter. Different variations of algorithms based on alternating projections have been around for decades, such as the methods by Gerchberg-Saxton [19] and by Fienup [18] to name just two. These schemes have the advantage of being uncomplicated to implement. However, the absence of convexity makes them difficult to analyse, which is a major drawback when it comes to establishing convergence guarantees. A rather recent trend is based on the idea of lifting the problem into a higher dimensional space and solving a convex reformulation (or relaxation) there. For a number of algorithms of this type, results have been established which guarantee the accuracy of the method in a sufficiently randomized setting. We mention the ’PhaseLift’ algorithm (Candès, Strohmer and Voroninski [8]) and the ’PhaseCut’ algorithm (Waldspurger, d’Aspremont and Mallat [40]) as two of the earliest contributions in this direction. For a comprehensive discussion of numerical aspects of phase retrieval as well as a detailed overview of the literature we refer to the recent survey [17].

1.1 Aim of This Paper & Related Results

In this paper we consider a particular instance of a phase retrieval problem, namely where the linear operator is the short-time Fourier transform with Gaussian window. Precisely, this means that \(T={\mathcal {G}}\), the Gabor transform defined by
$$ {\mathcal {G}}f(x,y) = 2^{1/4} \int _{\mathbb R}f(t) e^{-\pi (t-x)^2} e^{-2\pi i y t}\,\text{ d }t,\quad f\in L^2({\mathbb R}),\,(x,y)\in \Omega , $$
with \(\Omega \subseteq {\mathbb R}^2\). Evaluations of the Gabor transform correspond to correlation between f and time-frequency shifts of a Gaussian
$$ {\mathcal {G}}f(x,y) = \langle f, \pi (\lambda ) \varphi \rangle _{L^2({\mathbb R})},\quad \lambda =(x,y) $$
where \(\varphi :=2^{1/4} e^{-\pi \cdot ^2}\) and \(\pi (\lambda )\) denotes a time-frequency shift, i.e.,
$$ (\pi (\lambda )g)(t) = e^{2\pi i yt} g(t-x). $$
The squared modulus of the Gabor transform is called the (Gabor) spectrogram and denoted by \({\mathcal {S}}f=|{\mathcal {G}}f|^2\). The set \(\Omega \) represents the locations where phaseless information is given. If \(\Omega = {\mathbb R}^2\) (in fact, \(\Omega \) open suffices) it is well known that any \(f\in L^2({\mathbb R})/\sim \) is uniquely determined by \(\{{\mathcal {S}}f(\omega ), \omega \in \Omega \}\). Contrary to that, the uniqueness question for discrete sets \(\Omega \) is already quite subtle. While sampling on a lattice (irrespective of its density!) is not enough [3, 27], the union of three suitable perturbations of a sufficiently dense lattice yields uniqueness [28].
Phase retrieval is inherently unstable in infinite dimensions [1, 6] which aggravates matters even more. In the above setting this means that the left inverse of \(f\mapsto {\mathcal {S}}f\) (presuming that it exists) cannot be a well-behaved smooth function. Sources of instabilities are functions \(f=f_1+f_2\) consisting of two (or more) components: we say that \(f_1\) and \(f_2\) are components if their respective Gabor transforms are essentially supported on disjoint domains (in particular, \({\mathcal {G}}f_1 \overline{{\mathcal {G}}f_2}\approx 0\)). In this case we can flip the sign in front of either component and get similar phaseless observations:
$$\begin{aligned} |{\mathcal {G}}(f_1-f_2)|^2&= |{\mathcal {G}}f_1|^2-2{\text {Re}}({\mathcal {G}}f_1 \overline{{\mathcal {G}}f_2}) + |{\mathcal {G}}f_2|^2 \\&\approx |{\mathcal {G}}f_1|^2+2{\text {Re}}({\mathcal {G}}f_1 \overline{{\mathcal {G}}f_2}) + |{\mathcal {G}}f_2|^2 = |{\mathcal {G}}f|^2. \end{aligned}$$
On a positive note, functions of the aforementioned type are the only source of instability; that is, if the spectrogram is connected, then phase information is stably determined by the phaseless observation [25, 29]. Moreover, stability is measured in terms of a quantity that depends on a “Cheeger constant” quantifying the quality of the connectedness (see Remark 2.5 below).
In terms of reconstruction from phaseless Gabor measurements, to the best of our knowledge there is currently only one result in this direction. For signals residing in the shift-invariant space generated by a Gaussian, Grohs and Liehr [26] establish an explicit inversion formula and prove that its discretization leads to a stable reconstruction method under suitable connectedness conditions. While we do not remove the connectedness constraint, we would like to stress that the results we present here do not require any hypothesis on the structure of the signal to be reconstructed, which is a major novelty in the field.
Escudero et al. [16] consider the problem of locating the positions of the zeros of the Bargmann transform, which is closely related to our problem as Gabor and Bargmann transform coincide up to normalization (and due to the fact that the Bargmann transform is an entire function, and thus essentially determined by its roots).
Finite-dimensional variants of the STFT phase retrieval problem (also called the “Coded Diffraction Pattern”, or CDP, measurement model) have also been investigated extensively [2, 4, 15, 20, 30, 35]. However, we are not aware of results that quantitatively relate solutions of those finite-dimensional analogues to the solution of the continuous problem. A major contribution of this paper is therefore that we establish such quantitative estimates in our case.
As outlined above absence of uniqueness and uniform stability is a real issue for the problem at hand. Furthermore, any practical implementation is restricted to process a finite amount of data (in particular, \(\Omega \) finite) while the problem itself is infinite dimensional. Thus, it is inevitable to compromise on the original objective of actually constructing the left inverse \(\mathfrak {L}\) of
$$\mathcal {A}:f\mapsto {\mathcal {S}}f = |{\mathcal {G}}f|^2.$$
It is therefore the aim of this article to identify a map \(\tilde{\mathfrak {L}}:{\mathbb R}^\Omega \rightarrow L^2({\mathbb R})\) which serves as an approximate left inverse of \(\mathcal {A}\).

1.2 Contribution

In this paper we propose an algorithmic solution to compute an approximate left inverse \(\tilde{\mathfrak {L}}\). More specifically, the algorithm takes spectrogram samples as input, i.e.,
$$ {\mathcal {S}}f(\omega ) = |{\mathcal {G}}f(\omega )|^2,\quad \omega \in \Omega $$
with \(\Omega \subseteq {\mathbb R}^2\) finite, and aims to reconstruct f up to a multiplicative constant of unit modulus. Our method is based on solving two convex programs (CP), both of which can actually be formulated as semidefinite programs (SDP). For the reader who is not so familiar with convex optimization, the prototype of an SDP has the form
$$\begin{aligned} \begin{aligned} \min \quad&\langle X,A_0\rangle _F\\ \text {s.t.} \quad&\langle X,A_k\rangle _F = b_k,\quad k=1,\ldots ,M\\&X\succeq 0 \end{aligned} \end{aligned}$$
with \((A_k)_{k=0}^M\) a family of Hermitian matrices, \(\langle A,B\rangle _F= {{\,\textrm{tr}\,}}(B^H A)\) the Frobenius inner product and \((b_k)_{k=1}^M\subseteq {\mathbb R}\). We refer to the survey article by Vandenberghe and Boyd [39] on semidefinite programming and highlight the following snippet from its abstract, which partly explains the popularity of SDPs.
Although semidefinite programs are much more general than linear programs, they are not much harder to solve.
Further, the case where the constraints are inequalities \(\langle X,A_k\rangle _F \ge b_k\) also fits into this framework. We point out that it often makes sense to soften the constraints by replacing \(\langle X,A_k\rangle _F = b_k\) by \(|\langle X,A_k\rangle _F-b_k|\le \varepsilon \) with \(\varepsilon \) a positive tolerance parameter, for example when the system is over-determined or in the presence of noise. An SDP of the above type can easily be set up and solved using the CVXPY package in Python.
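To make this concrete, the following minimal CVXPY sketch sets up a prototype SDP of the above form with softened constraints; the matrices \(A_k\) and the data \(b_k\) are random placeholders (not quantities from this paper), chosen so that the program is feasible.

```python
# Minimal sketch (placeholder data, not the paper's algorithm): the prototype SDP
# above with softened constraints |<X, A_k>_F - b_k| <= eps, solved with CVXPY.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, M, eps = 4, 6, 1e-3

def random_hermitian(n):
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (B + B.conj().T) / 2

Ak = [random_hermitian(n) for _ in range(M)]
X_true = random_hermitian(n)
X_true = X_true @ X_true.conj().T          # a PSD "ground truth", so the SDP is feasible
bk = [np.real(np.trace(A.conj().T @ X_true)) for A in Ak]
A0 = np.eye(n)                             # objective <X, A0>_F = tr(X)

X = cp.Variable((n, n), hermitian=True)
frob = lambda A, Y: cp.real(cp.trace(A.conj().T @ Y))   # <Y, A>_F = tr(A^H Y)
constraints = [X >> 0] + [cp.abs(frob(Ak[k], X) - bk[k]) <= eps for k in range(M)]
problem = cp.Problem(cp.Minimize(frob(A0, X)), constraints)
problem.solve(solver=cp.SCS)
print(problem.status, problem.value)
```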
The reconstruction scheme is accompanied by numerical analysis. Our results guarantee that the proposed method is accurate within the stable regime (that is, if the spectrogram exhibits sufficient connectivity) and in the presence of noise, presuming that sufficient data is provided (i.e., \(\Omega \) is rich enough). Thus, our algorithm joins the ranks of provably convergent semi-definite programming-based methods for phase retrieval problems such as the famous ’PhaseLift’ and ’PhaseCut’ algorithms. We point out that – in contrast to the aforementioned contributions – our results hold in a purely deterministic context and also provide solutions to a continuous problem rather than to a discrete analogue (the CDP measurement model).

1.3 Main Ideas

Before providing some explanation of our approach we introduce the number
$$ \mathfrak {a}:=\frac{1}{\sqrt{2}} = 0.7071\ldots . $$
The square lattice with mesh size \(\mathfrak {a}\) will be denoted by \(\Lambda =\mathfrak {a}{\mathbb Z}^2\). It is important to keep in mind that in practice \(\Lambda \) needs to be finite so as to deal with finite-dimensional objects. Later on, \(\Lambda \) will be intersected with a rectangle of finite size; in this introductory part, however, this aspect will be neglected for the sake of simplicity, as will the potential influence of noise. As indicated below (Fig. 1), the proposed method consists of three sub-steps. The underlying ideas behind the individual pieces will be discussed next.

1.3.1 Reduction to Discrete Phase Recovery Problem

Given \(f\in L^2({\mathbb R})\) we introduce a sequence \(v=(v_\lambda )_{\lambda \in \Lambda } \in {\mathbb C}^\Lambda \) by virtue of
$$ v_\lambda :={\mathcal {G}}f(\lambda ), \quad \lambda \in \Lambda . $$
Even though the index set \(\Lambda \subseteq {\mathbb R}^2\) has a two-dimensional appearance, we think of v as an ordinary (one-dimensional) vector. As is well known from Gabor frame theory, the infinite vector v carries all the relevant information of the function f to be reconstructed [13, 33, 36, 37]. More explicitly, the relationship between v and f can be described in terms of an expansion formula involving the so-called dual window \(\psi \in L^2({\mathbb R})\):
$$\begin{aligned} f = \sum _{\lambda =(a,b)\in \Lambda } v_\lambda \pi (\lambda )\psi \end{aligned}$$
(1.1)
Thus, it suffices to recover the vector v up to a global phase factor.

1.3.2 Lifting

The term ’lifting’ refers to the idea of embedding the vector space \({\mathbb C}^\Lambda \) into the matrix space \({\mathbb C}^{\Lambda \times \Lambda }\). We consider the infinite matrix \(v\otimes \overline{v}\in {\mathbb C}^{\Lambda \times \Lambda }\), which is defined by
$$ (v\otimes \overline{v})_{\lambda ,\lambda '} :=v_\lambda \overline{v_{\lambda '}} = {\mathcal {G}}f(\lambda )\overline{{\mathcal {G}}f(\lambda ')},\quad \lambda ,\lambda '\in \Lambda . $$
as an alternative representation of the vector v. The matrix entry \((v\otimes \overline{v})_{\lambda ,\lambda '}\) contains the information of relative phase change between the locations \(\lambda \) and \(\lambda '\). Note that \(v\otimes \overline{v}\) determines v uniquely up to a multiplicative constant of unit modulus – \({\textrm{span} \,}\{v\}\) is the solitary nontrivial eigenspace of the matrix \(v\otimes \overline{v}\). Thus, this approach neatly fits into the scope of phase retrieval. The lifting trick is well-known within the phase retrieval community and has proven to be quite useful, for instance to turn a given phase retrieval problem (which is non-convex by nature) into a convex problem by considering a relaxation in the matrix space.

1.3.3 Matrix Estimator

We are left with the problem of determining the entries of the matrix \(v\otimes \overline{v}\) given spectrogram data \(({\mathcal {S}}f(\omega ))_{\omega \in \Omega }\). In the best case (that is, if \(\Omega \supseteq \Lambda \)) we can directly observe the diagonal entries of the matrix as
$$ (v\otimes \overline{v})_{\lambda ,\lambda } = {\mathcal {S}}f(\lambda ), $$
while no off-diagonal entries are provided. To go beyond the diagonal we construct a predictor function V based on holomorphically extending the spectrogram. In Sect. 6.1 we prove that there exist \(L:{\mathbb R}^2\times {\mathbb R}^2\rightarrow {\mathbb C}^2\) and \(Q:{\mathbb R}^2\times {\mathbb R}^2 \rightarrow {\mathbb C}\) such that
$$\begin{aligned} {\mathcal {G}}f(p+u) \overline{{\mathcal {G}}f(p)} = F(L(p,u)) \cdot e^{Q(p,u)},\quad p,u\in {\mathbb R}^2 \end{aligned}$$
(1.2)
where \(F:{\mathbb C}^2\rightarrow {\mathbb C}\) is the unique entire extension of \({\mathcal {S}}f\) (that is, \(F\in \mathcal {O}({\mathbb C}^2)\) and \(F\big |_{{\mathbb R}^2}={\mathcal {S}}f\)). Assuming that F is known, it follows from (1.2) that
$$ V(\lambda ,\lambda ') = F(L(\lambda ',\lambda -\lambda ')) \cdot e^{Q(\lambda ',\lambda -\lambda ')} $$
correctly predicts the entries of the matrix \(v\otimes \overline{v}\). In practice it is not viable to identify F exactly. To obtain a feasible variant of the idea we will pick a finite-dimensional Ansatz space \(\{F_A,\,A\succeq 0\}\) whose members are parameterized by positive semi-definite matrices of a fixed size. Further, we formulate a semi-definite program over \(\{A\succeq 0\}\), the convex cone of positive semi-definite matrices, such that its solution \(A_*\) gives rise to an entire function \(F_{A_*}\) which serves as an approximation for F. The resulting predictor is then
$$ \tilde{V}(\lambda ,\lambda ') = F_{A_*}(L(\lambda ',\lambda -\lambda ')) \cdot e^{Q(\lambda ',\lambda -\lambda ')} $$
Analyzing the accuracy of the estimator reveals that \(\tilde{V}(\lambda ,\lambda ')\) is a reliable approximation for \((v\otimes \overline{v})_{\lambda ,\lambda '}\) if
$$ (\lambda ,\lambda ') \in \mathcal {P}:=\{(\lambda ,\lambda ')\in \Lambda \times \Lambda ,\, |\lambda -\lambda '|\le r\}, $$
with \(r>0\) a suitable threshold parameter, whereas accuracy cannot be guaranteed when \(|\lambda -\lambda '|>r\). Hence, \(\tilde{V}\) can only be used to predict relatively few matrix entries.

1.3.4 Matrix Completion

That matrix completion techniques can be quite useful in the context of solving phase retrieval problems is well documented [7]. For the problem at hand, to infer the remaining entries of the matrix \(v\otimes \overline{v}\) we exploit the structural knowledge we have on that matrix: \(v\otimes \overline{v}\) has rank one and is positive semi-definite. We attack the matrix completion problem by solving the convex relaxation
Find \(Y\succeq 0\) s.t. \(|Y_{\lambda ,\lambda '}-\tilde{V}(\lambda ,\lambda ')| \le \tau \), for all \((\lambda ,\lambda ')\in \mathcal {P}\),
with \(\tau >0\) a suitable threshold. Quite conveniently, the relaxation can be formulated in terms of a SDP. We pick up a result due to Demanet and Jugnon [14] (and establish a slight modification) to show that the convex relaxation is guaranteed to yield an accurate result provided that the data exhibit sufficient connectivity. The precise quantitative concept to measure connectivity is the spectral gap of the (vertex-weighted) graph Laplacian associated to f (see Definition 2.4).
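For illustration, a hedged CVXPY sketch of this feasibility problem is given below; the lattice, the predictor \(\tilde{V}\) and the index set \(\mathcal {P}\) are replaced by small synthetic stand-ins (here noise-free, so any feasible Y should essentially be the rank-one truth).

```python
# Sketch of the convex relaxation: find Y >= 0 with |Y_p - V_tilde_p| <= tau on P.
# Synthetic stand-ins: a 1-D point set and an exact (noise-free) predictor V_tilde.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n, r, tau = 10, 2.5, 1e-3
pts = np.arange(n) / np.sqrt(2)                        # toy stand-in for the lattice points
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
V_tilde = np.outer(v, v.conj())                        # ideal predictor values
P = [(i, j) for i in range(n) for j in range(n) if abs(pts[i] - pts[j]) <= r]

Y = cp.Variable((n, n), hermitian=True)
cons = [Y >> 0] + [cp.abs(Y[i, j] - V_tilde[i, j]) <= tau for (i, j) in P]
cp.Problem(cp.Minimize(0), cons).solve(solver=cp.SCS)

# the (rescaled) top eigenvector of Y then serves as the estimate of v, up to phase
w, U = np.linalg.eigh(Y.value)
u = np.sqrt(max(w[-1], 0)) * U[:, -1]
print(abs(np.vdot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v)))   # close to 1
```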

1.4 Outline

The remainder of this article is organized as follows. Section 2 contains the precise definition of the reconstruction scheme as well as statements concerning the accuracy of the algorithm, which constitute the main results of this article. In Sect. 3 we present some numerical experiments in order to verify the capabilities of the proposed algorithmic solution empirically. Section 4 forms a collection of required preliminary material. In Sect. 5 we discuss an inexact version of the dual window reconstruction formula (1.1). Section 6 is concerned with a detailed analysis of the estimator \(\tilde{V}\). Finally, Sect. 7 contains the proof of the first main theorem, Theorem 2.8.

2 Algorithm and Results

2.1 The Reconstruction Scheme

This section is concerned with presenting the algorithmic solution of the phase retrieval problem at hand. As the method is quite involved we need to introduce a few objects and quantities first.
Throughout \(\mathfrak {a}=\frac{1}{\sqrt{2}}\). The signal to be reconstructed is represented by \(f\in L^2({\mathbb R})\). \(\Gamma \subseteq \mathfrak {a}{\mathbb Z}^2\) denotes a finite subset of the lattice. Furthermore, we denote the cone of positive semi-definite matrices by \(\mathfrak {A}_+(\Gamma )=\{A\in {\mathbb C}^{\Gamma \times \Gamma }:\, A\succeq 0\}\).
We will write \(\mathcal {O}({\mathbb C}^d)\) for the space of holomorphic functions of d complex variables on the full space \({\mathbb C}^d\) (\(d=2\) for most of the paper). We will not use advanced multivariable complex analysis; basic knowledge of complex analysis in one variable is sufficient for reading this paper.
Definition 2.1
(Ansatz function) Let \(\mathcal {J}=\begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix}\). Given \(\lambda ,\mu \in {\mathbb R}^2\), let
$$ \Phi _{\lambda ,\mu }(z):= C(\lambda ,\mu ) e^{i\pi z^T \mathcal {J} (\lambda -\mu )} \cdot e^{-\pi \big ( z-\frac{\lambda +\mu }{2}\big )^2}, \quad z\in {\mathbb C}^2, $$
where \(C(\lambda ,\mu ):= \exp \left\{ -\frac{\pi }{4} \big |\lambda -\mu \big |^2 +\pi i (\lambda _1\lambda _2-\mu _1\mu _2)\right\} \). To every \(A\in {\mathbb C}^{\Gamma \times \Gamma }\) we associate the entire Ansatz function \(F_A\) defined by
$$ F_A(z):=\sum _{\lambda ,\mu \in \Gamma } A_{\lambda ,\mu }\cdot \Phi _{\lambda ,\mu }(z),\quad z\in {\mathbb C}^2. $$
The prediction process involves evaluating a certain entire function of two complex variables.
Definition 2.2
(Evaluation operator) Let
$$\begin{aligned} L:&{\left\{ \begin{array}{ll} {\mathbb R}^2 \times {\mathbb R}^2 & \rightarrow {\mathbb C}^2\\ (p,u) & \mapsto p + \frac{1}{2} \begin{pmatrix} 1 & -i\\ i & 1 \end{pmatrix} u \end{array}\right. } \\ Q:&{\left\{ \begin{array}{ll} {\mathbb R}^2 \times {\mathbb R}^2 & \rightarrow {\mathbb C}\\ (p,u) & \mapsto -\frac{\pi }{2} |u|^2-\pi i(2p_1+u_1)u_2 \end{array}\right. } \end{aligned}$$
Given \(G\in \mathcal {O}({\mathbb C}^2)\), the evaluation of G is then defined by
$$ {\mathcal {E}}[G](p,u):= G(L(p,u))\cdot e^{Q(p,u)}. $$
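To fix ideas, here is a small numpy sketch (an illustration under our reading of Definitions 2.1 and 2.2, with a toy index set \(\Gamma \)) of the Ansatz function \(F_A\) and of the evaluation operator \({\mathcal {E}}\):

```python
# Sketch of the Ansatz function F_A (Definition 2.1) and the evaluation operator
# E[G](p,u) = G(L(p,u)) * exp(Q(p,u)) (Definition 2.2); toy Gamma, plain numpy.
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])

def Phi(lam, mu, z):
    """Phi_{lam,mu}(z) for z in C^2; z**2 below means z.z (no conjugation)."""
    lam, mu, z = map(np.asarray, (lam, mu, z))
    C = np.exp(-np.pi / 4 * np.sum((lam - mu) ** 2)
               + 1j * np.pi * (lam[0] * lam[1] - mu[0] * mu[1]))
    quad = np.sum((z - (lam + mu) / 2) ** 2)          # holomorphic square, not |.|^2
    return C * np.exp(1j * np.pi * (z @ (J @ (lam - mu)))) * np.exp(-np.pi * quad)

def F(A, Gamma, z):
    """Ansatz function F_A(z) = sum_{lam,mu} A[lam,mu] * Phi_{lam,mu}(z)."""
    return sum(A[k, l] * Phi(Gamma[k], Gamma[l], z)
               for k in range(len(Gamma)) for l in range(len(Gamma)))

def evaluate(G, p, u):
    """E[G](p,u) with L and Q as in Definition 2.2."""
    p, u = np.asarray(p, float), np.asarray(u, float)
    L = p + 0.5 * np.array([[1, -1j], [1j, 1]]) @ u
    Q = -np.pi / 2 * np.sum(u ** 2) - 1j * np.pi * (2 * p[0] + u[0]) * u[1]
    return G(L) * np.exp(Q)

# toy usage: a 2x2 matrix A over two lattice points
a = 1 / np.sqrt(2)
Gamma = [np.array([0.0, 0.0]), np.array([a, 0.0])]
A = np.array([[1.0, 0.3], [0.3, 0.5]])
print(evaluate(lambda z: F(A, Gamma, z), np.zeros(2), np.array([a, a])))
```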
We shall denote the canonical dual window of the Gabor frame \((\pi (\lambda )\varphi )_{\lambda \in \mathfrak {a}{\mathbb Z}^2}\) by \(\psi \). Further, we point out that an explicit formula for \(\psi \) is available, see Eq. (4.9).
With this we are ready to formulate the reconstruction scheme, which takes as input noisy spectrogram samples \( \sigma _\omega \approx {\mathcal {S}}f(\omega ), \omega \in \Omega \), with \(\Omega \subseteq {\mathbb R}^2\) finite, and aims to reconstruct f up to a global phase factor.
Remark 2.3
(CPs are SDPs) Both, the CP in step 1 as well as the CP in step 2 can actually be recast into a SDP.
Indeed, to see this for the CP in step 1 we refer to Lemma 6.8, where we explicitly construct, for any given point \(z=(x,y)\in {\mathbb R}^2\), a hermitian matrix \(W_z\in \mathfrak {A}_+(\Gamma )\) with the property
$$ F_A(z) = \langle A, W_z\rangle _F, \quad A\in \mathfrak {A}_+(\Gamma ). $$
Furthermore, we add a single element ’\(\infty \)’ to the index set \(\Gamma \) and denote \(\tilde{\Gamma }=\Gamma \cup \{\infty \}\). With this, we introduce a slack variable \(\mu \in {\mathbb R}\) by considering
$$ \tilde{A} = \left( \begin{array}{c|c} A & *\\ \hline *& \mu \end{array} \right) \in \mathfrak {A}_+(\tilde{\Gamma }). $$
It is not difficult to see that A is a solution of (2.1) if and only if \(\tilde{A}\) is a solution of the SDP
$$\begin{aligned} \begin{aligned} \min _{\tilde{A}\in \mathfrak {A}_+(\tilde{\Gamma })} \mu&= \min _{\tilde{A}\in \mathfrak {A}_+(\tilde{\Gamma })} \Big \langle \tilde{A}, \left( \begin{array}{c|c} 0 & 0\\ \hline 0 & 1 \end{array} \right) \Big \rangle _F\\ \text {s.t. } \,&\left| \Big \langle \tilde{A}, \left( \begin{array}{c|c} W_\lambda & 0\\ \hline 0 & 0 \end{array} \right) \Big \rangle _F - \sigma _\lambda \right| \le \varepsilon , \quad \lambda \in \Omega \\&\langle \tilde{A}, D_\gamma \rangle \le 0, \quad \gamma \in \Gamma \end{aligned} \end{aligned}$$
where the diagonal matrix \(D_\gamma \) is given by
$$D_\gamma (\alpha ,\beta ) = {\left\{ \begin{array}{ll} -1,\quad & \alpha =\beta =\infty \\ 1, & \alpha =\beta =\gamma \\ 0, & \text{ otherwise. } \end{array}\right. }$$
For the corresponding statement with regards to the CP in step 2, we refer to Lemma 4.1, where we explicitly construct, given an arbitrary point \(p\in \Lambda \times \Lambda \), a pair of hermitian matrices \(E_p^r\) and \(E_p^i\) with the property that for each \(A\in \mathfrak {A}_+(\Lambda )\),
$$ \langle A, E_p^r\rangle _F = {\text {Re}}(A_p), \quad \text {and} \quad \langle A, E_p^i\rangle _F = {\text {Im}}(A_p). $$
Therefore, every solution of the SDP
$$\begin{aligned} \begin{aligned} \min _{Y\in \mathfrak {A}_+(\Lambda )}&\langle Y, 0\rangle _F\\ \text {s.t.} \quad&|\langle Y, E_p^r \rangle _F - {\text {Re}}T_p| \le \frac{\varepsilon '}{2}, \quad p\in \mathcal {P}\\&|\langle Y, E_p^i \rangle _F - {\text {Im}}T_p| \le \frac{\varepsilon '}{2}, \quad p\in \mathcal {P} \end{aligned} \end{aligned}$$
is also a solution of the CP (2.2). Finally, we point out that off-the-shelf solvers (in particular, the CVXPY package which was used to produce the experiments carried out in Sect. 3) deal with these aspects automatically and internally. For example, to solve the CP (2.1) we simply declared \(\max _{\lambda \in \Gamma } A_{\lambda ,\lambda }\) as the objective function to be minimized.
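For completeness, a hedged sketch of how the step-1 CP (as described in this remark) might be passed to CVXPY: since \(F_A\) is linear in A, the spectrogram constraints can be written directly. `Phi` denotes the Ansatz kernel \(\Phi _{\lambda ,\mu }\) from the sketch after Definition 2.2, and `Gamma`, `Omega`, `sigma`, `eps` are placeholders rather than the calibrated quantities of the paper.

```python
# Hedged sketch of the step-1 convex program: minimize the largest diagonal entry
# of A subject to |F_A(omega) - sigma_omega| <= eps and A >= 0. `Phi(lam, mu, z)`
# is the Ansatz kernel of Definition 2.1 (see the earlier sketch).
import numpy as np
import cvxpy as cp

def solve_step1(Gamma, Omega, sigma, eps):
    n = len(Gamma)
    A = cp.Variable((n, n), hermitian=True)
    cons = [A >> 0]
    for om, s in zip(Omega, sigma):
        Phi_mat = np.array([[Phi(Gamma[k], Gamma[l], np.asarray(om, float))
                             for l in range(n)] for k in range(n)])
        F_A_om = cp.sum(cp.multiply(A, Phi_mat))        # F_A(omega), linear in A
        cons.append(cp.abs(F_A_om - s) <= eps)
    objective = cp.Minimize(cp.max(cp.real(cp.diag(A))))
    cp.Problem(objective, cons).solve(solver=cp.SCS)
    return A.value
```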

2.2 Reconstruction of Gabor Coefficients

The intermediate objective of the algorithm is concerned with the reconstruction of the coefficients \(\{{\mathcal {G}}f(\lambda ),\lambda \in \Lambda \}\). Before we present the result which guarantees the accuracy of this first component of the method we again have to settle some terminology.
First we attach a graph to the provided spectrogram samples in order to specify the relevant quantity of connectivity.
Definition 2.4
(Signal associated graph and spectral gap) Given a triple consisting of \(f\in L^2({\mathbb R})\), a finite set \(\Lambda \subseteq \mathfrak {a}{\mathbb Z}^2\) and \(r>0\), the signal associated graph \(G=(V,E,\alpha )\) is the vertex weighted graph with vertex set \(V=\Lambda \), edge set E defined by
$$ (u,v)\in E \quad \Leftrightarrow \quad 0<|u-v|<r, $$
and vertex weights \(\alpha _v={\mathcal {S}}f(v)\), \(v\in V\). Let \(0=\lambda _1\le \lambda _2 \le \ldots \) denote the eigenvalues of the Laplacian of G. The spectral gap is defined as
$$ \lambda _2(f,\Lambda ,r):= \lambda _2. $$
Remark 2.5
(Spectral gap and connectivity) We recall in Lemma 4.15 that G is connected if and only if there is an actual spectral gap, i.e. if \(\lambda _2\) is strictly positive. This suggests to conceive the spectral gap \(\lambda _2\) as a concept to quantify the connectivity of G: the larger the spectral gap of the Laplacian, the more connected is G.
A formal justification of this intuition was established by Cheeger [9, pages 195–200] in the context of Riemannian geometry by relating the spectral gap to a geometric invariant coined the Cheeger constant. Analogous results have subsequently been obtained in graph theory [11].
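The following numpy snippet illustrates the connectivity criterion only: it builds the r-neighbourhood graph on a patch of the lattice and computes the second-smallest eigenvalue of the plain combinatorial Laplacian. Note that the Laplacian used in the paper additionally incorporates the vertex weights \(\alpha _v={\mathcal {S}}f(v)\), which the sketch deliberately omits.

```python
# Illustration of "lambda_2 > 0 iff connected" on the r-neighbourhood graph.
# Caveat: this uses the unweighted combinatorial Laplacian; the paper's Laplacian
# also carries the vertex weights alpha_v = Sf(v).
import numpy as np

def connectivity_gap(points, r):
    pts = np.asarray(points, float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    W = ((d > 0) & (d < r)).astype(float)      # edge iff 0 < |u - v| < r
    L = np.diag(W.sum(axis=1)) - W             # combinatorial graph Laplacian
    return np.sort(np.linalg.eigvalsh(L))[1]   # second-smallest eigenvalue

a = 1 / np.sqrt(2)
Lambda = [(a * i, a * j) for i in range(-3, 4) for j in range(-3, 4)]
print(connectivity_gap(Lambda, r=1.0) > 0)     # True: the patch is connected
```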
In the remainder we will stick to index sets \(\Lambda ,\Gamma ,\Omega \) of a rather simple form. Namely, we assume that these sets arise as the intersection between a centered rectangle with a square lattice. Precisely, given numbers \(T,S,R,s>0\) we set
$$\begin{aligned} \Lambda&= ([-T,T]\times [-S,S]) \cap \mathfrak {a}{\mathbb Z}^2, \end{aligned}$$
(2.1)
$$\begin{aligned} \Omega&= ([-T-R,T+R]\times [-S-R,S+R]) \cap s{\mathbb Z}^2, \end{aligned}$$
(2.2)
$$\begin{aligned} \Gamma&= ([-T-2R,T+2R]\times [-S-2R,S+2R])\cap \mathfrak {a}{\mathbb Z}^2. \end{aligned}$$
(2.3)
The parameters need to be chosen reasonably for the reconstruction method to succeed. For this purpose we introduce the following technical notion.
Definition 2.6
Let \(f\in L^2({\mathbb R})\). Given parameters \(T,S,R,r,s,\varepsilon ,\varepsilon '>0\) let \(\Lambda ,\Gamma ,\Omega \) be defined as in (2.1), (2.2) and (2.3), respectively.
We say that the parameters are well calibrated if all of the following relations hold true:
$$\begin{aligned} \varepsilon&\le \left[ \frac{e^{-\frac{17\pi }{32}r^2}}{1.33\times 10^5} \times \min \left\{ \frac{\Vert {\mathcal {G}}f\Vert _{\ell ^2(\Lambda )}^2}{|\Lambda |^2}, \frac{ \lambda _2(f,\Lambda ,r)}{192 r^2 }\right\} \right] ^2, \end{aligned}$$
(2.4)
$$\begin{aligned} \varepsilon '&=(3.1\times 10^4) \sqrt{\varepsilon } e^{\frac{17\pi }{32}r^2}, \end{aligned}$$
(2.5)
$$\begin{aligned} s&\le \frac{0.3}{\sqrt{ \ln \left( \frac{2}{3\varepsilon } \right) }}, \end{aligned}$$
(2.6)
$$\begin{aligned} R&\ge \max \left\{ 2.1+0.9 \sqrt{\ln \left( \frac{1}{\varepsilon }\right) }, \frac{r+s^{-1}}{2} \right\} \end{aligned}$$
(2.7)
Remark 2.7
(Number of required samples) The parameters T, S, r are considered to be fixed numbers. Typically \(r\in [0,2]\), and expressions involving r in (2.4)–(2.7) may just be regarded as constants. The parameter \(\varepsilon \) plays the role of the desired error margin. From a qualitative perspective (neglecting numerical constants) we get that the required number of samples in the “well-calibrated regime” is roughly (asymptotically for \(\varepsilon \) close to 0)
$$\begin{aligned} |\Omega |&= \Big | s{\mathbb Z}^2 \cap ([-T-R,T+R] \times [-S-R,S+R])\Big | \\&\asymp \frac{(T+R)(S+R)}{s^2}\\&\asymp \ln \left( \frac{2}{3\varepsilon }\right) \cdot \left( T + \sqrt{\ln \left( \frac{1}{\varepsilon }\right) } \right) \cdot \left( S + \sqrt{\ln \left( \frac{1}{\varepsilon }\right) } \right) \end{aligned}$$
Hence, \(|\Omega | = \mathcal {O}(\ln ^2 \frac{1}{\varepsilon })\) as \(\varepsilon \rightarrow 0\).
The first main result states that the proposed method recovers the samples \(({\mathcal {G}}f(\lambda ))_{\lambda \in \Lambda }\) accurately under appropriate assumptions and reads as follows.
Theorem 2.8
Let \(f\in L^2({\mathbb R})\) be such that \(\Vert {\mathcal {S}}f\Vert _{L^\infty }\le 1\). Suppose that \(T,S,R,r,s,\varepsilon ,\varepsilon '\) are well calibrated. Let \(\sigma \in {\mathbb R}^{\Omega }\) be such that
$$ \Vert \sigma - {\mathcal {S}}f\Vert _{\ell ^\infty (\Omega )} \le \frac{\varepsilon }{2}. $$
Then it holds that both convex problems in Algorithm 1 are feasible. Moreover, the top (unit-norm) eigenvector \(v\in {\mathbb C}^\Lambda \) of the matrix Y computed in step 2 satisfies that
$$\begin{aligned} & \min _{\theta \in {\mathbb R}} \left| \sqrt{{{\,\textrm{tr}\,}}(Y)} v - e^{i\theta } ({\mathcal {G}}f(\lambda ))_{\lambda \in \Lambda } \right| \nonumber \\ & \quad \le 177 e^{0.84 r^2} \cdot \left( 1+20 r \sqrt{\frac{\Vert {\mathcal {G}}f\Vert _{\ell ^2(\Lambda )}^2}{\lambda _2(f,\Lambda ,r)}} \right) \cdot \root 4 \of {\varepsilon }. \end{aligned}$$
(2.8)
The proof of Theorem 2.8 can be found in Sect. 7.
We would like to stress that the main contribution of this paper is that our algorithm offers predictable error bounds. According to Theorem 2.8 the accuracy of the prediction is driven by two quantities:
– The noise level \(\varepsilon \). Our analysis unfortunately leads to a rather pessimistic power \(\varepsilon ^{1/4}\), but our numerical simulations below show that the algorithm performs better in practice. It is also likely that, under some signal models, the dependence on noise can be brought down to a more acceptable level.
– The quantity
$$\begin{aligned} C_{stab}(f,r):=e^{0.84r^2}\left( 1+ 20r\sqrt{\frac{\Vert {\mathcal {G}}f\Vert _{\ell ^2(\Lambda )}^2}{\lambda _2(f,\Lambda ,r)}} \right) \end{aligned}$$
which depends on the connectivity of the support of \({\mathcal {G}}f\) through the associated graph. The statement is empty if \(\lambda _2(f,\Lambda ,r)=0\), i.e., if the associated graph is not connected. For a generic \(f\in L^2({\mathbb R})\), \({\mathcal {G}}f\) does not vanish on the finite set \(\Lambda \). Thus, provided that \(r\ge \mathfrak {a}\), the graph is indeed connected for generic f.
Overall, our estimates are conservative, both in the dependence on the noise level and on connectivity, and with respect to the numerical constants appearing here (which we decided to keep explicit rather than hiding them in O's). This is an outcome of the proofs: many steps were needed, none of which is totally optimal. Though slight improvements would be possible, they would come at the expense of the length and readability of the paper. As far as we are aware, all phase retrieval algorithms that are currently available either offer no convergence guarantees or guarantee convergence only with high probability and, in all cases, the provable estimates are far too pessimistic and involve large numerical constants (often so bad that their explicit value is not given). However, when running those algorithms, the real situation is often much better than what the mathematics is able to predict. This drawback is also present here.
Remark 2.9
(On the choice of r) The performance of the reconstruction method may crucially depend on the choice of r. Given f, the optimal choice is
$$r=r_*(f):={{\,\textrm{argmin}\,}}_{r>0} C_{stab}(f,r).$$
It is important to note that \(r_*(f)\) can be computed if samples \(({\mathcal {S}}f(\lambda ))_{\lambda \in \Lambda }\) are given: the Laplacian (and therefore also the spectral gap \(\lambda _2(f,\Lambda ,r)\)) depends solely on \(({\mathcal {S}}f(\lambda ))_{\lambda \in \Lambda }\) and moreover as \(r\mapsto \lambda _2(f,\Lambda ,r)\) is piecewise constant, it suffices to check a finite number of candidates r (cf. Figure 2 for the three smallest choices).
Note also that r intervenes in two ways in \(C_{stab}(f,r)\): for small r, the dominating term will be the factor \(\lambda _2(f,\Lambda ,r)^{-1}\) while for large r, the exponential term \(e^{0.84r^2}\) will take over. In practice, we have not found it useful to go beyond \(r=2\) (Fig. 3).
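In code, the selection rule of this remark could be sketched as follows; `spectral_gap` is a hypothetical helper that is assumed to return \(\lambda _2(f,\Lambda ,r)\) from the spectrogram samples (implementing the weighted Laplacian of Definition 2.4), and the candidate values of r are the pairwise lattice distances, at which \(r\mapsto \lambda _2(f,\Lambda ,r)\) jumps.

```python
# Sketch of r_* = argmin_r C_stab(f, r) over the finitely many candidate values.
# `spectral_gap(spec_samples, points, r)` is a hypothetical helper returning
# lambda_2(f, Lambda, r); spec_samples are the values Sf(lambda), lambda in Lambda.
import numpy as np

def choose_r(spec_samples, points, spectral_gap, r_max=2.0):
    pts = np.asarray(points, float)
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    candidates = sorted({round(x, 12) for x in dists.ravel() if 0 < x <= r_max})
    energy = float(np.sum(spec_samples))       # = ||Gf||^2_{l2(Lambda)}, since Sf = |Gf|^2
    best = None
    for r in candidates:
        gap = spectral_gap(spec_samples, points, r + 1e-9)  # edges require |u - v| < r
        if gap <= 0:
            continue
        C = np.exp(0.84 * r ** 2) * (1 + 20 * r * np.sqrt(energy / gap))
        if best is None or C < best[1]:
            best = (r, C)
    return best                                # (r_*, C_stab(f, r_*)), or None
```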
Remark 2.10
(Comparison with existing stability results) Next we put the above result into perspective with earlier stability results for the Gabor phase retrieval problem. More specifically, with the outcomes of [25, 29], where it has been shown that on a domain \(\Omega \subseteq {\mathbb R}^2\) on which \(|{\mathcal {G}}f|\) is connected, phase information is stably encoded in the phase-less data \(|{\mathcal {G}}f|\). Even though inequality (2.8) is of discrete type, qualitatively it is very much reminiscent of the aforementioned results: The results in the continuous setting quantify connectivity in terms of the Poincaré constant which is known to coincide with the spectral gap of the associated diffusion operator, a generalization of the Laplacian. We refer to [5, Section 2] for more details. Similarly to this – albeit on a purely discrete level – we employ the spectral gap of the corresponding graph Laplacian. To state the obvious, the substantial advantage of the novel results of this article is that we have a means to actually recover phase information.

2.3 Reconstruction from Incomplete Gabor Coefficients

Given \(\Lambda \subseteq \mathfrak {a}{\mathbb Z}^2\) we formally define an operator \(\mathcal {R}_\Lambda :{\mathbb C}^\Lambda \rightarrow L^2({\mathbb R})\) by
$$\begin{aligned} \mathcal {R}_\Lambda (c) = \sum _{\lambda \in \Lambda } c_\lambda \pi (\lambda )\psi , \quad c=(c_\lambda )_{\lambda \in \Lambda } \end{aligned}$$
(2.9)
where \(\psi \) denotes the dual window of the Gabor frame \((\pi (\lambda )\varphi )_{\lambda \in \mathfrak {a}{\mathbb Z}^2}\). If \(\Lambda =\mathfrak {a}{\mathbb Z}^2\) and \(c=({\mathcal {G}}f(\lambda ))_{\lambda \in \Lambda }\) the sum in (2.9) converges unconditionally in \(L^2({\mathbb R})\) and exactly reconstructs f, that is, \(f=\mathcal {R}_\Lambda (c)\), cf. Section 4.3.
In practice, one only has a finite number of possibly noisy samples at one’s disposal. It is the purpose of the next result to control the reconstruction error in this inexact setting. As before, given \(T,S>0\) let us denote
$$ \Lambda = ([-T,T]\times [-S,S])\cap \mathfrak {a}{\mathbb Z}^2. $$
Given that only a limited number of samples is available (in particular, \(\Lambda \subseteq [-T,T]\times \mathbb {R}\)) one cannot expect f to be accurately reconstructed outside the interval \([-T,T]\). Moreover, given that there is no information about large frequencies (\(>S\)), it is to be expected that the reconstruction is accurate only if the function to be reconstructed is relatively smooth.
Before we state the result we introduce two quantities. Given \(f\in L^2({\mathbb R})\) we use
$$ \eta (f):=\sup _{x\in {\mathbb R}} \Vert f\cdot T_x\varphi ^{\frac{1}{2}}\Vert _{L^2} = 2^{\frac{1}{8}} \sup _{x\in {\mathbb R}} \left( \int _{\mathbb R}|f(t)|^2 e^{-\pi (t-x)^2}\,\text{ d }t\right) ^{1/2}, $$
to capture the maximal local \(L^2\)-energy of f. Given \(S>0\), let
$$\begin{aligned} \kappa _S(f) :=\sup _{x\in {\mathbb R}} \left( \int _{|\omega |>S} |{\mathcal {G}}f(x,\omega )|^2\,\text{ d }\omega \right) ^{1/2} =\sup _{x\in {\mathbb R}} \Vert {\mathcal {F}}\{f e^{-\pi (\cdot -x)^2}\}\Vert _{L^2([-S,S]^c)}. \end{aligned}$$
The quantity \(\kappa _S(f)\) may be understood as a measure of smoothness of f: If \(\kappa _S(f)\) is negligibly small, then all Gaussian localizations \(\{f e^{-\pi (\cdot -x)^2}, x\in {\mathbb R}\}\) are close to bandlimited (with bandwidth 2S), and vice versa.
The second main result reads as follows.
Theorem 2.11
Let \(T,S\in \mathfrak {a}{\mathbb N}\), let \(0<\tau <T\). For all \(f\in L^2({\mathbb R})\) and for all \(c\in {\mathbb C}^\Lambda \) it holds that
$$\begin{aligned} & \Vert f-\mathcal {R}_\Lambda (c)\Vert _{L^2(-\tau ,\tau )} \\ & \quad \le 6.82 \left( \Vert {\mathcal {G}}f- c\Vert _{\ell ^2(\Lambda )} +\sqrt{T+1} \cdot \kappa _S(f) + \sqrt{\tau +1} e^{-\frac{\pi }{\sqrt{2}} (T-\tau )}\cdot \eta (f) \right) . \end{aligned}$$
For the proof of Theorem 2.11 we refer to Sect. 5. There are three different terms contributing to the upper bound on the reconstruction error.
  • The first term is simply the noise on the provided samples.
  • Recall that \(\Lambda \subseteq [-T,T]\times [-S,S]\). This means that the reconstruction method only draws on Gabor samples located in the strip \({\mathbb R}\times [-S,S]\) while neglecting information from samples outside the strip. Morally, if there is only little contribution from samples outside the strip (that is, if \(\kappa _S(f)\) is small) then neglecting these samples does not affect the accuracy too much.
  • The decisive factor in the third term is the exponential \(e^{-\frac{\pi }{\sqrt{2}} (T-\tau )}\). Since we want to accurately reconstruct on the interval \((-\tau ,\tau )\) we require that \(T>\tau \); clearly, the reconstruction accuracy will benefit from being fed more information (i.e., increasing T). The estimate shows that the improvement on the corresponding error term is exponential with respect to the offset \(T-\tau \).

2.4 Reconstruction from Spectrogram Samples

The purpose of the main result is to clarify under which conditions and in which sense \(f_*\), the output of Algorithm 1, is a useful estimate for f.
Theorem 2.12
Let \(T,S\in \mathfrak {a}{\mathbb N}\) and let \(0<\tau <T\). Furthermore, suppose that the assumptions of Theorem 2.8 hold. Then \(f_*\), the output of Algorithm 1, satisfies
$$\begin{aligned} & \min _{\theta \in {\mathbb R}} \Vert f_*-e^{i\theta }f\Vert _{L^2(-\tau ,\tau )} \nonumber \\ & \quad \le 18 \Bigg [ 177 e^{0.84 r^2} \cdot \left( 1+20 r \sqrt{\frac{\Vert {\mathcal {G}}f\Vert _{\ell ^2(\Lambda )}^2}{\lambda _2(f,\Lambda ,r)}} \right) \cdot \root 4 \of {\varepsilon }\nonumber \\ & \quad + (2T+2) \kappa _S(f) + 2\sqrt{S}(\tau +1)e^{-\frac{\pi }{2}(T-\tau )} \Bigg ]. \end{aligned}$$
(2.10)
The result is essentially a direct consequence of Theorem 2.8 and Theorem 2.11 and the proof will be provided right away at this stage.
Proof
Let \(c\in {\mathbb C}^\Lambda \) be defined by
$$ c_\lambda :=\sqrt{{{\,\textrm{tr}\,}}(Y)} v_\lambda ,\quad \lambda \in \Lambda . $$
As per Theorem 2.8 there exists \(\theta _0\in {\mathbb R}\) such that
$$\begin{aligned} |c-e^{i\theta _0}({\mathcal {G}}f(\lambda ))_{\lambda \in \Lambda }| \le 177 e^{0.84 r^2} \cdot \left( 1+20 r \sqrt{\frac{\Vert {\mathcal {G}}f\Vert _{\ell ^2(\Lambda )}^2}{\lambda _2(f,\Lambda ,r)}} \right) \cdot \root 4 \of {\varepsilon }. \end{aligned}$$
(2.11)
We continue by estimating
$$\begin{aligned} \min _{\theta \in {\mathbb R}}\Vert f_*-e^{i\theta }f\Vert _{L^2(-\tau ,\tau )}&= \min _{\theta \in {\mathbb R}}\Vert \mathcal {R}_\Lambda (c)-e^{i\theta }f\Vert _{L^2(-\tau ,\tau )}\\&\le \Vert \mathcal {R}_\Lambda (c)-e^{i\theta _0}f\Vert _{L^2(-\tau ,\tau )}\\&= \Vert \mathcal {R}_\Lambda (e^{-i\theta _0}c)-f\Vert _{L^2(-\tau ,\tau )}. \end{aligned}$$
As per Theorem 2.11 (after weakening the bound, using \(6.82\le 18\), \(\sqrt{\tau +1}\le \tau +1\) and \(e^{-\frac{\pi }{\sqrt{2}}(T-\tau )}\le e^{-\frac{\pi }{2}(T-\tau )}\)), the right hand side is bounded from above by
$$\begin{aligned} 18 \left( \Vert {\mathcal {G}}f- e^{-i\theta _0}c\Vert _{\ell ^2(\Lambda )} +\sqrt{T+1} \cdot \kappa _S(f) + (\tau +1) e^{-\frac{\pi }{2} (T-\tau )}\cdot \eta (f) \right) . \end{aligned}$$
We claim that
$$\begin{aligned} \eta (f) \le 2 \sqrt{S}+ \kappa _S(f). \end{aligned}$$
(2.12)
With this and with (2.11) we then get that
$$\begin{aligned} & \min _{\theta \in {\mathbb R}} \Vert f_*-e^{i\theta }f\Vert _{L^2(-\tau ,\tau )} \le 18 \Bigg [ 177 e^{0.84 r^2} \cdot \left( 1+20 r \sqrt{\frac{\Vert {\mathcal {G}}f\Vert _{\ell ^2(\Lambda )}^2}{\lambda _2(f,\Lambda ,r)}} \right) \cdot \root 4 \of {\varepsilon }\\ & \quad + (\sqrt{T+1} +(\tau +1)e^{-\frac{\pi }{2}(T-\tau )})\cdot \kappa _S(f) + 2\sqrt{S}(\tau +1)e^{-\frac{\pi }{2}(T-\tau )} \Bigg ] \end{aligned}$$
Notice that
$$ \sqrt{T+1}+(\tau +1)e^{-\frac{\pi }{2} (T-\tau )}\le 2T+2, $$
which implies the desired estimate (2.10).
To establish (2.12) take \(v\in {\mathbb R}\) arbitrary. Since \({\mathcal {G}}f(v,\cdot )={\mathcal {F}}\{f e^{-\pi (\cdot -v)^2}\}\) it follows from Plancherel’s formula that
$$\begin{aligned} \Vert f e^{-\pi (\cdot -v)^2}\Vert _{L^2}^2 = \Vert {\mathcal {G}}f(v,\cdot )\Vert _{L^2}^2 \le 2S + \kappa _S(f)^2, \end{aligned}$$
(2.13)
where we used that \(\Vert {\mathcal {G}}f\Vert _{L^\infty }\le 1\) (as \(\Vert {\mathcal {S}}f\Vert _{L^\infty }\le 1\) by assumption). Since
$$ \int _{\mathbb R}e^{-2\pi (t-u)^2} e^{-2\pi u^2}\,\text{ d }u = \frac{1}{2} e^{-\pi t^2}, $$
we then get for any \(x\in {\mathbb R}\) (by applying (2.13) to \(v=x+u\)) that
$$\begin{aligned} \int _{\mathbb R}|f(t)|^2 e^{-\pi (t-x)^2}\,\text{ d }t&= 2 \int _{\mathbb R}|f(t)|^2 \left( \int _{\mathbb R}e^{-2\pi (t-x-u)^2-2\pi u^2}\,\text{ d }u \right) \,\text{ d }t\\&= 2 \int _{\mathbb R}\left( \int _{\mathbb R}|f(t)|^2 e^{-2\pi (t-x-u)^2}\,\text{ d }t \right) e^{-2\pi u^2}\,\text{ d }u\\&\le (2S+\kappa _S(f)^2) \cdot 2 \int _{\mathbb R}e^{-2\pi u^2}\,\text{ d }u\\&= \sqrt{2} (2S+\kappa _S(f)^2) \end{aligned}$$
As x was arbitrary, taking square roots implies (2.12), and we are done.

3 Numerical Simulations

The purpose of the present section is twofold: first, to demonstrate empirically the aptitude of the proposed reconstruction algorithm; second, to develop some sense for the choice of the parameters of the scheme.
We consider the reconstruction of certain random signals. The precise random model is the following: Given \(a>0\) and \(\Xi \subseteq a{\mathbb Z}^2\) finite, we pick a vector \((c_\lambda )_{\lambda \in \Xi } \in {\mathbb C}^\Xi \) at random, with each component chosen independently and according to the uniform distribution on the complex unit disk \(\mathbb {D}\subseteq {\mathbb C}\). The resulting function is then given by
$$ f = \sum _{\lambda \in \Xi } c_\lambda \cdot \pi (\lambda )\varphi . $$
The algorithm takes noisy spectrogram measurements as input. Given a noise level \(\nu \ge 0\), the noisy sampled data fed into the algorithm are given by
$$ \sigma _\lambda = {\mathcal {S}}f(\lambda ) + \eta _\lambda ,\quad \lambda \in \Omega , $$
where \((\eta _\lambda )_{\lambda \in \Omega }\) are independent and identically distributed random variables, uniformly distributed on the interval \([-\nu ,\nu ]\). The locations of the coefficients to be reconstructed coincide with the positions of the generating coefficients (Fig. 4).
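For concreteness, a small Python sketch of this data model (with toy sizes in place of those used for the figures) could look as follows; the closed-form expression used for \({\mathcal {G}}[\pi (\lambda )\varphi ]\) is formula (4.2) of Sect. 4.2.

```python
# Sketch of the random data model: f = sum_{lam in Xi} c_lam pi(lam) phi with
# coefficients uniform on the unit disk, and noisy spectrogram samples on Omega.
# Gf is evaluated via the closed form (4.2) for G[pi(lam) phi].
import numpy as np

rng = np.random.default_rng(3)
a = 1 / np.sqrt(2)
Xi = [(a * i, a * j) for i in range(-2, 3) for j in range(-2, 3)]
c = np.sqrt(rng.uniform(0, 1, len(Xi))) * np.exp(2j * np.pi * rng.uniform(0, 1, len(Xi)))

def gabor(z):
    """G f(z) for f = sum c_lam pi(lam) phi, using (4.2)."""
    x, w = z
    return sum(cl * np.exp(-1j * np.pi * (x + aa) * (w - bb)
                           - np.pi / 2 * ((x - aa) ** 2 + (w - bb) ** 2))
               for (aa, bb), cl in zip(Xi, c))

nu = 1e-3                                   # noise level
Omega = [(0.3 * i, 0.3 * j) for i in range(-10, 11) for j in range(-10, 11)]
sigma = {om: abs(gabor(om)) ** 2 + rng.uniform(-nu, nu) for om in Omega}
```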
The main difference between the two experiments (Fig. 5) is the interval on which f is reconstructed. On the left, reconstruction is fairly accurate on \([-3,3]\), while on the right hand side, we obtain a good approximation for the larger interval \([-4.5, 4.5]\). This comes at the price of having to solve a problem of higher complexity. The bottleneck of the reconstruction scheme lies in identifying \(A_*\), the parametrizing matrix of the (approximate) entire extension of the spectrogram \({\mathcal {S}}f\) (step 1 of Algorithm 1). In this step, one needs to solve an SDP where the variable is a matrix of dimension \(|\Gamma | \times |\Gamma |\). Hence, a moderate enlargement of the set \(\Lambda \) results in substantial extra computational expenses. Comparing the two examples, we observe roughly a fourfold increase in computation time.
Remark 3.1
(Practicability) The numerical results demonstrate that the reconstruction algorithm performs well in a practically useful parameter range, as compared to the range of parameters for which the error estimates are theoretically guaranteed (i.e., well-calibrated configurations). We have to stress, though, that the results are very much the product of a trial-and-error process regarding the selection of suitable parameters. For several instances the algorithm would terminate early because one of the convex problems handed to the solver was found to be infeasible.
We are aware that from a practical viewpoint there is definitely room for improvement: there is a significant gap between the range of practically viable parameters and the well-calibrated parameter configurations. A more sophisticated approach to choosing the parameters involved (other than trial and error) would be of great practical relevance. Moreover, it would be highly desirable to have a variant of the scheme which is robust with respect to the aforementioned infeasibility issues. For example, a viable pathway could be to incorporate the constraints into the objective function instead of using hard constraints. While this would remove the issue of infeasible programs appearing in the reconstruction procedure, it would also require separate numerical analysis tailored to the reformulated algorithm. As the focal point of this paper lies on the mathematical analysis of the proposed scheme we do not expand further on this topic.

4 Terminology and Prerequisites

4.1 General Notation

Throughout we will understand \({\mathbb R}^d\) as a subspace of \({\mathbb C}^d\). Given a vector \(p\in {\mathbb C}^d\) its Euclidean length is denoted by |p| and we use the notation \(p^2=p\cdot p = \sum _{k=1}^d p_k^2\). Real and imaginary parts are taken componentwise, i.e.,
$$ {\text {Re}}(p)=({\text {Re}}(p_1),\ldots ,{\text {Re}}(p_d))^T,\quad {\text {Im}}(p)=({\text {Im}}(p_1),\ldots ,{\text {Im}}(p_d))^T. $$
Given a function \(F:{\mathbb R}^d\rightarrow {\mathbb C}\), we use the tensor notation
$${\mathcal {T}}_u[F](p) = F(p+u) \overline{F(p)},\quad p,u\in {\mathbb R}^d.$$
Given a matrix A we denote
$$ \Vert A\Vert _{\max } = \max _{k,\ell } |A_{k,\ell }|. $$
If \(\Lambda \) is a finite index set with \(N=|\Lambda |\) its cardinality, we can identify \({\mathbb C}^{\Lambda \times \Lambda }\) with \({\mathbb C}^{N\times N}\), the vector space of \(N\times N\) square matrices. Linear algebra concepts such as matrix products, matrix–vector multiplication, trace, positive (semi)-definiteness, eigenvalues and eigenvectors are well-defined in \({\mathbb C}^{\Lambda \times \Lambda }\) and will be denoted by the common notation. Moreover, we introduce the Frobenius inner product on \({\mathbb C}^{\Lambda \times \Lambda }\) by
$$ \langle X,Y\rangle _F =\sum _{\lambda ,\lambda '\in \Lambda } X_{\lambda ,\lambda '} \overline{Y_{\lambda ,\lambda '}} = {{\,\textrm{tr}\,}}(Y^H X). $$
Recall that \(\langle X, Y \rangle _F\) is real-valued (resp. non-negative) if X, Y are hermitian (resp. positive semi-definite). Given vectors \(a,b\in {\mathbb C}^\Lambda \), we use tensor notation \(a\otimes b\) to denote the outer vector product, that is,
$$ (a \otimes b)_{\lambda ,{\lambda '}} = a_\lambda b_{\lambda '}, \quad \lambda ,\lambda '\in \Lambda . $$
The following simple lemma will be handy:
Lemma 4.1
Let \(k,\ell \in \{1,\ldots ,N\}\) and let \(e_k, e_\ell \) denote the k-th and \(\ell \)-th canonical basis vectors in \({\mathbb C}^N\). Then, for every \(X\in {\mathbb C}^{N\times N}\) hermitian,
$$ {\text {Re}}(X_{k,\ell }) = \langle X, \frac{1}{2} (e_k\otimes e_\ell + e_\ell \otimes e_k)\rangle _F, $$
and
$${\text {Im}}(X_{k,\ell }) = \langle X, \frac{1}{2i}(e_\ell \otimes e_k - e_k\otimes e_\ell )\rangle _F. $$
Proof
Using that X is hermitian implies that
$$\begin{aligned} {\text {Re}}(X_{k,\ell })= & \frac{1}{2} (X_{k,\ell } + \overline{X_{k,\ell }}) = \frac{1}{2} ( X_{k,\ell } + X_{\ell ,k}) = \frac{1}{2} (e_k^T X e_\ell + e_\ell ^T X e_k) \\= & \frac{1}{2}\bigl ( {{\,\textrm{tr}\,}}([e_\ell \otimes e_k] X) + {{\,\textrm{tr}\,}}([e_k \otimes e_\ell ]X)\bigr ) \end{aligned}$$
The second identity is similar.
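A quick numerical sanity check of Lemma 4.1 (plain numpy, random Hermitian test matrix):

```python
# Numerical check of Lemma 4.1: Re and Im of a Hermitian matrix entry as
# Frobenius inner products with explicit Hermitian "extractor" matrices.
import numpy as np

N, k, l = 5, 1, 3
rng = np.random.default_rng(4)
B = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
X = (B + B.conj().T) / 2                       # a Hermitian test matrix
e = np.eye(N)
frob = lambda A, C: np.trace(C.conj().T @ A)   # <A, C>_F = tr(C^H A)

Er = 0.5 * (np.outer(e[k], e[l]) + np.outer(e[l], e[k]))
Ei = (np.outer(e[l], e[k]) - np.outer(e[k], e[l])) / (2j)
print(np.isclose(frob(X, Er), X[k, l].real), np.isclose(frob(X, Ei), X[k, l].imag))
```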
For \(F\in \mathcal {O}({\mathbb C})\) an entire function we will employ the notation \(F^*\) to denote the function defined by \(F^*(z)= \overline{F(\bar{z})}\). Note that \(F^*\in \mathcal {O}({\mathbb C})\). Given \(G\in \mathcal {O}({\mathbb C}^d)\), we define the maximum modulus function by
$$ M_G(r):= \sup _{\begin{array}{c} z\in {\mathbb C}^d\\ |{\text {Im}}z|=r \end{array}} |G(z)|, \quad r\ge 0. $$
The Fourier transform of \(f\in L^1({\mathbb R}^d)\) is defined via the integral
$$ {\mathcal {F}}f(\omega ) = \hat{f}(\omega ) = \int \limits _{{\mathbb R}^d} f(t) e^{-2\pi i \omega \cdot t}\,\text{ d }t, $$
and extended to \(L^2({\mathbb R}^d)\) in the usual way. We denote the shift and the modulation operator by T and M, respectively. That is
$$ T_x f(t)= f(t-x), \quad M_\omega f(t) = e^{2\pi i \omega \cdot t}f(t), \quad f\in L^2({\mathbb R}^d). $$
Time-frequency shifts are denoted by \(\pi (z)f=M_\omega T_x f\), where \(z=(x,\omega )\).
The Jacobi \(\vartheta _3\) function which is defined by
$$ \vartheta _3(z,q) = \sum _{k\in {\mathbb Z}} q^{k^2}e^{2ikz}, \quad q\in (0,1), \, z\in {\mathbb C}$$
will be useful on a number of occasions. All values of this function used here have been computed using the EllipticTheta[3,z,q] function on www.wolframalpha.com, in particular, we will use
$$ \vartheta _3(0,e^{-\pi }) =1.08643...\quad ,\quad \vartheta _3(0,e^{-\pi /2})=1.41949... $$
and \(\vartheta _3(0,e^{-\pi /4})^2=4.00005...\) One application is estimating the sum of equidistant shifts of Gaussians.
Lemma 4.2
Let \(b>0\). For all \(t\in {\mathbb R}\) it holds that
$$ \sum _{k\in {\mathbb Z}} e^{-\frac{\pi }{b}(t-k)^2} \le \sqrt{b} \vartheta _3(0,e^{-b\pi }). $$
Proof
We denote \(g(x) = e^{-\frac{\pi }{b} x^2}\) and note that \(\hat{g}(\omega ) = \sqrt{b} e^{-b\pi \omega ^2}\). Applying Poisson summation formula allows us to rewrite
$$ \sum _{k\in {\mathbb Z}} e^{-\frac{\pi }{b}(t-k)^2} = \sum _{k\in {\mathbb Z}} T_t g(k) = \sum _{k\in {\mathbb Z}} M_{-t} \widehat{g}(k) = \sqrt{b} \sum _{k\in {\mathbb Z}} e^{-2\pi i tk } e^{-b\pi k^2}. $$
The triangle inequality then gives
$$ \sum _{k\in {\mathbb Z}} e^{-\frac{\pi }{b}(t-k)^2} \le \sqrt{b}\sum _{k\in {\mathbb Z}} e^{-b\pi k^2} = \sqrt{b} \vartheta _3(0, e^{-b\pi }), $$
as claimed.
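A quick numerical illustration of the bound (plain numpy), comparing the periodized Gaussian with the theta-function value:

```python
# Numerical check of Lemma 4.2: sup_t sum_k exp(-pi/b (t-k)^2) <= sqrt(b)*theta_3(0, e^{-b pi}).
import numpy as np

def theta3(q, K=50):
    k = np.arange(-K, K + 1)
    return float(np.sum(q ** (k ** 2)))        # theta_3(0, q) = sum_k q^{k^2}

b, K = 0.5, 50
t = np.linspace(0.0, 1.0, 201)[:, None]        # one period suffices
k = np.arange(-K, K + 1)[None, :]
lhs = np.max(np.sum(np.exp(-np.pi / b * (t - k) ** 2), axis=1))
rhs = np.sqrt(b) * theta3(np.exp(-b * np.pi))
print(lhs <= rhs + 1e-12, lhs, rhs)            # equality is attained at t = 0
```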

4.2 Time Frequency Analysis

The central object of this paper is the following.
Definition 4.3
(Short-time Fourier transform) The short-time Fourier transform (STFT) of \(f\in L^2({\mathbb R}^d)\) with window function \(g\in L^2({\mathbb R}^d)\) is defined by
$$ {\mathcal {V}}_g f(x,\omega ) = \langle f, M_\omega T_x g\rangle _{L^2({\mathbb R}^d)}. $$
Moreover, if
$$ g=\varphi := 2^{d/4}e^{-\pi \cdot ^2}, $$
the Gaussian, we use the notation \({\mathcal {G}}f = {\mathcal {V}}_\varphi f\) and call \({\mathcal {G}}f\) the Gabor transform. The squared modulus of the Gabor transform is called the spectrogram and will be denoted by \({\mathcal {S}}f\), that is, \({\mathcal {S}}f(x,\omega )=|{\mathcal {G}}f(x,\omega )|^2\).
We devote the remainder of this section to collect a couple of properties and statements involving the STFT and refer to Gröchenig’s textbook [22] for more details.
Note that, from Cauchy-Schwarz, we get \(\Vert {\mathcal {S}}f\Vert _{L^\infty }\le \Vert f\Vert _{L^2}^2\). Given \(\lambda =(a,b)\),
$$ {\mathcal {G}}[\pi (\lambda )f](x,\omega ) =e^{-2\pi i a (\omega -b)} \cdot {\mathcal {G}}f (x-a,\omega -b), $$
in particular, \(|{\mathcal {G}}[\pi (\lambda )f](p)| = |{\mathcal {G}}f(p-\lambda )|\). The Gabor transform of the Gaussian \(\varphi \) is given by
$$\begin{aligned} {\mathcal {G}}\varphi (x,\omega ) = e^{-\frac{\pi }{2} (x^2+\omega ^2+2ix\omega )} \end{aligned}$$
(4.1)
so that, for \(z=(x,\omega )\),
$$\begin{aligned} {\mathcal {G}}[\pi (\lambda )\varphi ](x,\omega )= e^{-\pi i(x+a)(\omega -b)} e^{-\frac{\pi }{2} |z-\lambda |^2}. \end{aligned}$$
(4.2)
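Formula (4.1) (and hence (4.2)) is easy to confirm numerically; a short quadrature check, using nothing beyond the definition of \({\mathcal {G}}\), reads:

```python
# Numerical check of (4.1): G(phi)(x, w) = exp(-pi/2 (x^2 + w^2 + 2 i x w)),
# by direct discretization of the defining integral (phi = 2^{1/4} e^{-pi t^2}).
import numpy as np

def gabor_of_gaussian(x, w, T=8.0, n=8001):
    t = np.linspace(-T, T, n)
    integrand = 2 ** 0.5 * np.exp(-np.pi * t ** 2 - np.pi * (t - x) ** 2
                                  - 2j * np.pi * w * t)
    return np.sum(integrand) * (t[1] - t[0])   # simple Riemann sum

x, w = 0.7, -1.3
closed_form = np.exp(-np.pi / 2 * (x ** 2 + w ** 2 + 2j * x * w))
print(np.isclose(gabor_of_gaussian(x, w), closed_form))   # True
```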
Later on, we will require the formula
$$\begin{aligned} {\mathcal {F}}\left( {\mathcal {V}}_g f \overline{{\mathcal {V}}_\gamma h}\right) (s,t) = {\mathcal {V}}_h f(-t,s) \overline{{\mathcal {V}}_\gamma g(-t,s)}, \quad f,g,h,\gamma \in L^2({\mathbb R}^d). \end{aligned}$$
(4.3)
which may be found in [31] and in [23].
The Bargmann transform denoted by \(\mathcal {B}\) is defined as
$$ \mathcal {B}f(z) = 2^{1/4} \int _{\mathbb R}f(t) e^{2\pi t z-\pi t^2- \frac{\pi }{2} z^2}\,\text{ d }t. $$
It is an isometry from \(L^2({\mathbb R})\) to the Fock space of entire functions,
$$ \mathcal {F}^2=\{f\,:{\mathbb C}\rightarrow {\mathbb C} \text{ entire } \text{ s.t. } \Vert f\Vert _{{\mathcal {F}}^2}^2:=\int _{{\mathbb C}}|f(z)|^2e^{-\pi |z|^2}\,\text{ d }z<+\infty \}. $$
Bargmann and Gabor transforms are intimately related by virtue of the identity
$$\begin{aligned} {\mathcal {G}}f(x,-y) = e^{\pi i xy} \mathcal {B}f(z) e^{-\frac{\pi }{2} |z|^2}, \quad z=x+iy\in {\mathbb C}. \end{aligned}$$
(4.4)
The Hermite functions are defined by
$$ h_k(t) = c_k (-1)^k e^{\pi t^2} \left( \frac{\text{ d }}{\text{ d }t} \right) ^k \left( e^{-2\pi t^2}\right) , \quad k\in {\mathbb N}, $$
where \(c_k>0\) is chosen in such a way that \(\Vert h_k\Vert _{L^2}=1\). Then, \((h_k)_{k\in {\mathbb N}}\) forms an orthonormal basis of \(L^2({\mathbb R})\). The Bargmann transform maps the Hermite functions to monomials of the corresponding degree, that is,
$$\begin{aligned} \mathcal {B}h_k(z) = \sqrt{\frac{\pi ^k}{k!}} z^k, \quad k\in {\mathbb N}. \end{aligned}$$
(4.5)
Equation (4.3) implies that the Fourier transform of the spectrogram has Gaussian decay. The following lemma reveals that this is still the case after applying a Gaussian cut-off to the spectrogram.
Lemma 4.4
Let \(f\in L^2({\mathbb R})\), then it holds for all \(\tau \in {\mathbb R}^2\) that
$$\begin{aligned} \big | {\mathcal {F}}\big ( {\mathcal {S}}f \cdot e^{-\frac{\pi }{8}|\cdot -\tau |^2}\big )(\xi )\big | \le 8 \Vert {\mathcal {S}}f\Vert _{L^\infty ({\mathbb R}^2)} \cdot e^{-\frac{38\pi }{81} \xi ^2}, \quad \xi \in {\mathbb R}^2. \end{aligned}$$
Proof
Since we can always consider \(\pi (-\tau )f\) instead of f we may assume w.l.o.g. that \(\tau =0\). First we write f in the Hermite basis \(f=\sum _{k\ge 0}\langle f,h_k\rangle h_k\) so that
$$\begin{aligned} {\mathcal {S}}f(z)= & \left| \mathcal {B}f(z)e^{-\frac{\pi }{2}|z|^2}\right| ^2=\left| \sum _{k\ge 0}\langle f,h_k\rangle \mathcal {B}h_k(z) e^{-\frac{\pi }{2}|z|^2}\right| ^2\\= & \left| \sum _{k\ge 0}\langle f,h_k\rangle \sqrt{\frac{\pi ^k}{k!}}z^ke^{-\frac{\pi }{2}|z|^2}\right| ^2 \end{aligned}$$
with (4.5). With \(q=\frac{2\sqrt{2}}{3}<1\), let us denote \( \gamma = \sum _{k\ge 0} q^k \langle f,h_k\rangle h_k \) and rewrite
$$\begin{aligned} {\mathcal {S}}f(z) e^{-\frac{\pi }{8} |z|^2}&= \left| \sum _{k=0}^\infty \langle f,h_k\rangle \sqrt{\frac{\pi ^k}{k!}} z^k e^{-\frac{9\pi }{16} |z|^2} \right| ^2\nonumber \\&= \left| \sum _{k=0}^\infty \langle f,h_k\rangle \sqrt{\frac{\pi ^k}{k!}} z^k e^{-\frac{\pi }{2} |z/q|^2} \right| ^2\nonumber \\&= \left| \sum _{k=0}^\infty q^k \langle f,h_k\rangle \sqrt{\frac{\pi ^k}{k!}} (z/q)^k e^{-\frac{\pi }{2} |z/q|^2} \right| ^2 = {\mathcal {S}}\gamma (z/q). \end{aligned}$$
(4.6)
Thus, we get with \(\xi '=(-\xi _2,\xi _1)\) that the Fourier transform of the product is equal to
$$\begin{aligned} {\mathcal {F}}\big ( {\mathcal {S}}f\cdot e^{-\frac{\pi }{8} |\cdot |^2}\big ) (\xi ) = q^2 \widehat{{\mathcal {S}}\gamma }(q\xi ) = q^2 \cdot {\mathcal {V}}_\gamma \gamma (q\xi ') \cdot {\mathcal {V}}_\varphi \varphi (q\xi '), \end{aligned}$$
where we made use of (4.3) to derive the last equality. Since \({\mathcal {V}}_\varphi \varphi \) is a (complex) Gaussian, see (4.1), it follows that
$$\begin{aligned} |{\mathcal {F}}\big ( {\mathcal {S}}f\cdot e^{-\frac{\pi }{8} |\cdot |^2}\big ) (\xi )| \le |{\mathcal {V}}_\gamma \gamma (q\xi ')| \cdot \frac{8}{9} e^{-\frac{4\pi }{9} |\xi |^2}. \end{aligned}$$
(4.7)
It remains to establish a pointwise bound for \({\mathcal {V}}_\gamma \gamma \). Note that it follows from (4.6) (replace z by qz and take roots on both sides) that
$$ |{\mathcal {G}}\gamma (z)| = |{\mathcal {G}}f(qz)| e^{-\frac{\pi }{18}|z|^2}. $$
Since the Gabor transform is unitary we can use this to estimate for \(p\in {\mathbb R}^2\)
$$\begin{aligned} & |{\mathcal {V}}_\gamma \gamma (p)| = |\langle \gamma , \pi (p)\gamma \rangle | = |\langle {\mathcal {G}}\gamma , {\mathcal {G}}[\pi (p)\gamma ]\rangle _{L^2({\mathbb R}^2)}| \\ & \quad \le \int _{{\mathbb R}^2} |{\mathcal {G}}\gamma (z)| \cdot |{\mathcal {G}}\gamma (z-p)|\,\text{ d }z \le \Vert {\mathcal {S}}f\Vert _{L^\infty } \int _{{\mathbb R}^2} e^{-\frac{\pi }{18} (|z|^2+|z-p|^2)}\,\text{ d }z. \end{aligned}$$
Since the integral expression
$$ \rho (p):= \int _{{\mathbb R}^2} e^{-\frac{\pi }{18} (|z|^2+|z-p|^2)}\,\text{ d }z $$
only depends on |p| we can assume that \(p=(|p|,0)^T\) and compute
$$ \rho (p) = \left( \int _{\mathbb R}e^{-\frac{\pi }{18}(x^2+(x-|p|)^2)} \text{ d }x\right) \cdot \left( \int _{\mathbb R}e^{-\frac{\pi }{9} \omega ^2}\,\text{ d }\omega \right) = 9e^{-\frac{\pi }{36}|p|^2}. $$
In particular, we get that
$$ |{\mathcal {V}}_\gamma \gamma (q\xi ') | \le 9 \Vert {\mathcal {S}}f\Vert _{L^\infty } e^{-\frac{2\pi }{81}|\xi |^2}, $$
which together with (4.7) implies the claim.
We will need the following localization property of Gabor transforms:
Lemma 4.5
It holds for all \(r>0\), \(z\in {\mathbb R}^2\) and \(f\in L^2({\mathbb R})\) that
$$ |{\mathcal {G}}f(z)| \le \frac{\sqrt{e^{\pi r^2}-1}}{\pi r^2} \Vert {\mathcal {G}}f\Vert _{L^2(B_r(z))}. $$
Proof
We may assume w.l.o.g. that \(z=0\). Using the mean value property for the Bargmann transform we get that
$$\begin{aligned} {\mathcal {G}}f(0) = \mathcal {B}f(0)&= \frac{1}{\pi r^2} \int \limits _{\{|z|<r\}} \mathcal {B}f(z) \,\text{ d }z \\&= \frac{1}{\pi r^2} \iint \limits _{B_r(0)} {\mathcal {G}}f(x,-y) e^{\frac{\pi }{2} (x^2+y^2)} e^{-\pi ixy} \,\text{ d }x\text{ d }y. \end{aligned}$$
Cauchy-Schwarz implies that
$$ |{\mathcal {G}}f(0)| \le \frac{1}{\pi r^2} \Vert {\mathcal {G}}f\Vert _{L^2(B_r(0))} \cdot \sqrt{e^{\pi r^2}-1}, $$
and we are done.

4.3 Gaussian Gabor Frames

This paragraph is concerned with discretization properties of the STFT. We refer to the textbooks of Gröchenig [22] and Christensen [10] for a comprehensive account of this topic, including the collection of statements below.
A family of vectors \((f_i)_{i\in I}\subseteq \mathcal {H}\) with \(\mathcal {H}\) a Hilbert space is called a frame if there exist constants \(0<A\le B\) such that
$$ A \Vert f\Vert _{\mathcal {H}}^2 \le \sum _{i\in I} |\langle f,f_i\rangle _{\mathcal {H}}|^2 \le B\Vert f\Vert _{\mathcal {H}}^2, \quad f\in \mathcal {H}. $$
The maximal A and the minimal B satisfying the above condition are called the frame constants. Given a frame \((f_i)_{i\in I}\) the frame operator defined by
$$ S: \mathcal {H}\rightarrow \mathcal {H}, \, Sf=\sum _{i\in I} \langle f,f_i\rangle _{\mathcal {H}} f_i $$
is well-defined, bounded, invertible, self-adjoint and positive. Moreover, the family \((S^{-1}f_i)_{i\in I}\) forms a frame with frame constants \(B^{-1}, A^{-1}\). The frame \((S^{-1}f_i)_{i\in I}\) is called the dual frame.
We consider function systems of the form \(\Phi =(\pi (\lambda )\varphi )_{\lambda \in \Lambda _{a,b}}\), with \(\varphi \) the Gaussian, \(\varphi (x)=2^{1/4}e^{-\pi x^2}\), \(\Lambda _{a,b}=a{\mathbb Z}\times b{\mathbb Z}\) a lattice, \(a,b>0\) discretization parameters. In this case, \(\Phi \) forms a frame for \(L^2({\mathbb R})\) if and only if \(ab<1\). By the definition of the Gabor transform the frame inequality can be written in the following way
$$ A\Vert f\Vert _{L^2({\mathbb R})}^2 \le \sum _{\lambda \in \Lambda } |{\mathcal {G}}f(\lambda )|^2 \le B\Vert f\Vert _{L^2({\mathbb R})}^2, \quad f\in L^2({\mathbb R}). $$
The canonical dual of the frame \(\Phi \) has Gabor structure as well and is given by \((\pi (\lambda )\gamma )_{\lambda \in \Lambda _{a,b}}\) with \(\gamma = S^{-1}\varphi \) and the reconstruction identity
$$\begin{aligned} f = \sum _{\lambda \in \Lambda } \langle f, \pi (\lambda ) \varphi \rangle _{L^2({\mathbb R})} \pi (\lambda ) \gamma = \sum _{\lambda \in \Lambda } {\mathcal {G}}f(\lambda ) \pi (\lambda ) \gamma , \quad f\in L^2({\mathbb R}), \end{aligned}$$
(4.8)
holds true, where the sum converges unconditionally in \(L^2({\mathbb R})\).
For the case of integer redundancy – which means that the parameters of the underlying lattice \(a{\mathbb Z}\times b{\mathbb Z}\) satisfy that \((ab)^{-1} \in \{2,3,4,\ldots \}\) – Janssen computed several Gabor frame related quantities explicitly [32, section 6]. In the sequel, we will only consider the case
$$ a=b=\mathfrak {a}=\dfrac{1}{\sqrt{2}},\qquad \Lambda =\Lambda _{\mathfrak {a}}=\dfrac{1}{\sqrt{2}}{\mathbb Z}^2 $$
which falls in this setting. In principle, other configurations could also be used in our approach. However, the above is in some sense the canonical choice, as \(\Lambda _\mathfrak {a}\) is the sparsest possible square lattice within the setting of Janssen’s results.
For the present situation, Janssen’s results imply the following. The lower and upper frame bound of \((\pi (\lambda )\varphi )_{\lambda \in \Lambda }\) are given by
$$\begin{aligned} A&= 2 \left( \sum _{k\in {\mathbb Z}}(-1)^k e^{-\pi k^2} \right) ^2 = 2 \vartheta _3(\frac{\pi }{2},e^{-\pi })^2 = 1.66\ldots \\ B&= 2 \left( \sum _{k\in {\mathbb Z}} e^{-\pi k^2} \right) ^2 = 2 \vartheta _3(0,e^{-\pi })^2 = 2.36\ldots \end{aligned}$$
respectively. The dual window is given by the formula
$$\begin{aligned} \psi (t) = \frac{1}{2\vartheta _3(\sqrt{2} \pi t, e^{-\pi })} \sum _{k\in {\mathbb Z}} c_k \varphi (t-\sqrt{2} k) \end{aligned}$$
(4.9)
with the coefficients given by
$$\begin{aligned} c_k = \frac{\sum _{m=0}^\infty (-1)^{k+m} e^{-\pi (m+\frac{1}{2})(2|k|+m+\frac{1}{2})}}{\sum _{n=-\infty }^\infty (-1)^n (n+\frac{1}{2})e^{-\pi (n+\frac{1}{2})^2}}. \end{aligned}$$
(4.10)
We will require several explicit bounds on the dual window function.
Lemma 4.6
The canonical dual \(\psi \) satisfies the bounds
$$ \Vert \psi \Vert _{L^2({\mathbb R})} \le 0.6 \quad \text {and}\quad |\psi (t)| \le e^{-\frac{\pi |t|}{\sqrt{2}}}, \, t\in {\mathbb R}$$
while \(\Vert {\mathcal {G}}\psi \Vert _{L^1({\mathbb R}^2)}\le 1.23\).
Remark 4.7
Recall that \(\Vert {\mathcal {G}}\psi \Vert _{L^1({\mathbb R}^2)}\) is the norm of \(\psi \) in the so-called Feichtinger algebra \(\mathcal {S}_0\) which is also the modulation space \(M_1\). The fact that \(\Vert {\mathcal {G}}\psi \Vert _{L^1({\mathbb R}^2)}<+\infty \) follows from [24, Theorem 1.2]. We here compute an explicit bound.
Proof
Since the operator norm of the inverse of the frame operator \(S^{-1}\) is bounded from above by the reciprocal of the lower frame bound, we get
$$ \Vert \psi \Vert _{L^2({\mathbb R})} = \Vert S^{-1}\varphi \Vert _{L^2({\mathbb R})}\le \frac{1}{A} \Vert \varphi \Vert _{L^2({\mathbb R})} = \frac{1}{A}<0.6. $$
For the pointwise bound we first estimate for \(x\in {\mathbb R}\)
$$\begin{aligned} & \vartheta _3(x,e^{-\pi }) = 1 + 2 \sum _{k=1}^\infty e^{-\pi k^2} \cos (2kx) \ge 1 - 2\sum _{k=1}^\infty e^{-\pi k^2}\nonumber \\ & \quad =2 - \sum _{k\in {\mathbb Z}} e^{-\pi k^2} = 2 - \vartheta _3(0,e^{-\pi }) = 0.913\ldots \end{aligned}$$
With this we get that the first factor in the formula for \(\psi \) is bounded by
$$ \frac{1}{2\vartheta _3\big (\sqrt{2} \pi t,e^{-\pi } \big )} < 0.55 $$
Next, we deal with the sum in the denominator of \(c_k\). Note that if \((a_n)_{n\in {\mathbb N}}\) is a non-increasing sequence of positive numbers, then \(\sum _{n=0}^\infty (-1)^n a_n \ge a_0- a_1\). Since \(n\mapsto (n+\frac{1}{2})e^{-\pi (n+\frac{1}{2})^2}\) is monotonically decreasing for \(n\ge 0\) we get a lower bound for the denominator in (4.10):
$$\begin{aligned} \kappa :=\sum \limits _{n=-\infty }^\infty (-1)^n (n+\frac{1}{2}) e^{-\pi (n+\frac{1}{2})^2}&= 2 \sum \limits _{n=0}^\infty (-1)^n (n+\frac{1}{2}) e^{-\pi (n+\frac{1}{2})^2}\\&\ge 2 \left( \frac{e^{-\frac{\pi }{4}}}{2} - \frac{3 e^{-\frac{9\pi }{4}}}{2} \right) >0.45. \end{aligned}$$
Let us denote the numerator of \(c_k\) by
$$ \gamma _k:= (-1)^k \sum _{m=0}^\infty (-1)^m e^{-\pi (m+\frac{1}{2})(2|k|+m+\frac{1}{2})}. $$
A similar argument as above implies that
$$\begin{aligned} 0< (-1)^k \gamma _k = \sum _{m=0}^\infty (-1)^m e^{-\pi (m+\frac{1}{2})(2|k|+m+\frac{1}{2})} \le e^{-\pi (|k|+\frac{1}{4})}. \end{aligned}$$
With this we get that
$$\begin{aligned} |\psi (t)| \le 0.55 \cdot \sum _{k\in {\mathbb Z}} |c_k| \varphi (t-\sqrt{2} k) \le \frac{0.55}{0.45} e^{-\frac{\pi }{4}} 2^{\frac{1}{4}} \cdot \sum _{k\in {\mathbb Z}} e^{-\pi |k|} e^{-\pi (t-\sqrt{2} k)^2}. \end{aligned}$$
The product of the constants in front of the sum is bounded above by 0.67. Clearly, the sum is even with respect to t. Therefore, we can replace t by |t| and further estimate
$$\begin{aligned} \sum _{k\in {\mathbb Z}} e^{-\pi |k|} e^{-\pi (t-\sqrt{2} k)^2}&= e^{-\frac{\pi |t|}{\sqrt{2}}} \cdot \sum _{k\in {\mathbb Z}} \exp \left\{ -\pi \left( |k|-\frac{|t|}{\sqrt{2}} + (|t|-\sqrt{2} k)^2 \right) \right\} \\&\le e^{-\frac{\pi |t|}{\sqrt{2}}} \cdot \sum _{k\in {\mathbb Z}} \exp \left\{ -\pi \left( k-\frac{|t|}{\sqrt{2}} + (|t|-\sqrt{2} k)^2 \right) \right\} \\&= e^{-\frac{\pi |t|}{\sqrt{2}}} \cdot \sum _{k\in {\mathbb Z}} e^{-\pi p\left( \frac{|t|}{\sqrt{2}}-k \right) }, \end{aligned}$$
where \(p(x) = 2x^2-x= 2 \left( x-\frac{1}{4} \right) ^2 -\frac{1}{8}\). The function \(\sum _{k\in {\mathbb Z}} e^{-\pi p(\cdot -k)}\) coincides up to a translation with
$$g(x)=e^{\frac{\pi }{8}}\sum _{k\in {\mathbb Z}} e^{-2\pi (k-x)^2}.$$
Using Poisson summation formula we get that
$$\begin{aligned} g(x) = \frac{e^{\frac{\pi }{8}}}{\sqrt{2}} \sum _{\ell \in {\mathbb Z}} e^{-2\pi i \ell x} e^{-\frac{\pi }{2} \ell ^2} \le \frac{e^{\frac{\pi }{8}}}{\sqrt{2}} \sum _{\ell \in {\mathbb Z}}e^{-\frac{\pi }{2} \ell ^2} = \frac{e^{\frac{\pi }{8}}}{\sqrt{2}} \vartheta _3(0, e^{-\frac{\pi }{2}}) = 1.48\dots \end{aligned}$$
Since \(1.49\cdot 0.67 <1\) we arrive at the desired bound.
It remains to prove the \(L^1\)-bound of \({\mathcal {G}}\psi \). First note that, as
$$|{\mathcal {G}}[\pi (\lambda )\varphi ]|=e^{-\frac{\pi }{2} |\cdot - \lambda |^2},$$
we get \(\Vert {\mathcal {G}}[\pi (\lambda )\varphi ]\Vert _{L^1({\mathbb R}^2)}=\Vert {\mathcal {G}}\varphi \Vert _{L^1({\mathbb R}^2)}=2\). It follows from [32, formulas (6.10) and (6.11)] that
$$\begin{aligned} \frac{1}{\vartheta _3(\sqrt{2}\pi t, e^{-\pi })} = \frac{1}{\kappa } \sum _{\ell =-\infty }^\infty \gamma _\ell e^{2 \sqrt{2} \pi i \ell t}. \end{aligned}$$
Thus, we have that
$$\begin{aligned} \psi (t) = \frac{1}{2\vartheta _3(\sqrt{2} \pi t, e^{-\pi })} \sum _{k\in {\mathbb Z}} c_k \varphi (t-\sqrt{2} k) = \frac{1}{2\kappa ^2} \sum _{\ell \in {\mathbb Z}} \sum _{k\in {\mathbb Z}} \gamma _\ell \gamma _k \,\pi \left( {\begin{array}{c}\sqrt{2} k\\ 2^{3/2}\ell \end{array}}\right) \varphi \end{aligned}$$
and applying \({\mathcal {G}}\) gives
$$\begin{aligned} {\mathcal {G}}\psi = \frac{1}{2\kappa ^2} \sum _{\ell \in {\mathbb Z}} \sum _{k\in {\mathbb Z}} \gamma _\ell \gamma _k \, {\mathcal {G}}\left[ \pi \left( {\begin{array}{c}\sqrt{2} k\\ 2^{3/2}\ell \end{array}}\right) \varphi \right] . \end{aligned}$$
Making use of the above estimates we get that
$$\begin{aligned} \Vert {\mathcal {G}}\psi \Vert _{L^1({\mathbb R}^2)} \le \frac{1}{2\cdot 0.45^2} \sum _{\ell \in {\mathbb Z}} \sum _{k\in {\mathbb Z}} 2 e^{-\pi (|k|+|\ell |+\frac{1}{2})} = \frac{e^{-\frac{\pi }{2}}}{0.45^2} \cdot \left( 1 + \frac{2 e^{-\pi }}{1-e^{-\pi }} \right) ^2 \end{aligned}$$
which is \(<1.23\), and we are done.
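The explicit constants above are straightforward to check numerically. The following short script is an illustrative sketch of our own (not part of the analysis; it assumes NumPy and truncates all series at \(|k|\le 20\)): it evaluates the frame bounds A and B, the coefficients (4.10), the dual window (4.9), and the two bounds of Lemma 4.6 on a grid.

```python
import numpy as np

# Illustrative sketch (not part of the paper's analysis): frame bounds, Janssen's dual
# window via (4.9)-(4.10), and the bounds of Lemma 4.6. All series are truncated at |k| <= 20.
ks = np.arange(-20, 21)

A = 2 * np.sum((-1.0) ** ks * np.exp(-np.pi * ks ** 2)) ** 2      # lower frame bound, ~1.66
B = 2 * np.sum(np.exp(-np.pi * ks ** 2)) ** 2                      # upper frame bound, ~2.36

def theta3(z, q):
    """Jacobi theta function theta_3(z, q); z may be a NumPy array."""
    z = np.atleast_1d(z)
    return np.sum(q ** (ks[:, None] ** 2) * np.exp(2j * ks[:, None] * z), axis=0).real

m = np.arange(0, 40)
n = np.arange(-40, 41)
kappa = np.sum((-1.0) ** n * (n + 0.5) * np.exp(-np.pi * (n + 0.5) ** 2))
c = np.array([np.sum((-1.0) ** (abs(k) + m) * np.exp(-np.pi * (m + 0.5) * (2 * abs(k) + m + 0.5)))
              for k in ks]) / kappa                                # coefficients (4.10)

phi = lambda x: 2 ** 0.25 * np.exp(-np.pi * x ** 2)

def psi(t):
    """Dual window psi from (4.9), evaluated on an array t."""
    t = np.atleast_1d(t)
    num = np.sum(c[None, :] * phi(t[:, None] - np.sqrt(2) * ks[None, :]), axis=1)
    return num / (2 * theta3(np.sqrt(2) * np.pi * t, np.exp(-np.pi)))

t = np.linspace(-8, 8, 4001)
p = psi(t)
print(A, B)                                                        # 1.66..., 2.36...
print(np.max(np.abs(p) * np.exp(np.pi * np.abs(t) / np.sqrt(2))))  # expected <= 1 by Lemma 4.6
print(np.sqrt(np.sum(p ** 2) * (t[1] - t[0])))                     # expected <= 0.6 by Lemma 4.6
```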
Finally, we will also need two simple lemmas. The first one deals with Bessel-bounds for the systems \(\{\pi (\lambda )\psi \}_{\lambda \in \Lambda }\) and \(\{\pi (\lambda )\varphi \}_{\lambda \in \Lambda }\).
Lemma 4.8
For all \(c=(c_\lambda )_{\lambda \in \Lambda } \in \ell ^2(\Lambda )\) it holds that
$$ \left\| \sum _{\lambda \in \Lambda } c_\lambda \pi (\lambda ) \varphi \right\| _{L^2({\mathbb R})} \le 4.25 \Vert c\Vert _{\ell ^2(\Lambda )} \ \text{ and }\ \left\| \sum _{\lambda \in \Lambda } c_\lambda \pi (\lambda )\psi \right\| _{L^2({\mathbb R})} \le 3.1 \Vert c\Vert _{\ell ^2(\Lambda )}. $$
Proof
As we will not use the first inequality, we will only give full detail for the second one. We define the quantity
$$ B':= 2\sqrt{2} \left( \sum _{k\in {\mathbb Z}} \Vert \psi \Vert _{L^\infty ([k\mathfrak {a},(k+1)\mathfrak {a}))} \right) ^2. $$
With the help of Lemma 4.6 we get that the expression inside the brackets is bounded by
$$\begin{aligned} \sum _{k=-\infty }^{-1} e^{-\frac{\pi }{\sqrt{2}} |(k+1)\mathfrak {a}|} + \sum _{k=0}^\infty e^{-\frac{\pi }{\sqrt{2}}|k\mathfrak {a}|} = 2 \sum _{k=0}^\infty e^{-\frac{\pi }{2} k} = \frac{2}{1-e^{-\frac{\pi }{2}}} = 2.52\dots , \end{aligned}$$
which implies that \(B'\le 18.04\). It follows from [10, Proposition 11.5.2] that \(B'\) is an upper frame bound for \((\pi (\lambda )\psi )_{\lambda \in \Lambda }\). Moreover, [10, Theorem 3.2.3] implies that
$$ \Vert \sum _{\lambda \in \Lambda } c_\lambda \pi (\lambda )\psi \Vert _{L^2({\mathbb R})} \le \sqrt{B'} \Vert c\Vert _{\ell ^2(\Lambda )}, $$
which finishes the proof of the second inequality.
To prove the first inequality, recall that the upper frame bound for the original frame \((\pi (\lambda )\varphi )_{\lambda \in \Lambda }\) is given by
$$ B= 2 \vartheta _3(0,e^{-\pi })^2 < 1.54^2, $$
so that we can proceed as above.
Finally, we also have:
Lemma 4.9
Let \(s>0\) and let \(g(x):= e^{-2\pi x^2}\). Then it holds that \((T_{s\ell }g)_{\ell \in {\mathbb Z}}\) forms a Bessel sequence with
$$ \sum _{\ell \in {\mathbb Z}} \big |\langle f,T_{s\ell }g\rangle \big |^2 \le \frac{\vartheta _3(0,e^{-\pi s^2})}{2} \Vert f\Vert _{L^2}^2, \quad f\in L^2({\mathbb R}). $$
Proof
By [10, Proposition 3.5.4], it suffices to show that for all \(k\in {\mathbb Z}\) it holds that
$$ \sum _{\ell \in {\mathbb Z}} |\langle T_{s\ell }g, T_{sk}g\rangle | \le \frac{\vartheta _3(0,e^{-\pi s^2})}{2}. $$
Since for all \(a,b\in {\mathbb R}\) we have that
$$ \langle T_a g, T_b g\rangle = \langle g, T_{b-a} g\rangle = \int _{\mathbb R}e^{-2\pi t^2} e^{-2\pi (t-b+a)^2}\,\text{ d }t = \frac{1}{2} e^{-\pi (b-a)^2}, $$
we indeed get that
$$\begin{aligned} \sum _{\ell \in {\mathbb Z}} |\langle T_{s\ell }g, T_{sk}g\rangle | =\frac{1}{2} \sum _{\ell \in {\mathbb Z}} e^{-\pi s^2(k-\ell )^2} = \frac{1}{2} \sum _{\ell \in {\mathbb Z}} e^{-\pi s^2 \ell ^2} =\frac{\vartheta _3(0,e^{-\pi s^2})}{2} \end{aligned}$$
as claimed.
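As a sanity check, the Bessel bound of Lemma 4.9 can be verified numerically for a concrete test function; the following sketch is our own (assuming NumPy, with all inner products computed by a Riemann sum on a grid) and is only meant as an illustration.

```python
import numpy as np

# Sketch (ours): numerically check the Bessel bound of Lemma 4.9 for one test function.
s = 0.7
t = np.linspace(-12, 12, 48001)
dt = t[1] - t[0]
g = lambda x: np.exp(-2 * np.pi * x ** 2)
f = np.exp(-t ** 2) * np.cos(3 * t)                    # arbitrary test function on the grid

lhs = sum(abs(np.sum(f * g(t - s * l)) * dt) ** 2 for l in range(-15, 16))
theta3 = sum(np.exp(-np.pi * s ** 2 * k ** 2) for k in range(-20, 21))
rhs = 0.5 * theta3 * np.sum(f ** 2) * dt               # theta_3(0, e^{-pi s^2})/2 * ||f||^2
print(lhs, rhs, lhs <= rhs)                            # the inequality should hold
```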

4.4 Oversampling Formula

We denote the space of functions of bandwidth (at most) \(b>0\) by \(PW_b({\mathbb R}^d)\), that is,
$$ PW_b({\mathbb R}^d) = \Big \{f\in L^2({\mathbb R}^d): {\textrm{supp}}(\hat{f})\subseteq [-\frac{b}{2},\frac{b}{2}]^d \Big \}. $$
The cardinal sine is defined by
$$ {{\,\textrm{sinc}\,}}(x) ={\left\{ \begin{array}{ll} \frac{\sin (\pi x)}{\pi x}, & x\ne 0\\ 1, & x=0. \end{array}\right. } $$
For several variables the \({{\,\textrm{sinc}\,}}\) function is defined by tensorization, i.e.,
$$ {{\,\textrm{sinc}\,}}(x) = {{\,\textrm{sinc}\,}}(x_1)\cdot \ldots \cdot {{\,\textrm{sinc}\,}}(x_d),\quad x=(x_1,\ldots ,x_d)\in {\mathbb R}^d. $$
The famous sampling theorem attributed to Shannon, Whittaker and Kotelnikov asserts that a bandlimited function can be reconstructed given samples on the lattice \(\frac{1}{b}{\mathbb Z}^d\) by virtue of the formula
$$\begin{aligned} f(x) = \sum _{k\in {\mathbb Z}^d} f\left( \frac{k}{b}\right) {{\,\textrm{sinc}\,}}\left( b\left( x-\frac{k}{b}\right) \right) ,\quad f\in PW_b({\mathbb R}^d). \end{aligned}$$
The series converges rather slowly due to the slow decay of the \({{\,\textrm{sinc}\,}}\) function, which can be a disadvantage especially for numerical purposes. By oversampling the function one can improve the convergence.
Lemma 4.10
Let \(b>0\) and \(0<h<1/b\). Moreover, let \(\eta \in L^1({\mathbb R}^d)\) be a non-negative function such that
$$ {\textrm{supp}}(\eta ) \subseteq [-1,1]^d \quad \text {and}\quad \int _{{\mathbb R}^d}\eta (\xi )\,\text{ d }\xi = 1. $$
Then it holds for all \(f\in PW_b({\mathbb R}^d)\) that
$$ f(x) = \left( \frac{1+hb}{2}\right) ^d \cdot \sum _{k\in {\mathbb Z}^d} f(hk) \widehat{\eta }\left( q(hk-x)\right) {{\,\textrm{sinc}\,}}\left( 2c(hk-x) \right) $$
where \(c=\frac{1}{4}(\frac{1}{h}+b)\) and \(q=\frac{1}{4}(\frac{1}{h}-b)\).
We prove this lemma both for the sake of completeness and because some of its ingredients are useful in the proof of the next lemma.
Proof
Since \({\textrm{supp}}(\hat{f}) \subseteq [-\frac{b}{2},\frac{b}{2}]^d\subseteq [-\frac{1}{2\,h},\frac{1}{2\,h}]^d\) we have that
$$ \hat{f}(\xi ) = h^d \sum _{k\in {\mathbb Z}^d} f(hk) e^{-2\pi ihk\xi }. $$
Suppose that \(\psi \in C({\mathbb R}^d)\) is such that \(\psi (\xi )=1\) for \(\xi \in [-\frac{b}{2},\frac{b}{2}]^d\) and \(\psi (\xi )=0\) for \(\xi \notin [-\frac{1}{2\,h},\frac{1}{2\,h}]^d\), then it further holds that
$$\begin{aligned} \hat{f}(\xi ) = \hat{f}(\xi )\cdot \psi (\xi ) = h^d \sum _{k\in {\mathbb Z}^d} f(hk) e^{-2\pi ihk\xi }\psi (\xi ), \quad \xi \in {\mathbb R}^d \end{aligned}$$
(4.11)
where we used that \(\hat{f}\) is supported in \([-\frac{b}{2},\frac{b}{2}]^d\).
Next, we specify \(\psi \). With \(\eta _q(x)=\frac{1}{q^d} \eta (x/q)\) we define
$$ \psi (\xi ):=(\eta _q *\chi _{[-c,c]^d})(\xi ) = \int _{[-c,c]^d} \eta _q(t-\xi )\,\text{ d }t, $$
and observe that
  • \(\psi \in C({\mathbb R}^d)\) and non-negative,
  • since \({\textrm{supp}}(\eta _q)\subseteq [-q,q]^d\) and since \(q+c\le \frac{1}{2h}\) we have that
    $$ {\textrm{supp}}(\psi ) \subseteq [-q,q]^d + [-c,c]^d = [-(q+c),q+c]^d \subseteq [-\frac{1}{2\,h},\frac{1}{2\,h}]^d; $$
    in particular, it holds that \(\psi (\xi )=0\) for \(\xi \notin [-\frac{1}{2\,h},\frac{1}{2\,h}]^d\).
  • since \(c = \frac{b}{2}+q\) it holds for all \(\xi \in [-\frac{b}{2},\frac{b}{2}]^d\) that
    $$[-c,c]^d+\xi \supseteq [-q,q]^d \supseteq {\textrm{supp}}(\eta _q), $$
    which implies that
    $$ \psi (\xi )= \int _{[-c,c]^d} \eta _q(t-\xi ) \,\text{ d }t =\int _{[-c,c]^d+\xi } \eta _q(t) \,\text{ d }t = \int _{{\mathbb R}^d}\eta _q(t)\,\text{ d }t = 1 $$
    for all \(\xi \in [-\frac{b}{2},\frac{b}{2}]^d\).
  • the Fourier transform of \(\psi \) is given by
    $$ \widehat{\psi }(x) = \widehat{\eta _q}(x) \widehat{\chi _{[-c,c]^d}}(x) = \widehat{\eta }(qx) \cdot (2c)^d{{\,\textrm{sinc}\,}}(2cx). $$
We have therefore verified the conditions from above, and applying the inverse Fourier transform to (4.11) gives that
$$\begin{aligned} f(x)= & h^d \sum _{k\in {\mathbb Z}^d} f(hk) \left( \int _{{\mathbb R}^d} \psi (\xi )e^{2\pi i(x-hk)\xi } \,\text{ d }\xi \right) = h^d \sum _{k\in {\mathbb Z}^d} f(hk) \widehat{\psi }(hk-x) \\= & (2c h)^d \cdot \sum _{k\in {\mathbb Z}^d} f(hk) \widehat{\eta }\left( q(hk-x)\right) {{\,\textrm{sinc}\,}}\left( 2c(hk-x) \right) . \end{aligned}$$
The statement follows now from \(2ch=\frac{1}{2}(1+hb)\).
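For orientation, the oversampling formula can be tested numerically in dimension \(d=1\). The sketch below is ours and illustrative only; it assumes NumPy, takes \(\eta =\chi _{[-1/2,1/2]}*\chi _{[-1/2,1/2]}\) so that \(\widehat{\eta }={{\,\textrm{sinc}\,}}^2\), and uses \(f={{\,\textrm{sinc}\,}}\in PW_1\) as a test function.

```python
import numpy as np

# Sketch (ours): numerical check of Lemma 4.10 for d = 1, with eta the triangle function
# chi_[-1/2,1/2] * chi_[-1/2,1/2] (so eta_hat = sinc^2) and the test function f = sinc in PW_1.
b, h = 1.0, 0.5                              # bandwidth b and oversampled step h < 1/b
c, q = 0.25 * (1 / h + b), 0.25 * (1 / h - b)
k = np.arange(-400, 401)                     # truncation of the sampling series
x = np.linspace(-3.0, 3.0, 7)                # evaluation points

f = lambda u: np.sinc(b * u)                 # np.sinc(u) = sin(pi u)/(pi u), as in the text
recon = 0.5 * (1 + h * b) * np.sum(
    f(h * k)[None, :]
    * np.sinc(q * (h * k[None, :] - x[:, None])) ** 2
    * np.sinc(2 * c * (h * k[None, :] - x[:, None])),
    axis=1,
)
print(np.max(np.abs(recon - f(x))))          # small; only the truncation error remains
```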
We will also need an approximate sampling result for almost bandlimited functions.
Lemma 4.11
Let \(s>0\), let \(F\in L^2({\mathbb R}^2)\) and suppose that there exist \(K,C>0\) such that \( |\widehat{F}(\xi )| \le K e^{-C|\xi |^2}\), \(\xi \in {\mathbb R}^2\). Then it holds that
$$ \Vert F\Vert _{L^\infty ({\mathbb R}^2)} \le 73 \cdot \Vert F\Vert _{\ell ^\infty (s{\mathbb Z}^2)} + 233 \cdot \frac{K}{C} e^{-\frac{C}{16\,s^2}}. $$
Proof
With \(b=\frac{1}{2s}\), we introduce the function
$$ F_b:= {\mathcal {F}}^{-1}(\widehat{F}\cdot \chi _{[-\frac{b}{2},\frac{b}{2}]^2}), $$
which is the projection of F onto \(PW_b({\mathbb R}^2)\). Furthermore, let \(h=\frac{1}{2b}=s\) and \( \eta =\chi _{[-\frac{1}{2},\frac{1}{2}]^2} *\chi _{[-\frac{1}{2},\frac{1}{2}]^2}. \) Notice that \(\eta \in C({\mathbb R}^2)\) is non-negative and \(\displaystyle \int _{{\mathbb R}^2} \eta (x)\text{ d }x=1\). We apply Lemma 4.10 to \(F_b\): By taking into account that
$$ c=\frac{1}{4}\left( \frac{1}{h}+b\right) = \frac{3}{4} b \quad \text {and}\quad q = \frac{1}{4}\left( \frac{1}{h}-b\right) = \frac{1}{4} b $$
it follows – since \(1+hb=\frac{3}{2}\) – that
$$ F_b(x) = \frac{9}{16} \sum _{k\in {\mathbb Z}^2} F_b \left( \frac{k}{2b} \right) \cdot \widehat{\eta }\left( \frac{k}{8}-\frac{bx}{4} \right) \cdot {{\,\textrm{sinc}\,}}\left( \frac{3}{4} k-\frac{3bx}{2} \right) . $$
Since \(\widehat{\eta }(\xi ) ={{\,\textrm{sinc}\,}}^2(\xi )\) and \(s=\frac{1}{2b}\) we can further rewrite the above identity as
$$\begin{aligned} F_b(x)&= \frac{9}{16} \sum _{k\in {\mathbb Z}^2} F_b \left( sk \right) \cdot {{\,\textrm{sinc}\,}}^2 \left( \frac{x-sk}{4s}\right) \cdot {{\,\textrm{sinc}\,}}\left( \frac{3(x-sk)}{4s}\right) \\&= \sum _{k\in {\mathbb Z}^2} F_b(sk) \rho (x-sk), \end{aligned}$$
where \(\rho (x):=\frac{9}{16} \cdot {{\,\textrm{sinc}\,}}^2\left( \frac{x}{4\,s}\right) \cdot {{\,\textrm{sinc}\,}}\left( \frac{3x}{4\,s}\right) .\) We apply the triangle inequality and get
$$ \Vert F\Vert _{L^\infty } \le \Vert F_b\Vert _{L^\infty } + \Vert F-F_b\Vert _{L^\infty }. $$
We make use of the assumption on the Fourier decay and get that the approximation error is bounded by
$$\begin{aligned} \Vert F-F_b\Vert _{L^\infty }\le & \Vert {\mathcal {F}}(F-F_b)\Vert _{L^1} = \int _{{\mathbb R}^2\setminus [-\frac{b}{2},\frac{b}{2}]^2} |\widehat{F}(\xi )|\,\text{ d }\xi \\\le & K \int _{{\mathbb R}^2\setminus [-\frac{b}{2},\frac{b}{2}]^2} e^{-C|\xi |^2} \,\text{ d }\xi \le K \int _{\frac{b}{2}}^\infty e^{-C r^2} 2\pi r\,\text{ d }r = \frac{\pi K}{C} e^{-\frac{C b^2}{4}}. \end{aligned}$$
Thus, we can bound
$$\begin{aligned} \Vert F_b\Vert _{L^\infty }\le & \Vert \sum _{k\in {\mathbb Z}^2} F(sk)\rho (\cdot -sk)\Vert _{L^\infty } + \Vert \sum _{k\in {\mathbb Z}^2} [F(sk)-F_b(sk)] \rho (\cdot -sk)\Vert _{L^\infty }\\\le & \left( \Vert F\Vert _{\ell ^\infty (s{\mathbb Z}^2)} + \frac{\pi K}{C} e^{-\frac{C b^2}{4}} \right) \cdot \Vert \sum _{k\in {\mathbb Z}^2} T_{sk}|\rho |\Vert _{L^\infty }. \end{aligned}$$
It remains to bound the last term involving \(\rho \). To do this note that the univariate \({{\,\textrm{sinc}\,}}\) satisfies the pointwise bound \(|{{\,\textrm{sinc}\,}}(x)| \le \frac{1.2}{1+|x|}\). As a consequence,
$$\begin{aligned} |\rho (x)| \le \frac{9}{16} \cdot \frac{1.2^3}{\left( 1+\frac{|x_1|}{4s}\right) ^3} \cdot \frac{1.2^3}{\left( 1+\frac{|x_2|}{4s}\right) ^3} \le \frac{(4.4\cdot s)^6}{(4s+|x_1|)^3 (4s+|x_2|)^3}. \end{aligned}$$
This implies that
$$\begin{aligned} \sup _{x\in {\mathbb R}^2} \left( \sum _{k\in {\mathbb Z}^2} |\rho (x-sk)|\right) \le (4.4\cdot s)^6 \cdot \sup _{x\in {\mathbb R}} \left( \sum _{k\in {\mathbb Z}} \frac{1}{(4s+|x-sk|)^3}\right) ^2. \end{aligned}$$
By periodicity the function inside the brackets attains its maximum in the interval \(x\in [0,s]\). For such x we have that
$$\begin{aligned} \sum _{k\in {\mathbb Z}} \frac{1}{(4s+|x-sk|)^3}\le & \frac{1}{4^3s^3} +2 \sum _{k=1}^\infty \frac{1}{(3+k)^3 s^3}\\= & \frac{1}{s^3} \left( 4^{-3} + 2\sum _{k=4}^\infty k^{-3} \right) < 0.1 \cdot s^{-3}. \end{aligned}$$
With this we obtain that
$$ \left\| \sum _{k\in {\mathbb Z}^2} T_{sk}|\rho |\right\| _{L^\infty } \le 4.4^6 \cdot 0.01 <73. $$
Finally, collecting the estimates leads to
$$ \Vert F\Vert _{L^\infty } \le 73 \Vert F\Vert _{\ell ^\infty (s{\mathbb Z}^2)} + 74\pi \frac{K}{C}e^{-\frac{Cb^2}{4}}, $$
which implies the claim.

4.5 Controlling Holomorphic Functions in Terms of their Boundary Values

We begin by recalling a classical result from complex analysis.
Theorem 4.12
(Hadamard’s Three Line Theorem) Suppose that f is bounded on the strip \(\{x+iy:\, 0\le y\le b\}\) and holomorphic in the interior. Let \(m_f(t):= \sup _{x\in {\mathbb R}} |f(x+it)|\). Then it holds for all \(t\in [0,b]\) that
$$ m_f(t) \le m_f(0)^{1-\frac{t}{b}} \cdot m_f(b)^{\frac{t}{b}}. $$
Recall that for F holomorphic on \({\mathbb C}^d\) (or only on a tube \(\{z\in {\mathbb C}^d:\, |{\textrm{Im} \,}(z)|\le b\}\)) we have defined
$$ M_F(t)=\sup \big \{ |F(z)|: \, z\in {\mathbb C}^d, \, |{\textrm{Im} \,}(z)|=t\big \}. $$
When \(d=1\), i.e. F is a function of one complex variable, \(M_F(t)=\max \bigl (m_F(t),m_F(-t)\bigr )\) and Hadamard’s Three Line Theorem is easily seen to be also valid with \(M_F\) replacing \(m_F\). We will require a version of this fact for several complex variables for which we did not find a convenient reference:
Lemma 4.13
Suppose that F(z) is bounded on \(\{x+iy\in {\mathbb C}^d: |y|\le b\}\) and holomorphic in the interior. It holds for all \(t\in [0,b]\) that
$$ M_F(t) \le M_F(0)^{1-\frac{t}{b}} \, M_F(b)^{\frac{t}{b}}. $$
Remark 4.14
We are going to use this inequality in the following way: the function F we will consider belongs to some class of holomorphic functions for which an upper bound of \(M_F(t)\) is known. Then if \(M_F(0)\) is small, \(M_F(0)\le \varepsilon \), we obtain that \(M_F(t)\le M_F(b)^{\frac{t}{b}}\varepsilon ^{1-\frac{t}{b}}\) which shows propagation of smallness away from \({\mathbb R}^d\).
Proof
The statement is trivial if \(t\in \{0,b\}\) is one of the endpoints. Let us fix \(t\in (0,b)\) and pick an arbitrary \(\zeta \in {\mathbb C}^d\) with the property that \(|{\textrm{Im} \,}(\zeta )|=t\). It suffices to show that
$$ |F(\zeta )|\le M_F(0)^{1-\frac{t}{b}} \cdot M_F(b)^{\frac{t}{b}}. $$
Consider the function f defined by
$$ f(z)=F\left( \frac{{\textrm{Im} \,}(\zeta )}{t} z + {\textrm{Re} \,}(\zeta )\right) ,\quad z\in {\mathbb C}$$
which is bounded on the strip \(\{z:\, 0 \le {\textrm{Im} \,}(z) \le b\}\), holomorphic in the interior and satisfies \(f(it)=F(\zeta )\). Moreover, if \(z=x+i0\) is real we have that \(|f(z)|\le M_F(0)\) and if \(z=x+ib\) lies on the top boundary of the strip then
$$ \left| {\textrm{Im} \,}\left( \frac{{\textrm{Im} \,}(\zeta )}{t} z + {\textrm{Re} \,}(\zeta )\right) \right| = b, $$
which implies that \(|f(x+ib)|\le M_F(b)\). Thus, applying Theorem 4.12 on f gives that
$$ |F(\zeta )|=|f(it)|\le M_F(0)^{1-\frac{t}{b}} \cdot M_F(b)^{\frac{t}{b}}, $$
as announced.

4.6 Vertex Weighted Graphs

As opposed to the standard notion of a weighted graph where the edges are furnished with weights, the case where vertices are weighted seems to be scarcely considered. One exception in the literature is the paper by Chung and Langlands [12]. We follow the introductory part of their paper to get familiar with the concept of vertex-weighted graphs.
We begin with an undirected graph \(G=(V,E)\) which we assume to be finite and without self-loops. If the vertices u, v are neighbors (that is, if \((u,v)\in E\)) we will simply write \(u\sim v\). Let us further assume that to each vertex \(v\in V\) there is an associated weight denoted by \(\alpha _v\ge 0\). The Laplacian of the vertex-weighted graph \(G=(V,E,\alpha )\) will be denoted by \(\mathcal {L}^G\) and is defined by the matrix
$$\begin{aligned} \mathcal {L}^G(u,v) = {\left\{ \begin{array}{ll} \sum _{z\sim u} \alpha _z,\quad & u=v\\ -\sqrt{\alpha _u \alpha _v}, \quad & u\sim v\\ 0, \quad & \text {otherwise}. \end{array}\right. } \end{aligned}$$
To G one then associates an unweighted graph \(G'=(V,E')\) with \(E'\) defined by
$$ (u,v)\in E' \quad \text{ if } \text{ and } \text{ only } \text{ if } \quad (u,v)\in E \quad \text {and}\quad \alpha _u \alpha _v >0. $$
We will say that the vertex-weighted graph \(G=(V,E,\alpha )\) is connected if the unweighted graph \(G'=(V,E')\) is connected in the usual sense.
Lemma 4.15
Let \(G=(V,E,\alpha )\) be a vertex-weighted graph, let \(\mathcal {L}^G\) be its Laplacian, and let \(N=|V|\). Then it holds that \(\mathcal {L}^G\) is positive semi-definite with eigenvalues \(0=\lambda _1\le \lambda _2\le \ldots \le \lambda _N\). Furthermore, it holds that
$$\begin{aligned} \lambda _2 = \min _{\begin{array}{c} g\in {\mathbb C}^N\\ \sum _j g_j\alpha _j=0 \end{array}} \frac{\frac{1}{2} \sum \limits _{i=1}^N \sum \limits _{k\sim i} \alpha _i\alpha _k |g_i-g_k|^2}{\sum \limits _{i=1}^N \alpha _i |g_i|^2 }. \end{aligned}$$
In particular, G is connected if and only if \(\lambda _2>0\).
Proof
The proof is mainly a reproduction of arguments from [12]. We assume w.l.o.g. that \(V=\{1,\ldots ,N\}\). A function on V is thus a vector \(f\in {\mathbb C}^N\) that we endow with its usual \(\ell ^2\)-norm \(\Vert (f_1,\ldots ,f_N)\Vert ^2=\displaystyle \sum _{j=1}^N|f_j|^2\) and the associated scalar product. Further, define a matrix B on \(V\times E\) by
$$ B(j,e)= {\left\{ \begin{array}{ll} \sqrt{\alpha _i}, & e=(i,j), \,i\le j\\ -\sqrt{\alpha _i}, & e=(i,j), \, i> j\\ 0 & \text {otherwise.} \end{array}\right. } $$
We claim that \(BB^*= \mathcal {L}^G\). For entries in the diagonal we have that
$$ (BB^*)_{j,j} = \sum _{i\sim j, i\le j} \sqrt{\alpha _i}^2 + \sum _{i\sim j, i>j} (-\sqrt{\alpha _i})^2 =\mathcal {L}^G_{j,j}. $$
For off-diagonal entries \( (BB^*)_{j,k} = \sum _{e\in E} B(j,e)B(k,e) \) note that by definition
$$ e\in E: \, B(j,e)B(k,e)\ne 0 \quad \Rightarrow \quad e=(k,j). $$
As a consequence, when \(j\not =k\)
$$ (BB^*)_{j,k} = {\left\{ \begin{array}{ll} -\sqrt{\alpha _j \alpha _k}, & j\sim k\\ 0, & \text {otherwise.} \end{array}\right. } $$
and it follows that indeed \(BB^*=\mathcal {L}^G\), which implies that \(\mathcal {L}^G\succeq 0\). In particular, \(\mathcal {L}^G\) has an orthonormal basis of eigenvectors and the corresponding eigenvalues are all non-negative, \(0\le \lambda _1\le \lambda _2\le \ldots \le \lambda _N\).
Next consider the vector \(\psi =(\sqrt{\alpha _1},\ldots ,\sqrt{\alpha _N})^T\) and notice that, for \(k\in \{1,\ldots ,N\}\) we have that
$$\begin{aligned} (\mathcal {L}^G \psi )_k= \sqrt{\alpha _k} \left( \sum _{i\sim k} \alpha _i \right) - \sum _{i\sim k} \sqrt{\alpha _i \alpha _k} \cdot \sqrt{\alpha _i} = 0. \end{aligned}$$
So \(\psi \) is an eigenvector for the eigenvalue \(\lambda _1=0\).
To prove the last part, let us assume that all the weights \(\alpha _i\) are strictly positive for now. Let us denote \(W={{\,\textrm{diag}\,}}(\alpha _i)\) and let L be defined by virtue of
$$ L_{i,j}= {\left\{ \begin{array}{ll} \sum _{k\sim i} \alpha _k, & j=i\\ -\alpha _j, & i\sim j\\ 0 & \text {otherwise.} \end{array}\right. } $$
For \(f\in {\mathbb C}^N\) and \(i\in \{1,\ldots ,N\}\) it holds that
$$ (Lf)_i = (\sum _{k\sim i} \alpha _k)f_i - \sum _{j\sim i} \alpha _j f_j = \sum _{k\sim i} \alpha _k(f_i-f_k). $$
Furthermore, it is easy to see that \(L=W^{-1/2}\mathcal {L}^G W^{1/2}\).
By expressing the spectral gap in terms of the Rayleigh quotient, and by substituting \(g=W^{-1/2}f\), we get that
$$\begin{aligned} \lambda _2= \min \limits _{\begin{array}{c} f\in {\mathbb C}^N\\ \langle f,\psi \rangle =0 \end{array}} \frac{\langle f,\mathcal {L}^G f\rangle }{\langle f,f\rangle } = \min \limits _{\begin{array}{c} g\in {\mathbb C}^N\\ \sum _j g_j \alpha _j = 0 \end{array}} \frac{\langle Wg,Lg\rangle }{\langle Wg,g\rangle } \end{aligned}$$
We further rewrite the numerator
$$\begin{aligned} \langle Wg, Lg\rangle= & \sum _{i=1}^N \alpha _i g_i \cdot \left( \sum _{k\sim i} \alpha _k (\bar{g_i}-\bar{g_k}) \right) \\= & \sum _{i=1}^N \sum _{k\sim i}\alpha _i \alpha _k |g_i-g_k|^2 + \sum _{i=1}^N \sum _{k\sim i} \alpha _i\alpha _k g_k(\bar{g_i}-\bar{g_k})\\= & \frac{1}{2} \sum _{i=1}^N \sum _{k\sim i}\alpha _i \alpha _k |g_i - g_k|^2\\ & \quad + \frac{1}{2} \sum _{i=1}^N \sum _{k\sim i}\alpha _i \alpha _k \left( |g_i|^2 -2{\textrm{Re} \,}(\bar{g_i}g_k) + |g_k|^2 + 2g_k\bar{g_i} -2|g_k|^2 \right) \end{aligned}$$
Let us denote the double sum in the last line by S. Since we know that the whole expression is non-negative, and thus in particular real-valued we get that
$$ S=\frac{S+\overline{S}}{2}=\frac{1}{2} \sum _{i=1}^N \sum _{k\sim i} \alpha _i\alpha _k (|g_i|^2-|g_k|^2). $$
Since each edge \(e=(i,k)\) appears precisely twice, and with opposite orientation in the sum, it follows that \(S=0\). With this we get that indeed
$$\begin{aligned} \lambda _2= & \min _{\begin{array}{c} g\in {\mathbb C}^N\\ \sum _j g_j\alpha _j=0 \end{array}} \frac{\frac{1}{2} \sum \limits _{j=1}^N \sum \limits _{k\sim j} \alpha _j\alpha _k |g_j-g_k|^2}{\sum \limits _{j=1}^N \alpha _j |g_j|^2 }\\= & \min \left\{ \frac{1}{2} \sum \limits _{j=1}^N \sum \limits _{k\sim j} \alpha _j\alpha _k |g_j-g_k|^2:\right. \\ & \qquad \qquad \left. g\in {\mathbb C}^N \text{ with } \sum \limits _{j=1}^N \alpha _j |g_j|^2=1 \text{ and } \sum \limits _{j=1}^N g_j\alpha _j=0\right\} \end{aligned}$$
The general case, where some of the weights are allowed to vanish then follows from a continuity argument.
Let G be connected so that \(\alpha _j\alpha _k>0\) when \(j\sim k\) and assume towards a contradiction that \(\lambda _2=0\). Then it follows that a minimizer g of the above problem satisfies
$$ \sum \limits _{k\sim j}\alpha _k |g_j-g_k|^2=0,\quad j\in \{1,\ldots ,N\} $$
and must thus be a multiple of \((1,\ldots ,1)^T\). This contradicts the constraint \(\sum _j g_j\alpha _j=0\). Hence connectedness of G implies \(\lambda _2>0\).
On the other hand, if G is disconnected, then there exists a non-trivial component \(\emptyset \subsetneq A \subsetneq V\) and one can construct an admissible g by setting \(g_i=1\) for \(i\in A\) and \(g_j=-b\) for \(j\in V\setminus A\) where b satisfies
$$ \sum _{i\in A} \alpha _i = b \sum _{i\in V{\setminus } A} \alpha _i, $$
which achieves that the ratio vanishes and therefore, that \(\lambda _2=0\).
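For concreteness, the Laplacian of a vertex-weighted graph and its spectral gap are easy to compute numerically. The following sketch is ours (assuming NumPy); it builds \(\mathcal {L}^G\) directly from the definition above and illustrates the last statement of Lemma 4.15 on a small example.

```python
import numpy as np

# Sketch (ours): the Laplacian of a vertex-weighted graph, built directly from the
# definition in Sect. 4.6, and the connectivity criterion of Lemma 4.15.
def vertex_weighted_laplacian(alpha, edges):
    """alpha: nonnegative vertex weights; edges: list of pairs (u, v) with u != v."""
    N = len(alpha)
    L = np.zeros((N, N))
    for u, v in edges:
        L[u, u] += alpha[v]                  # diagonal: sum of neighbouring weights
        L[v, v] += alpha[u]
        L[u, v] -= np.sqrt(alpha[u] * alpha[v])
        L[v, u] -= np.sqrt(alpha[u] * alpha[v])
    return L

alpha = np.array([1.0, 0.5, 0.2, 0.8])

lam = np.linalg.eigvalsh(vertex_weighted_laplacian(alpha, [(0, 1), (1, 2), (2, 3)]))
print(lam[0], lam[1])                        # ~0 and a strictly positive spectral gap (path graph)

lam = np.linalg.eigvalsh(vertex_weighted_laplacian(alpha, [(0, 1), (2, 3)]))
print(lam[1])                                # ~0: the graph is disconnected
```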

4.7 Rank One Matrix Completion

Matrix completion is concerned with determining a matrix X given only some of its entries. Clearly, this requires some prior knowledge on the structure of the matrix to be recovered. We consider here the case where \(X\succeq 0\) is positive semi-definite and has rank (at most) one. That is, X is of the form
$$ X = \textbf{x}\otimes \bar{\textbf{x}} = \textbf{x}\textbf{x}^H, $$
with \(\textbf{x}\in {\mathbb C}^d\) the generating vector. Note that \(X(i,i)=|x_i|^2\) so we assume that we know the diagonal of X. Let \(E\subsetneq \{1,\ldots ,d\}^2\setminus D\) (\(D=\{(1,1),\ldots ,(d,d)\}\)) denote the index set of the off-diagonal information on the matrix that is available. The problem we are interested in is the following:
$$\begin{aligned} \hbox {Given }\{X_{k,\ell }, (k,\ell )\in E\cup D\},\hbox { find }X\hbox { (or } \textbf{x}\hbox { up to a phase factor).} \end{aligned}$$
The purpose of this section is to present (a slight modification of) a result due to Demanet and Jugnon [14], which relates the matrix recovery problem to a vertex weighted graph associated with the restriction of X to \(D\cup E\), and reveals that any positive semidefinite matrix \(Y\succeq 0\) with
$$ Y_{k,\ell } \approx X_{k,\ell }, \quad (k,\ell )\in E $$
is already a good approximation for X provided that the respective graph has good connectivity as quantified by its spectral gap.
Theorem 4.16
Let \(V=\{1,\ldots ,d\}\), \(D=\{(1,1),\ldots ,(d,d)\}\) and \(E\subset (V\times V){\setminus } D\) so that (VE) is an undirected graph with no self loops.
For \(x\in {\mathbb C}^d\setminus \{0\}\), let \(\alpha = (|x_i|^2)_{i=1}^d\), let \((V,E,\alpha )\) be the resulting vertex weighted graph and let \(\lambda _2\) denote the spectral gap of \((V,E,\alpha )\), which we assume to be \(>0\).
Let \(B=\sum _{(k,\ell )\in V^2} |\mathcal {L}^G_{k,\ell }|\), and let \(\varepsilon >0\) be such that
$$\begin{aligned} \varepsilon \le \min \left\{ \frac{|x|^2 }{d^2 }, \frac{|x|^2\lambda _2}{12 B}\right\} . \end{aligned}$$
(4.12)
Further, let \(Y\in {\mathbb C}^{d\times d}\), \(Y\succeq 0\) that satisfies
$$\begin{aligned} |Y_{k,\ell } - x_k\overline{x_\ell }| \le \varepsilon , \quad (k,\ell )\in D\cup E. \end{aligned}$$
(4.13)
Then it holds that
$$ \Vert Y-x\otimes \overline{x}\Vert _F \le \sqrt{\frac{3|x|^2 B}{\lambda _2}} \cdot \sqrt{\varepsilon }. $$
Furthermore, Y has a unique top eigenpair \((\eta ,v)\) and
$$ \min _{\theta \in {\mathbb R}} |\sqrt{{{\,\textrm{tr}\,}}(Y)}v - e^{i\theta }x| \le \left( 1+2\sqrt{6} \sqrt{\frac{B}{\lambda _2}}\right) \cdot \sqrt{\varepsilon }. $$
Note that the hypotheses of this theorem do not require knowledge of x itself, only of \(x_k\overline{x_\ell }\) for \((k,\ell )\in D\cup E\). The theorem states that if we find a matrix \(Y\in {\mathbb C}^{d\times d}\), \(Y\succeq 0\), whose \((k,\ell )\) entries with \((k,\ell )\in D\cup E\) approximate the data well, then the top eigenvector provides a good approximation of x, up to a rescaling factor.
The result as well as its proof is a variation of Theorem 4 in [14]. The modification consists in how we measure certain error terms: while we use the \(\ell ^\infty \)-norm in assumption (4.13), the original statement employs the \(\ell ^1\)-norm at this point.
Proof
Throughout we denote \(X=xx^*\). We introduce a matrix \(L\in {\mathbb C}^{d\times d}\) by
$$ L_{k,\ell } = {\left\{ \begin{array}{ll} \sum _{j\sim k} |x_j|^2, \quad & (k,\ell )\in D\\ - x_k\overline{x_\ell }, \quad & (k,\ell )\in E\\ 0 \quad & \text {otherwise.} \end{array}\right. } $$
Let \(\phi _k\in {\mathbb R}\) be such that \(x_k = |x_k|e^{i\phi _k}\), \(k\in V\) and let \(\Delta = {{\,\textrm{diag}\,}}(e^{i\phi _1},\ldots ,e^{i\phi _d}).\) Note that \(\Delta \mathcal {L}^G \Delta ^* = L\), so that \(L\succeq 0\) and L has the same eigenvalues as the Laplacian \(\mathcal {L}^G\), which we denote by \(0=\lambda _1<\lambda _2\le \ldots \le \lambda _d\). Let \(v_1,\ldots ,v_d\) be the corresponding orthonormal basis of eigenvectors of L, so that
$$ L = \sum _{k=1}^d \lambda _k v_k \otimes \overline{v_k}. $$
We define for \(k=1,\ldots ,d\) the numbers \( c_k:= \langle Y, v_k\otimes \overline{v_k}\rangle _F \ge 0 \) and notice that \(\langle Y,L\rangle _F = \sum _{k=1}^d \lambda _k c_k\), and, as \((v_k)_{k=1}^d\) is an o.n.b., that
$$ \sum _{k=1}^d c_k = \langle Y, \sum _{k=1}^d v_k\otimes \overline{v_k}\rangle _F = \langle Y,I \rangle _F = {{\,\textrm{tr}\,}}(Y). $$
For all k we have that
$$ (L x)_k = \left( \sum _{j\sim k} |x_j|^2\right) x_k - \sum _{\ell \sim k} x_k \overline{x_\ell }x_\ell = 0, $$
i.e. \(Lx =0\). Since by assumption \(\lambda _2>0\) we conclude that \(x=e^{i\theta }|x| v_1\) for suitable \(\theta \in {\mathbb R}\).
The idea of the proof is now to show that \((c_k)_{k=1}^d\) is “front-heavy” in the sense that \(c_1\approx {{\,\textrm{tr}\,}}(Y)\) and \(c_2,\ldots ,c_d\approx 0\), and deduce from this that \(Y\approx X\).
On the one hand, since \(Lx=0\) gives \(\langle X,L\rangle _F = x^*Lx = 0\), we have with (4.13) that
$$\begin{aligned} \begin{aligned} \sum _{k=1}^d \lambda _k c_k&= \langle Y,L\rangle _F =\langle Y-X, L\rangle _F \le \sum _{(k,\ell )\in V^2} |Y_{k,\ell }-x_k \overline{x_\ell }| \cdot |L_{k,\ell }| \\&= \sum _{(k,\ell )\in D\cup E} |Y_{k,\ell }-x_k \overline{x_\ell }| \cdot |L_{k,\ell }| \le \varepsilon \sum _{(k,\ell )\in V^2} |L_{k,\ell }|\\&= \varepsilon \sum _{(k,\ell )\in V^2} |\mathcal {L}^G_{k,\ell }| = \varepsilon B. \end{aligned} \end{aligned}$$
Since \(\lambda _1=0\) and \(0<\lambda _2\le \lambda _3\le \ldots \) this implies that
$$ \sum _{k=2}^d c_k \le \frac{\sum _{k=2}^d \lambda _k c_k}{\lambda _2} \le \frac{B\varepsilon }{\lambda _2}. $$
Therefore, we have that
$$\begin{aligned} c_1 \ge {{\,\textrm{tr}\,}}(Y)- \frac{B\varepsilon }{\lambda _2}. \end{aligned}$$
Recall that, since \(Y\succeq 0\), \(\langle Y,Y\rangle _F = {{\,\textrm{tr}\,}}(Y^2)\le {{\,\textrm{tr}\,}}(Y)^2\). With this,
$$\begin{aligned} & \Vert Y - X \Vert _F^2 = \Vert Y - |x|^2 v_1\otimes \overline{v_1}\Vert _F^2 = \langle Y,Y\rangle _F - 2|x|^2 c_1 + |x|^4 \\ & \quad \le {{\,\textrm{tr}\,}}(Y)^2 - 2 {{\,\textrm{tr}\,}}(X)c_1 + {{\,\textrm{tr}\,}}(X)^2 \le ({{\,\textrm{tr}\,}}(Y)- {{\,\textrm{tr}\,}}(X))^2 + \frac{2|x|^2 B}{\lambda _2}\varepsilon . \end{aligned}$$
It follows from (4.13) that \({{\,\textrm{tr}\,}}(Y-X)^2\le d^2\varepsilon ^2\). Hence, we further have that
$$ \Vert Y-X\Vert _F^2 \le \left( d^2\varepsilon + \frac{2 |x|^2 B}{\lambda _2} \right) \varepsilon \le \frac{3 |x|^2 B}{\lambda _2}\varepsilon , $$
where the final inequality follows from (4.12).
To prove the second statement we will make use of [14, Lemma 2], which implies that the top eigenpair is unique and that
$$\begin{aligned} \min _{\theta \in {\mathbb R}} \big | |x|v- e^{i\theta } x\big | \le \frac{2\sqrt{2}}{|x|} \Vert Y-xx^*\Vert _2 \end{aligned}$$
(4.14)
provided that \(\Vert Y-xx^*\Vert _2 < \frac{|x|^2}{2}\). By part one and assumption (4.12), we have that
$$ \Vert Y-xx^*\Vert _2^2 \le \Vert Y-xx^*\Vert _F^2 \le \frac{3 |x|^2 B}{\lambda _2} \varepsilon \le \frac{|x|^4}{4}, $$
and therefore, that (4.14) holds. In particular, there exists \(\theta \in {\mathbb R}\) such that
$$ \big | |x|v- e^{i\theta } x\big | \le 2 \sqrt{\frac{6B\varepsilon }{\lambda _2}}. $$
With this we get that
$$\begin{aligned} \left| \sqrt{{{\,\textrm{tr}\,}}(Y)}v - e^{i\theta }x\right|&\le \left| \sqrt{{{\,\textrm{tr}\,}}(Y)} v- |x| v\right| + \left| |x|v-e^{i\theta }x\right| \\&\le \left| \sqrt{{{\,\textrm{tr}\,}}(Y)} - |x|\right| + 2 \sqrt{\frac{6B\varepsilon }{\lambda _2}} \end{aligned}$$
Since the first term on the right hand side can be bounded by
$$\begin{aligned} \left| \sqrt{{{\,\textrm{tr}\,}}(Y)} - |x|\right| = \frac{\left| {{\,\textrm{tr}\,}}(Y)-|x|^2\right| }{\sqrt{{{\,\textrm{tr}\,}}(Y)} + |x|} \le \frac{\left| {{\,\textrm{tr}\,}}(Y-X)\right| }{|x|} \le \frac{d\varepsilon }{|x|}, \end{aligned}$$
we further get – by making use of (4.12) – that
$$ \left| \sqrt{{{\,\textrm{tr}\,}}(Y)}v - e^{i\theta }x\right| \le \frac{d\varepsilon }{|x|} + 2 \sqrt{\frac{6B\varepsilon }{\lambda _2}} \le \left( 1+ 2\sqrt{6} \sqrt{\frac{B}{\lambda _2}} \right) \sqrt{\varepsilon }, $$
and we are done.
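The recovery step in the above proof, namely taking the rescaled top eigenvector of Y, is simple to carry out in practice. The following sketch is ours and uses synthetic data (assuming NumPy); the perturbation is applied to all entries of \(xx^*\), which is a special case of (4.13).

```python
import numpy as np

# Sketch (ours, synthetic data): the recovery step of Theorem 4.16. A PSD perturbation of
# X = x x^* plays the role of Y; the rescaled top eigenvector recovers x up to a global phase.
rng = np.random.default_rng(0)
d = 8
x = rng.standard_normal(d) + 1j * rng.standard_normal(d)

E = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
Y = np.outer(x, x.conj()) + 1e-3 * (E @ E.conj().T) / d     # Y is PSD by construction

eigval, eigvec = np.linalg.eigh(Y)
v = eigvec[:, -1]                                            # top eigenvector, unit norm
x_hat = np.sqrt(np.trace(Y).real) * v

theta = np.angle(np.vdot(x_hat, x))                          # align the global phase
print(np.linalg.norm(np.exp(1j * theta) * x_hat - x))        # small, of the order of the noise
```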
Finally, we translate the above result into the context of our de-lifting task.
Corollary 4.17
Let \(f\in L^2({\mathbb R})\), let \(\Lambda \subseteq \mathfrak {a}{\mathbb Z}^2\) be finite and let \(r>0\). Let \(\mathcal {L}\) denote the Laplacian of the signal associated graph, let \(B=\sum _{(u,v)\in \Lambda ^2} |\mathcal {L}_{u,v}|\), and let \(D=\{(v,v),\,v\in \Lambda \}\). Further, suppose that the following two assumptions hold:
i)
\(\varepsilon >0\) is such that
$$ \varepsilon \le \min \left\{ \frac{1}{|\Lambda |^2}, \frac{\lambda _2(f,\Lambda ,r)}{12B} \right\} \cdot \Vert {\mathcal {G}}f\Vert _{\ell ^2(\Lambda )}^2 $$
 
ii)
\(Y\in {\mathbb C}^{\Lambda \times \Lambda }\), \(Y\succeq 0\) satisfies
$$ |Y_{u,v} - {\mathcal {G}}f(u)\overline{{\mathcal {G}}f(v)}| \le \varepsilon , \quad (u,v)\in D\cup E. $$
 
Then, Y has a unique top eigenvector \(v\in {\mathbb C}^\Lambda \) and it holds that
$$ \min _{\theta \in {\mathbb R}} \sum _{\lambda \in \Lambda } \big | \sqrt{{{\,\textrm{tr}\,}}(Y)} v_\lambda - e^{i\theta }{\mathcal {G}}f(\lambda )\big |^2 \le \left( 1+2\sqrt{6} \sqrt{\frac{B}{\lambda _2(f,\Lambda ,r)}}\right) ^2 \varepsilon . $$

5 An Error Estimate for the Reconstruction from Incomplete and Noisy Gabor Coefficients

5.1 The Error Estimate

Recall that given \(T,S>0\) we denote \(\Lambda =([-T,T]\times [-S,S])\cap \mathfrak {a}{\mathbb Z}^2\), and that \(\mathcal {R}_\Lambda :{\mathbb C}^\Lambda \rightarrow L^2({\mathbb R})\) is defined by
$$ \mathcal {R}_\Lambda (c) = \sum _{\lambda \in \Lambda } c_\lambda \pi (\lambda )\psi , $$
where \(\psi \) is the canonical dual of \((\pi (\lambda )\varphi )_{\lambda \in \mathfrak {a}{\mathbb Z}^2}\).
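For orientation, the following sketch (ours, assuming NumPy) shows how \(\mathcal {R}_\Lambda \) acts as a synthesis map on a time grid. For brevity a Gaussian window is used as a placeholder for the canonical dual \(\psi \) (which could be evaluated via (4.9)-(4.10)), and the time-frequency shift \(\pi (\lambda )\) is realized as modulation after translation, which is one possible convention.

```python
import numpy as np

# Sketch (ours): the synthesis map R_Lambda on a time grid. A Gaussian window stands in for
# the canonical dual psi (which could be evaluated via (4.9)-(4.10)); pi(lambda) is realized
# as modulation after translation, one possible convention.
a = 1 / np.sqrt(2)
lam = [(a * k, a * l) for k in range(-4, 5) for l in range(-4, 5)]   # T = S = 4a
t = np.linspace(-6.0, 6.0, 2001)
window = lambda s: 2 ** 0.25 * np.exp(-np.pi * s ** 2)               # placeholder for psi

def R_Lambda(c):
    """R_Lambda(c) = sum over lambda = (x, y) of c_lambda * M_y T_x window, sampled on t."""
    out = np.zeros_like(t, dtype=complex)
    for (x, y), coef in zip(lam, c):
        out += coef * np.exp(2j * np.pi * y * t) * window(t - x)
    return out

c = np.random.default_rng(1).standard_normal(len(lam))
f_rec = R_Lambda(c)
l2 = np.sqrt(np.sum(np.abs(f_rec) ** 2) * (t[1] - t[0]))
print(l2 / np.linalg.norm(c))   # stays bounded, in the spirit of the Bessel bounds of Lemma 4.8
```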
In Sect. 5.2 we establish the following estimate.
Proposition 5.1
Let \(T,S\in \mathfrak {a}{\mathbb N}\), let \(0<\tau <T\) and let
$$ \Lambda ':= ([-T,T]\times [-S,S]^c) \cap \mathfrak {a}{\mathbb Z}^2. $$
For all \(f\in L^2({\mathbb R})\) and for all \(c\in {\mathbb C}^\Lambda \) it holds that
$$\begin{aligned} & \Vert f-\mathcal {R}_\Lambda (c)\Vert _{L^2(-\tau ,\tau )} \le 3.1 \Bigg ( \Vert {\mathcal {G}}f- c\Vert _{\ell ^2(\Lambda )} + \Vert {\mathcal {G}}f\Vert _{\ell ^2(\Lambda ')} \nonumber \\ & \quad +\sqrt{\tau +1} \cdot e^{-\frac{\pi }{\sqrt{2}} (T-\tau )}\cdot \sup _{x\in {\mathbb R}}\Vert f \cdot T_x \varphi ^{\frac{1}{2}}\Vert _{L^2({\mathbb R})} \Bigg ). \end{aligned}$$
As a consequence we obtain Theorem 2.11.
Proof of Theorem 2.11
We just need to show that
$$\begin{aligned} \Vert {\mathcal {G}}f\Vert _{\ell ^2(\Lambda ')} \le 2.2 \sqrt{T+1} \cdot \kappa _S(f), \end{aligned}$$
(5.1)
which together with Proposition 5.1 implies the result. Let \(K,L\in {\mathbb N}\) be given by \(T=\mathfrak {a}K\) and \(S=\mathfrak {a}L\). Note that by applying Lemma 4.5 with \(r=\frac{\mathfrak {a}}{2}\) we get that, for all \(\lambda \in \Lambda '\),
$$ |{\mathcal {G}}f(\lambda )| \le 1.79 \Vert {\mathcal {G}}f\Vert _{L^2(B_r(\lambda ))}. $$
As a consequence,
$$\begin{aligned} \begin{aligned} \Vert {\mathcal {G}}f\Vert _{\ell ^2(\Lambda ')}^2&= \sum _{k=-K}^K \sum _{\begin{array}{c} \ell \in {\mathbb Z}\\ |\ell |\ge L+1 \end{array}} |{\mathcal {G}}f(\mathfrak {a}k,\mathfrak {a}\ell )|^2 \\&\le 1.79^2 \sum _{k=-K}^K \sum _{\begin{array}{c} \ell \in {\mathbb Z}\\ |\ell |\ge L+1 \end{array}} \Vert {\mathcal {G}}f\Vert _{L^2(B_r(\mathfrak {a}k,\mathfrak {a}\ell ))}^2. \end{aligned} \end{aligned}$$
For fixed k, for every \(|\ell |\ge L+1\) we have that
$$ B_r(\mathfrak {a}k,\mathfrak {a}\ell ) \subseteq [k\mathfrak {a}-r,k\mathfrak {a}+r]\times [-S,S]^c $$
while for \(\ell '\ne \ell \)
$$ B_r(\mathfrak {a}k,\mathfrak {a}\ell )\cap B_r(\mathfrak {a}k,\mathfrak {a}\ell ')=\emptyset . $$
Therefore, for all k the inner sum is bounded according to
$$ \sum _{\begin{array}{c} \ell \in {\mathbb Z}\\ |\ell |\ge L+1 \end{array}} \Vert {\mathcal {G}}f\Vert _{L^2(B_r(\mathfrak {a}k,\mathfrak {a}\ell ))}^2 \le 2r \kappa _S(f)^2 = \mathfrak {a} \cdot \kappa _S(f)^2, $$
and we get that
$$ \Vert {\mathcal {G}}f\Vert _{\ell ^2(\Lambda ')} \le 1.79\cdot \sqrt{2K+1} \cdot \sqrt{\mathfrak {a}}\, \kappa _S(f) $$
which implies (5.1).
We now turn to the proof of Proposition 5.1.

5.2 Proof of Proposition 5.1

We will denote \(I=(-\tau ,\tau )\). Recalling the reconstruction formula involving the dual window \(\psi \), we decompose
$$\begin{aligned} f&= \sum _{\lambda \in \mathfrak {a}{\mathbb Z}^2} {\mathcal {G}}f(\lambda )\cdot \pi (\lambda )\psi \\&= \sum _{\lambda \in \Lambda } {\mathcal {G}}f(\lambda )\cdot \pi (\lambda )\psi + \sum _{\lambda \in \Lambda '} {\mathcal {G}}f(\lambda )\cdot \pi (\lambda )\psi + \sum _{\begin{array}{c} k\in {\mathbb Z}\\ |k|> \frac{T}{\mathfrak {a}} \end{array}} g_k, \end{aligned}$$
where \(g_k:= \sum _{\ell \in {\mathbb Z}} {\mathcal {G}}f(\mathfrak {a}k,\mathfrak {a}\ell ) M_{\mathfrak {a}\ell }T_{\mathfrak {a}k}\psi \). Notice that each of the series converges unconditionally since it is a sub-series of an unconditionally convergent one. With this, we can write
$$ f-\mathcal {R}_\Lambda (c) = \mathcal {R}_\Lambda (({\mathcal {G}}f(\lambda )-c_\lambda )_{\lambda \in \Lambda }) + \sum _{\lambda \in \Lambda '} {\mathcal {G}}f(\lambda )\cdot \pi (\lambda )\psi + \sum _{\begin{array}{c} k\in {\mathbb Z}\\ |k|\ge \sqrt{2} T+1 \end{array}} g_k, $$
which implies that
$$\begin{aligned} & \Vert f-\mathcal {R}_\Lambda (c)\Vert _{L^2(I)} \le \big \Vert \mathcal {R}_\Lambda \big (({\mathcal {G}}f(\lambda )-c_\lambda )_{\lambda \in \Lambda }\big )\big \Vert _{L^2(I)}\\ & \quad + \left\| \sum _{\lambda \in \Lambda '}{\mathcal {G}}f(\lambda )\cdot \pi (\lambda )\psi \right\| _{L^2(I)} + \sum _{\begin{array}{c} k\in {\mathbb Z}\\ |k|\ge \sqrt{2} T+1 \end{array}} \Vert g_k\Vert _{L^2(I)}. \end{aligned}$$
We estimate each term on the right hand side separately.
It follows from Lemma 4.8 that the first term is bounded by
$$\begin{aligned} \big \Vert \mathcal {R}_\Lambda \big (({\mathcal {G}}f(\lambda )-c_\lambda )_{\lambda \in \Lambda }\big )\big \Vert _{L^2(I)}&=\left\| \sum _{\lambda \in \Lambda }({\mathcal {G}}f(\lambda )-c_\lambda ) \pi (\lambda )\psi \right\| _{L^2({\mathbb R})}\nonumber \\&\le 3.1 \left( \sum _{\lambda \in \Lambda } |{\mathcal {G}}f(\lambda )-c_\lambda |^2 \right) ^\frac{1}{2}. \end{aligned}$$
(5.2)
For the second term, Lemma 4.8 directly gives
$$\begin{aligned} \left\| \sum _{\lambda \in \Lambda '}{\mathcal {G}}f(\lambda )\cdot \pi (\lambda )\psi \right\| _{L^2(I)} \le 3.1 \Vert {\mathcal {G}}f\Vert _{\ell ^2(\Lambda ')}. \end{aligned}$$
(5.3)
Let \(k\in {\mathbb Z}\) be arbitrary but fixed, and let \(\gamma _\ell := {\mathcal {G}}f(k\mathfrak {a},\ell \mathfrak {a})\) so that
$$ g_k= \sum _{\ell \in {\mathbb Z}} \gamma _\ell M_{\ell \mathfrak {a}} T_{k\mathfrak {a}}\psi =m \cdot T_{k\mathfrak {a}}\psi , $$
where m is the \(\dfrac{1}{\mathfrak {a}}\)-periodic Fourier-series defined by
$$ m(t) = \sum _{\ell \in {\mathbb Z}} \gamma _\ell e^{2\pi i \mathfrak {a}\ell t}. $$
In particular, covering I with \(\lceil \mathfrak {a}|I|\rceil \) translates of \([0,1/{\mathfrak {a}}]\), we obtain
$$\begin{aligned} \Vert g_k\Vert _{L^2(I)}^2&\le \Vert m\Vert _{L^2(I)}^2\Vert T_{k\mathfrak {a}}\psi \Vert _{L^\infty (I)}^2\nonumber \\&\le \lceil \mathfrak {a} |I|\rceil \Vert m\Vert _{L^2([0,1/{\mathfrak {a}}])}^2\Vert T_{k\mathfrak {a}}\psi \Vert _{L^\infty (I)}^2\nonumber \\&=\frac{\lceil \mathfrak {a} |I|\rceil }{\mathfrak {a}}\sum _{\ell \in {\mathbb Z}} |\gamma _\ell |^2 \cdot \Vert T_{k\mathfrak {a}}\psi \Vert _{L^\infty (I)}^2 \nonumber \\&\le 2(\tau +1)\sum _{\ell \in {\mathbb Z}} |\gamma _\ell |^2 \cdot \Vert T_{k\mathfrak {a}}\psi \Vert _{L^\infty (I)}^2 \end{aligned}$$
(5.4)
where we have used Parseval’s relation for the \(L^2\)-norm of m and the fact that \(\dfrac{\lceil \mathfrak {a} |I|\rceil }{\mathfrak {a}}\le |I|+\dfrac{1}{\mathfrak {a}}=2\tau +\sqrt{2}\).
We will now estimate the \(\ell ^2\)-sum over \(\gamma _\ell \). To that end note that,
$$\begin{aligned} \gamma _\ell= & \langle f, M_{\ell \mathfrak {a}} T_{k\mathfrak {a}}\varphi \rangle _{L^2} = \langle f, e^{2\pi i \mathfrak {a}^2 k\ell } T_{k\mathfrak {a}} M_{\ell \mathfrak {a}} \varphi \rangle _{L^2}\\= & e^{-\pi i k\ell } \langle T_{-k\mathfrak {a}} f, M_{\ell \mathfrak {a}} \varphi \rangle _{L^2} =e^{-\pi i k\ell } \langle \varphi ^{\frac{1}{2}}\cdot T_{-k\mathfrak {a}} f, M_{\ell \mathfrak {a}} \varphi ^{\frac{1}{2}}\rangle _{L^2}\\= & e^{-\pi i k\ell } \langle {\mathcal {F}}(\varphi ^{\frac{1}{2}} \cdot T_{-k\mathfrak {a}} f), {\mathcal {F}}(M_{\ell \mathfrak {a}} \varphi ^{\frac{1}{2}})\rangle _{L^2}. \end{aligned}$$
Now, since \({\mathcal {F}}(\varphi ^{\frac{1}{2}})= 2^{\frac{1}{8}} \varphi ^2\), we obtain \({\mathcal {F}}(M_{\ell \mathfrak {a}} \varphi ^{\frac{1}{2}})=2^{\frac{1}{8}} T_{\ell \mathfrak {a}}\varphi ^2\) thus
$$ \gamma _\ell = 2^\frac{1}{8} e^{-\pi i k\ell } \langle {\mathcal {F}}(\varphi ^{\frac{1}{2}} T_{-k\mathfrak {a}} f), T_{\ell \mathfrak {a}} \varphi ^2 \rangle _{L^2}. $$
Since \(\varphi ^2(x)=2^{\frac{1}{2}} e^{-2\pi x^2}\), it follows from Lemma 4.9 that
$$\begin{aligned} \begin{aligned} \sum _{\ell \in {\mathbb Z}} |\gamma _\ell |^2\le & 2^{\frac{5}{4}} \frac{\vartheta _3(0,e^{-\frac{\pi }{2}})}{2}\big \Vert {\mathcal {F}}(\varphi ^{\frac{1}{2}} T_{-k\mathfrak {a}} f)\big \Vert _{L^2}^2\\= & 2^{\frac{1}{4}} \vartheta _3(0,e^{-\frac{\pi }{2}}) \Vert f\cdot T_{k\mathfrak {a}}\varphi ^\frac{1}{2}\Vert _{L^2}^2\\\le & 1.69 \cdot \sup _{x\in {\mathbb R}} \Vert f\cdot T_x\varphi ^\frac{1}{2}\Vert _{L^2}^2. \end{aligned} \end{aligned}$$
(5.5)
On the other hand, by Lemma 4.6 we have that \(|\psi (t)| \le e^{-\frac{\pi |t|}{\sqrt{2}}}\). Thus, if \(\mathfrak {a}|k|>T>\tau \)
$$ \Vert T_{k\mathfrak {a}}\psi \Vert _{L^\infty (I)} \le \exp \{-\frac{\pi }{\sqrt{2}} (\mathfrak {a}|k|-\tau )\}. $$
Putting this inequality together with (5.5) into (5.4), we obtain
$$\begin{aligned} \Vert g_k\Vert _{L^2(I)} \le \sqrt{2(\tau +1)} \cdot e^{-\frac{\pi }{\sqrt{2}}(\mathfrak {a}|k|-\tau )} \cdot \sqrt{1.69} \sup _{x\in {\mathbb R}} \Vert f\cdot T_x\varphi ^{\frac{1}{2}}\Vert _{L^2}. \end{aligned}$$
(5.6)
Further, we have that
$$\begin{aligned} & \sum _{\begin{array}{c} k\in {\mathbb Z}\\ |k|\ge \sqrt{2} T+1 \end{array}} e^{-\frac{\pi }{\sqrt{2}} (\mathfrak {a}|k|-\tau )} = 2 e^{\frac{\pi \tau }{\sqrt{2}}} \sum _{k=\sqrt{2} T+1}^{\infty } e^{-\frac{\pi }{2}k} = 2 e^{\frac{\pi \tau }{\sqrt{2}}} \cdot \frac{e^{-\frac{\pi }{2}(\sqrt{2} T+1)}}{1-e^{-\frac{\pi }{2}}} \\ & \quad = 0.52\ldots \cdot e^{-\frac{\pi }{\sqrt{2}}(T-\tau )}. \end{aligned}$$
Thus, summing (5.6), we get the upper bound
$$\begin{aligned} \sum _{\begin{array}{c} k\in {\mathbb Z}\\ |k|\ge \sqrt{2} T+1 \end{array}} \Vert g_k\Vert _{L^2(I)} \le \sqrt{\tau +1} \cdot e^{-\frac{\pi }{\sqrt{2}}(T-\tau )} \cdot \sup _{x\in {\mathbb R}} \Vert f\cdot T_x\varphi ^{\frac{1}{2}}\Vert _{L^2} \end{aligned}$$
(5.7)
By adding up the three error terms (5.2), (5.3) and (5.7) we obtain the result.

6 Determining Relative Phase Changes

Suppose we are given (possibly noisy) samples of the spectrogram, that is \(({\mathcal {S}}f(\lambda ))_{\lambda \in \Omega }\) with \(\Omega \subseteq {\mathbb R}^2\) a set of sampling positions. The objective of the present section is to find a way to evaluate the tensor
$$ {\mathcal {T}}_u[{\mathcal {G}}f](p)={\mathcal {G}}f(p+u)\overline{{\mathcal {G}}f(p)}. $$
If \(u=0\), we again find the spectrogram, that is, \({\mathcal {T}}_0[{\mathcal {G}}f](p) = {\mathcal {S}}f(p)\). However, the relevant case is to go beyond \(u=0\), as this is where the information about relative phase changes between the points p and \(p+u\) is stored.
We dedicate Sect. 6.1 to shedding some light on why \({\mathcal {E}}\) is defined the way it is and to outlining the further strategy. In Sect. 6.2 we analyze certain aspects regarding the continuity of \({\mathcal {E}}\) w.r.t. the argument G. In Sect. 6.3 we discuss implications for the task of estimating relative phase changes.

6.1 Motivation and Strategy

The purpose of this paragraph is to clarify the role of the evaluation operator \({\mathcal {E}}\). First we show that the spectrogram extends to an entire function of two variables.
Lemma 6.1
For \(f\in L^2({\mathbb R})\) let F be defined by
$$ F(z) = \mathcal {B}f(z_1-iz_2) (\mathcal {B}f)^*(z_1+iz_2) e^{-\pi z^2}, \quad z = \begin{pmatrix} z_1\\ z_2 \end{pmatrix} \in {\mathbb C}^2. $$
Then F is the entire extension of the spectrogram of f. That is to say, \(F\in \mathcal {O}({\mathbb C}^2)\) and \(F\big |_{{\mathbb R}^2}={\mathcal {S}}f\).
In the sequel, we will simply write \({\mathcal {S}}f\) instead of F.
Proof
That F is entire is obvious as it is a product of entire functions. From the relation between Gabor and Bargmann transform (4.4), it follows that for all \((x,y)\in {\mathbb R}^2\)
$$\begin{aligned} {\mathcal {G}}f(x,y)&= e^{-\pi i xy} \mathcal {B}f(x-iy) e^{-\frac{\pi }{2} (x^2+y^2)}, \end{aligned}$$
(6.1)
$$\begin{aligned} \overline{{\mathcal {G}}f(x,y)}&= e^{\pi i xy} (\mathcal {B}f)^*(x+iy) e^{-\frac{\pi }{2} (x^2+y^2)}. \end{aligned}$$
(6.2)
Thus, multiplying the two functions gives that
$$ {\mathcal {S}}f(x,y) = \mathcal {B}f(x-iy) (\mathcal {B}f)^*(x+iy) e^{-\pi (x^2+y^2)}. $$
This implies that \(F\big |_{{\mathbb R}^2}={\mathcal {S}}f\), and we are done.
The next result describes how evaluations of the function \({\mathcal {S}}f\) relate to the Gabor transform.
Lemma 6.2
Let \(f\in L^2({\mathbb R})\) and let \({\mathcal {S}}f\) be the holomorphic extension of the spectrogram of f as given in Lemma 6.1. For all \(p,u \in {\mathbb R}^2\) it holds that
$$ {\mathcal {T}}_u [{\mathcal {G}}f](p) = {\mathcal {E}}[{\mathcal {S}}f](p,u). $$
Proof
Let L and Q be defined as in Definition 2.2. With \(p=\begin{pmatrix} x\\ y \end{pmatrix}, u= \begin{pmatrix} a\\ b \end{pmatrix}\) we introduce \(\zeta := L(p,u) \in {\mathbb C}^2\). Notice that
$$ \zeta _1-i\zeta _2= (x-iy) + (a-ib), \qquad \zeta _1+i\zeta _2 = x+iy $$
and that
$$ \zeta _1^2 + \zeta _2^2 =(\zeta _1-i\zeta _2)(\zeta _1+i\zeta _2) = |p|^2 + (x+iy)(a-ib). $$
We use the identities (6.1) and (6.2) to evaluate
$$\begin{aligned} {\mathcal {S}}f(\zeta _1,\zeta _2)&= \mathcal {B}f(\zeta _1-i\zeta _2) (\mathcal {B}f)^*(\zeta _1+i\zeta _2) e^{-\pi (\zeta _1^2+\zeta _2^2)}\\&= \mathcal {B}f ((x+a)-i(y+b)) (\mathcal {B}f)^*(x-iy) e^{-\pi (|p|^2+(x+iy)(a-ib))}\\&= {\mathcal {G}}f(p+u) \overline{{\mathcal {G}}f(p)} \cdot e^{Q(p,u)}. \end{aligned}$$
This implies the claim.
Recall that the objective is to determine (or estimate) values of \({\mathcal {T}}_u [{\mathcal {G}}f](z)\). Assume for a moment we were given \({\mathcal {S}}f\), the holomorphic extension of the spectrogram, on all of \({\mathbb C}^2\). In this case, one simply has to plug in \({\mathcal {S}}f\) into \({\mathcal {E}}\) according to the preceding result.
In our situation however, the provided information is significantly weaker. First, we do not have direct access to \({\mathcal {S}}f\) outside of \({\mathbb R}^2\). While in theory, it is enough to know \({\mathcal {S}}f\) on \({\mathbb R}^2\) (or any open subset thereof) to determine it on all of \({\mathbb C}^2\), to perform this extension in practice is already a challenging task. To make things worse, we do not have access to \({\mathcal {S}}f\) on all of \({\mathbb R}^2\), but only to samples \(({\mathcal {S}}f(\lambda ))_{\lambda \in \Omega }\) with \(\Omega \subseteq {\mathbb R}^2\) finite. An additional difficulty is that those samples are corrupted by noise, but let us neglect this aspect for the moment.
As it turns out, \({\mathcal {S}}f\) not only extends holomorphically to \({\mathbb C}^2\), but this extension also has moderate growth with respect to the distance of a point \(z\in {\mathbb C}^2\) to \({\mathbb R}^2\). The following example demonstrates the importance of such growth control in a one-dimensional setting.
Example 6.3
Let us define a family of entire functions \(F_\varepsilon \in \mathcal {O}({\mathbb C})\), \(\varepsilon >0\) by virtue of
$$ F_\varepsilon (z) = \varepsilon e^{-\frac{iz}{\varepsilon }}, \quad z\in {\mathbb C}. $$
Then, each \(F_\varepsilon \) is \(\varepsilon \)-close to the zero function on \({\mathbb R}\subseteq {\mathbb C}\), that is \( \Vert F_\varepsilon - 0\Vert _{L^\infty ({\mathbb R})}=\varepsilon . \) However, if \(z=a+ib\) with \(b>0\) we have that
$$ |F_\varepsilon (z)-0| = \varepsilon e^{\frac{b}{\varepsilon }}, $$
which blows up as \(\varepsilon \rightarrow 0\). If we consider \(F_\varepsilon \) to be an estimator for the holomorphic extension of the zero function, we find that, while the approximation of the given data improves as \(\varepsilon \rightarrow 0\), the performance of the estimator degenerates.
Instead, we may look for \(F\in \mathcal {O}({\mathbb C})\) such that \(M_F\) is finite on [0, 2b]. If F is \(\varepsilon \)-close to 0 on \({\mathbb R}\), i.e., \(M_F(0)\le \varepsilon \), it follows from Hadamard’s Three Line Theorem that
$$ |F(a+ib) - 0| \le M_F(b) \le \sqrt{M_F(0) M_F(2b)} \le \sqrt{\varepsilon }\cdot \sqrt{M_F(2b)}. $$
This suggests that for F to be a reliable estimator, both properties are required: i) F approximates the zero function well on \({\mathbb R}\), and ii) F has moderate growth behaviour as quantified by \(M_F\).
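The following short numerical sketch (written in Python with numpy; it is not part of the original text and only serves as an illustration) contrasts the two situations: the functions \(F_\varepsilon \) of Example 6.3 blow up off the real line as \(\varepsilon \rightarrow 0\), whereas for a function of controlled growth, such as \(F(z)=e^{-z^2}\), the three line bound of Lemma 4.13 is effective.

import numpy as np

# Example 6.3: F_eps(z) = eps * exp(-i z / eps) satisfies sup_R |F_eps| = eps,
# yet |F_eps(a + i b)| = eps * exp(b / eps) blows up as eps -> 0.
b = 1.0
for eps in (0.5, 0.1, 0.05):
    print(f"eps = {eps}:  sup_R |F_eps| = {eps},  |F_eps(ib)| = {eps * np.exp(b / eps):.3e}")

# In contrast, for F(z) = exp(-z^2) one has M_F(r) = exp(r^2) (the sup over the
# lines |Im z| = r is attained at Re z = 0), and the three line bound
# M_F(b) <= sqrt(M_F(0) * M_F(2b)) of Lemma 4.13 holds.
def M_F(r, xs=np.linspace(-10.0, 10.0, 20001)):
    return np.abs(np.exp(-(xs + 1j * r) ** 2)).max()

print(M_F(b), np.sqrt(M_F(0.0) * M_F(2.0 * b)))   # approx. 2.718 and 7.389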
We can now outline the approach we take for the problem at hand. First we are going to construct a set of Ansatz functions \(\mathcal {A}\subset \mathcal {O}({\mathbb C}^2)\) obeying the following conditions
  • we want \(\mathcal {A}\) to be a convex set, so that we can formulate a convex problem. Furthermore, to make such a problem practically feasible, \(\mathcal {A}\) must be parameterized by finite dimensional objects; in our case the cone of positive semi-definite matrices of a certain size will play the role of the parameter space.
  • we need to make sure that \(\mathcal {A}\) has sufficient expressive capabilities; that is, that it contains at least one function F which meets both requirements. Namely, i) approximating \({\mathcal {S}}f\) well on \(\Omega \subseteq {\mathbb R}^2\) and ii) possessing beneficial growth behaviour of the type discussed above.
  • we need to set up the convex problem in a way that guarantees that its solution satisfies both of these requirements.
Once we have identified \(F\in \mathcal {A}\) with moderate growth which approximates the samples well, say
$$ \sup _{\lambda \in \Omega } |{\mathcal {S}}f(\lambda ) - F(\lambda )| \le \varepsilon $$
we want to use \(\mathcal {E}[F](p,u)\) (an object which we can access) to estimate \(\mathcal {E}[{\mathcal {S}}f](p,u) = {\mathcal {T}}_u[{\mathcal {G}}f](p)\) (an object which we want to access but cannot). As \({\mathcal {E}}\) is linear, the resulting estimation error is given by \( |{\mathcal {E}}[{\mathcal {S}}f - F](p,u)|.\) An application of Hadamard’s Three Line Theorem (Lemma 4.13) would then imply that
$$\begin{aligned} |{\mathcal {E}}[{\mathcal {S}}f - F](p,u)| \le C(p,u) \Vert {\mathcal {S}}f - F\Vert _{L^\infty ({\mathbb R}^2)}^{1/2} \end{aligned}$$
(6.3)
with \(C(p,u)\) also depending on the growth behaviour of \({\mathcal {S}}f - F\). Unfortunately, we have no direct way to make sure that F is close to \({\mathcal {S}}f\) on all of \({\mathbb R}^2\), since we only have access to the values of the latter on a finite subset \(\Omega \subseteq {\mathbb R}^2\). In the subsequent part, Sect. 6.2, we will employ sampling and cut-off arguments in order to establish an approximate version of (6.3). That is, an estimate of the form
$$ |{\mathcal {E}}[G](p,u)| \le C(p,u) \left( \Vert G\Vert _{\ell ^\infty (\Omega )}^{1/2} + \epsilon (\Omega ) \right) $$
with the error term \(\epsilon (\Omega )\) rapidly decaying as \(\Omega \) becomes richer.

6.2 Approximate Continuity of the Functional \({\mathcal {E}}\)

Throughout this section we shall employ the notation \(\gamma (z) = e^{-\frac{\pi }{8} z^2}\), \(z\in {\mathbb C}^2\). Moreover, we introduce the following subspace of entire functions
$$ \mathcal {O}^\infty ({\mathbb C}^d):= \{ G\in \mathcal {O}({\mathbb C}^d):\, M_G(r) <\infty , \forall r\ge 0\}. $$
In other words, \(\mathcal {O}^\infty ({\mathbb C}^d) \) is the set of holomorphic functions that are in \(H^\infty (T_r)\) for every tube \(T_r=\{x+iy\in {\mathbb C}^d:\ x,y\in {\mathbb R}^d,\ |y|<r\}\). For \(G\in \mathcal {O}^\infty ({\mathbb C}^2)\), we define a Gaussian cut-off at \(\tau \in {\mathbb R}^2\) by
$$ G_\tau (z):= G(z) \gamma (z-\tau ),\quad z\in {\mathbb C}^2. $$
The term “Gaussian cut-off” of course refers to the restriction of \(G_\tau \) to \({\mathbb R}^2\). Note that this restriction is in \(L^1({\mathbb R}^2)\) so that its Fourier transform \(\widehat{G_\tau }\) makes sense. We are interested in functions with controlled smoothness of cut-off \(G_\tau \) and introduce
$$\begin{aligned} \mathcal {V}(C,K):= \{G\in \mathcal {O}^\infty ({\mathbb C}^2): \, |\widehat{G_\tau }| \le K e^{-C|\cdot |^2},\,\forall \tau \in {\mathbb R}^2\}. \end{aligned}$$
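As a simple sanity check (not taken from the original text; we assume the Fourier transform normalisation \(\widehat{g}(\xi )=\int _{{\mathbb R}^2} g(x)e^{-2\pi i x\cdot \xi }\,\text{ d }x\)), consider the constant function \(G\equiv 1\in \mathcal {O}^\infty ({\mathbb C}^2)\). Then \(G_\tau (x)=\gamma (x-\tau )=e^{-\frac{\pi }{8}|x-\tau |^2}\) for \(x\in {\mathbb R}^2\), and therefore
$$ \widehat{G_\tau }(\xi ) = 8\, e^{-2\pi i \tau \cdot \xi }\, e^{-8\pi |\xi |^2}, \qquad |\widehat{G_\tau }(\xi )| \le 8\, e^{-8\pi |\xi |^2} \quad \text{for all } \tau \in {\mathbb R}^2, $$
so that \(G\in \mathcal {V}(8\pi ,8)\). Lemma 6.6 below carries out the analogous (but considerably more involved) computation for the Ansatz functions \(F_A\).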
The main result of the present section is the following continuity estimate for the evaluation operator \({\mathcal {E}}\).
Proposition 6.4
Let \(s,C,K>0\) and let \(\Omega \subseteq s{\mathbb Z}^2\). Suppose that \(p,u\in {\mathbb R}^2\) are such that
$$ {{\,\textrm{dist}\,}}\left( p+\frac{1}{2} u, s{\mathbb Z}^2{\setminus } \Omega \right) \ge \sqrt{\frac{C}{2\pi }} s^{-1}. $$
Then it holds for all \(G\in \mathcal {V}(C,K)\) that
$$\begin{aligned} |\mathcal {E}[G](p,u)|\le & 8.6 \left( \Vert G\Vert _{\ell ^\infty (\Omega )}^{1/2} + \left( \sqrt{M_G(0)} + 1.8 \sqrt{\frac{K}{C}} \right) e^{-\frac{C}{32 s^2}} \right) \\ & \quad \times \sqrt{M_G(|u|)} e^{-\frac{15\pi }{32} u^2}. \end{aligned}$$
Proof
Let \(\tau =p+\frac{1}{2} u\) and consider \(G_\tau (z)= G(z)\gamma (z-\tau )\).
First we rewrite \({\mathcal {E}}[G](p,u)\) in terms of \(G_\tau \): Let LQ be defined as in Definition 2.2. Note that \({\text {Re}}Q(p,u) = -\frac{\pi }{2} u^2\) and set
$$ \zeta _*:= L(p,u) = p + \frac{1}{2} \begin{pmatrix} 1 & -i\\ i& 1 \end{pmatrix} u. $$
Note that \(\zeta _*-\tau =\frac{i}{2} \begin{pmatrix} 0& -1\\ 1 & 0 \end{pmatrix}u\). Thus,
$$\begin{aligned} |{\mathcal {E}}[G](p,u)|&= |G(\zeta _*)| e^{-\frac{\pi }{2} u^2} \nonumber \\&= |G_\tau (\zeta _*)| \cdot \exp \left\{ \frac{\pi }{8} {\text {Re}}\{(\zeta _*-\tau )^2\} - \frac{\pi }{2} u^2 \right\} \nonumber \\&= |G_\tau (\zeta _*)| \cdot \exp \left\{ -\frac{17\pi }{32}u^2\right\} . \end{aligned}$$
(6.4)
Next, we apply Hadamard’s Three Line Theorem (Lemma 4.13, to be precise) to \(G_\tau \). To that end, note that \(G_\tau \in \mathcal {O}^\infty ({\mathbb C}^2)\) since \(G\in \mathcal {O}^\infty ({\mathbb C}^2)\). As \(|{\text {Im}}\zeta _*|=\frac{|u|}{2}\), we get that
$$ |G_\tau (\zeta _*)| \le M_{G_\tau }(|u|/2) \le \sqrt{M_{G_\tau }(0) M_{G_\tau }(|u|)} \le \Vert G_\tau \Vert _{L^\infty ({\mathbb R}^2)}^{1/2} \sqrt{M_{G_\tau }(|u|)}. $$
Moreover, as \(|\gamma (\zeta -\tau )| = \exp \left\{ -\frac{\pi }{8} ({\text {Re}}(\zeta )-\tau )^2 +\frac{\pi }{8} ({\text {Im}}\zeta )^2 \right\} \) we find that
$$\begin{aligned} M_{G_\tau }(r) = \sup _{|{\text {Im}}\zeta | = r} |G(\zeta ) \gamma (\zeta -\tau )| \le e^{\frac{\pi }{8} r^2} \cdot M_G(r). \end{aligned}$$
(6.5)
Next we want to replace the term \(\Vert G_\tau \Vert _{L^\infty ({\mathbb R}^2)}\) by its discrete variant. Applying Lemma 4.11 gives that
$$\begin{aligned} \Vert G_\tau \Vert _{L^\infty ({\mathbb R}^2)} \le 73 \Vert G_\tau \Vert _{\ell ^\infty (s{\mathbb Z}^2)} + 233 \frac{K}{C}e^{-\frac{C}{16s^2}}. \end{aligned}$$
Suppose that \(\lambda _0\in \Omega ^c:= s{\mathbb Z}^2\setminus \Omega \). By assumption we have that
$$ |\tau -\lambda _0| \ge \sqrt{\frac{C}{2\pi }} s^{-1}, $$
which implies that
$$ |G_\tau (\lambda _0)| = |G(\lambda _0)| e^{-\frac{\pi }{8} |\lambda _0-\tau |^2} \le M_G(0) \cdot e^{-\frac{C}{16\,s^2}}. $$
In particular, as \(\Vert G_\tau \Vert _{\ell ^\infty (s{\mathbb Z}^2)}\le \Vert G_\tau \Vert _{\ell ^\infty (\Omega )} + \Vert G_\tau \Vert _{\ell ^\infty (\Omega ^c)}\), we have that
$$\begin{aligned} \Vert G_\tau \Vert _{L^\infty ({\mathbb R}^2)} \le 73 \left( \Vert G_\tau \Vert _{\ell ^\infty (\Omega )} + \left( M_G(0)+ \frac{233 K}{73 C} \right) e^{-\frac{C}{16s^2}} \right) . \end{aligned}$$
(6.6)
It remains to combine the estimates:
$$\begin{aligned}&|{\mathcal {E}}[G](p,u)|\\&{\mathop {\le }\limits ^{(6.4)}} |G_\tau (\zeta _*)| \cdot \exp \left\{ -\frac{17\pi }{32} u^2 \right\} \\&{\mathop {\le }\limits ^{(6.5)}} \sqrt{M_G(|u|)} \cdot \Vert G_\tau \Vert _{L^\infty ({\mathbb R}^2)}^{1/2} \cdot \exp \left\{ -\frac{15\pi }{32} u^2 \right\} \\&{\mathop {\le }\limits ^{(6.6)}} \sqrt{73} \sqrt{M_G(|u|)} \left( \Vert G_\tau \Vert _{\ell ^\infty (\Omega )} + \left( M_G(0)+ \frac{233 K}{73 C} \right) e^{-\frac{C}{16s^2}} \right) ^{1/2}\cdot e^{-\frac{15\pi }{32} u^2}\\&\le 8.6 \left( \Vert G\Vert _{\ell ^\infty (\Omega )}^{1/2} + \left( \sqrt{M_G(0)} + 1.8 \sqrt{\frac{K}{C}} \right) e^{-\frac{C}{32 s^2}} \right) \cdot \sqrt{M_G(|u|)} e^{-\frac{15\pi }{32} u^2} \end{aligned}$$
This finishes the proof.

6.3 Implications

Recall the objective from the beginning of the section.
Given samples \(({\mathcal {S}}f(\lambda ))_{\lambda \in \Omega }\) we want to estimate relative phase changes
$$ {\mathcal {G}}f(p+u) \overline{{\mathcal {G}}f(p)} ={\mathcal {E}}[{\mathcal {S}}f](p,u). $$
The basic idea is – since \({\mathcal {S}}f\) is not directly available – to identify a suitable dummy \(F\in \mathcal {O}({\mathbb C}^2)\) in place of \({\mathcal {S}}f\), and use \({\mathcal {E}}[F]\) as an estimator for relative phase changes. This raises several questions. Most importantly, how to actually identify F and how accurate is the resulting estimator? The remainder of this paragraph is aimed at resolving these questions. To do that we first introduce a finite-dimensional cone of Ansatz functions, then formulate an associated convex problem (CP) and finally argue that the solution of this CP gives rise to an accurate estimator.
Throughout, let \(\Gamma \subseteq \mathfrak {a}{\mathbb Z}^2\) denote a finite set. The following lemma clarifies the relation between \(F_A\) and the Gabor transform.
Lemma 6.5
Let \(N=|\Gamma |\) and let \(a=(a_\lambda )_{\lambda \in \Gamma }\in {\mathbb C}^\Gamma \). The following properties hold:
i)
\(\Phi _{\lambda ,\mu }\) is the entire extension of \({\mathcal {G}}[\pi (\lambda )\varphi ]\overline{{\mathcal {G}}[\pi (\mu )\varphi ]}\). That is, for all \(p=(x,\omega )\in {\mathbb R}^2\subseteq {\mathbb C}^2\) it holds that
$$ \Phi _{\lambda ,\mu }(p) = {\mathcal {G}}[\pi (\lambda )\varphi ](x,\omega ) \cdot \overline{{\mathcal {G}}[\pi (\mu )\varphi ](x,\omega )}. $$
 
ii)
With \(A=a\otimes \bar{a}\) and \(f=\sum _{\lambda \in \Gamma }a_\lambda \pi (\lambda )\varphi \), we have that \(F_A\) is the entire extension of \({\mathcal {S}}f\). That is, for all \(p=(x,\omega )\in {\mathbb R}^2\subseteq {\mathbb C}^2\) it holds that
$$ F_A(p) = {\mathcal {S}}f(x,\omega ). $$
 
iii)
Suppose \(A\in \mathfrak {A}_+(\Gamma )\) with eigenvalues \(\alpha _1,\ldots ,\alpha _N\ge 0\) and with corresponding orthonormal basis of eigenvectors \(u_1,\ldots ,u_N\in {\mathbb C}^\Gamma \). Let
$$ f_k:= \sum _{\lambda \in \Gamma } u_k(\lambda ) \pi (\lambda )\varphi ,\quad k\in \{1,\ldots ,N\}. $$
Then, \(F_A\) is the entire extension of \(\sum _{k=1}^N \alpha _k {\mathcal {S}}f_k\). That is, for all \(p=(x,\omega )\in {\mathbb R}^2\subseteq {\mathbb C}^2\) it holds that
$$ F_A(p) = \sum _{k=1}^N \alpha _k {\mathcal {S}}f_k(x,\omega ). $$
 
Proof
First note that the first statement implies the second one. For this, one simply writes \({\mathcal {S}}f={\mathcal {G}}f\cdot \overline{{\mathcal {G}}f}\) and uses linearity of \(f\mapsto {\mathcal {G}}f\). The second statement then implies the third one by linearity of \(A\mapsto F_A\) once one writes \(\displaystyle A=\sum _{k=1}^N \alpha _k u_k\otimes \overline{u_k}\). Recalling (4.2), for \(\lambda =(a,b)\) we have that
$$\begin{aligned} {\mathcal {G}}[\pi (\lambda )\varphi ](x,\omega ) = e^{\pi i ab} e^{-\pi ix\omega } e^{\pi i (xb-\omega a)} e^{-\frac{\pi }{2} |z-\lambda |^2}, \end{aligned}$$
where we write \(z=(x,\omega )\in {\mathbb R}^2\).
An elementary computation shows that
$$|z-\lambda |^2+|z-\mu |^2 = 2 \left| z-\frac{\lambda +\mu }{2} \right| ^2 + \frac{|\lambda -\mu |^2}{2}.$$
Hence, with \(\mu =(a',b')\) we get that
$$\begin{aligned} \big ({\mathcal {G}}[\pi (\lambda )\varphi ] \cdot \overline{{\mathcal {G}}[\pi (\mu )\varphi ]}\big ) (z)&= e^{\pi i(ab-a'b')} e^{\pi i [x (b-b')-\omega (a-a')]} e^{-\frac{\pi }{2} (|z-\lambda |^2+|z-\mu |^2)}\\&= e^{\pi i(ab-a'b')} e^{\pi i z \cdot \mathcal {J}(\lambda -\mu )} e^{-\frac{\pi }{4} |\lambda -\mu |^2} e^{-\pi \left| z-\frac{\lambda +\mu }{2}\right| ^2} \end{aligned}$$
which coincides with \(\Phi _{\lambda ,\mu }(z)\).
Later on we want to apply Proposition 6.4 to \(G=F_A-{\mathcal {S}}f\). Before that, we show how, in this case, the relevant quantities appearing on the right-hand side can be controlled in terms of the parameterizing matrix A.
Lemma 6.6
Let \(A\in \mathfrak {A}_+(\Gamma )\). Moreover, let
$$ C = \frac{8\pi }{17}\quad \text {and}\quad K=272 \cdot \max _{\lambda \in \Gamma } A_{\lambda ,\lambda } $$
Then it holds that \(F_A\in \mathcal {V}(C,K)\) and moreover that
$$ M_{F_A}(r) \le 64.1 e^{2\pi r^2}\cdot \max _{\lambda \in \Gamma } A_{\lambda ,\lambda }, \quad r\ge 0. $$
Proof
Committing a slight abuse of notation we denote the restrictions of \(F_A\), \(\gamma \) and \(\Phi _{\lambda ,\mu }\) to \({\mathbb R}^2\) by the same symbols. For the first statement, we need to show for arbitrary \(\tau \in {\mathbb R}^2\) that
$$ |{\mathcal {F}}[F_A \cdot \gamma (\cdot -\tau )](\xi )| \le K e^{-C|\xi |^2},\,\quad \xi \in {\mathbb R}^2. $$
Let \(\lambda ,\mu \in {\mathbb R}^2\) be arbitrary but fixed. We set
$$ p =p(\lambda ,\mu )=\frac{4\lambda +4\mu +\tau }{9} \quad \text {and} \quad q=q(\lambda ,\mu )= \frac{\lambda +\mu -2\tau }{6}, $$
so that, after an elementary computation,
$$ \big ( x-\frac{\lambda +\mu }{2} \big )^2 + \frac{1}{8} \big ( x-\tau \big )^2 = \frac{9}{8}(x-p)^2 + q^2. $$
With this we rewrite
$$\begin{aligned} \Phi _{\lambda ,\mu }(x) \gamma (x-\tau )&= C(\lambda ,\mu ) e^{i\pi [\mathcal {J}(\lambda -\mu )]\cdot x}\cdot e^{-\pi \big (x-\frac{\lambda +\mu }{2} \big )^2} \cdot e^{-\frac{\pi }{8} (x-\tau )^2}\\&= C(\lambda ,\mu ) e^{-\pi q^2} \cdot \left( M_{\frac{1}{2} \mathcal {J}(\lambda -\mu )} T_p [e^{-\frac{9}{8} \pi \cdot ^2}]\right) (x). \end{aligned}$$
As \({\mathcal {F}}[e^{-\frac{9}{8} \pi \cdot ^2}](\xi ) = \frac{8}{9} e^{-\frac{8\pi }{9} \xi ^2}\), we get that the Fourier transform of the above function is given by
$$\begin{aligned} {\mathcal {F}}[\Phi _{\lambda ,\mu }(\cdot ) \gamma (\cdot -\tau )] (\xi ) = \frac{8}{9} C(\lambda ,\mu ) e^{-\pi q^2} \cdot \left( T_{\frac{1}{2} \mathcal {J}(\lambda -\mu )} M_{-p} [e^{-\frac{8}{9} \pi \cdot ^2}]\right) (\xi ). \end{aligned}$$
With this, application of the triangle inequality yields
$$\begin{aligned} & |{\mathcal {F}}[F_A(\cdot ) \gamma (\cdot -\tau )](\xi )| \le \Vert A\Vert _{\max } \sum _{\lambda ,\mu \in \Gamma } |{\mathcal {F}}[\Phi _{\lambda ,\mu } \gamma (\cdot -\tau )](\xi )| \\ & \quad \le \frac{8}{9} \Vert A\Vert _{\max } \sum _{\lambda ,\mu \in \mathfrak {a}{\mathbb Z}^2} e^{ -\frac{\pi }{4} (\lambda -\mu )^2 - \frac{\pi }{36} (\lambda +\mu -2\tau )^2 -\frac{8\pi }{9} \left( \xi - \frac{1}{2}\mathcal {J}(\lambda -\mu ) \right) ^2}. \end{aligned}$$
Note that \(\mathcal {J}^2=-I\), that \(\mathcal {J}^{-1}=-\mathcal {J}\) and that
$$ {\left\{ \begin{array}{ll} \mathfrak {a}{\mathbb Z}^2 \times \mathfrak {a}{\mathbb Z}^2 \quad & \rightarrow \quad \mathfrak {a}{\mathbb Z}^2\times \mathfrak {a}{\mathbb Z}^2\\ (\lambda ,\mu ) \quad & \mapsto \quad (\lambda +\mu ,\lambda -\mu ) \end{array}\right. } $$
is injective (but not onto!). Thus, substituting \(u=\lambda +\mu \) and \(v=\lambda -\mu \) and \(\xi ' = \mathcal {J}\xi \) allows us to estimate the above sum from above by
$$\begin{aligned}&\sum _{u,v\in \mathfrak {a}{\mathbb Z}^2} \exp \left\{ -\frac{\pi }{4} v^2 -\frac{\pi }{36} (u-2\tau )^2 -\frac{8\pi }{9} (\xi -\frac{1}{2} \mathcal {J} v)^2\right\} \nonumber \\ =&\sum _{u,v\in \mathfrak {a}{\mathbb Z}^2} \exp \left\{ -\frac{\pi }{4} v^2 -\frac{\pi }{36} (u-2\tau )^2 -\frac{8\pi }{9} (\xi '-\frac{v}{2} )^2\right\} \nonumber \\ =&\left( \sum _{u\in \mathfrak {a}{\mathbb Z}^2} e^{-\frac{\pi }{36} (u-2\tau )^2}\right) \left( \sum _{v\in \mathfrak {a}{\mathbb Z}^2} e^{-\frac{\pi }{4} v^2 -\frac{8\pi }{9} (\xi '-\frac{v}{2})^2} \right) . \end{aligned}$$
(6.7)
As per Lemma 4.2 we have that
$$\begin{aligned} \sum _{u\in \mathfrak {a}{\mathbb Z}^2} e^{-\frac{\pi }{36} (u-2\tau )^2} \le \left( \sup _{t\in {\mathbb R}} \sum _{k\in {\mathbb Z}} e^{-\frac{\pi }{72} (k-t)^2 } \right) ^2 \le 72\vartheta _3(0, e^{-72\pi })^2. \end{aligned}$$
We rewrite the second sum in (6.7) as
$$\begin{aligned} \sum _{v\in \mathfrak {a}{\mathbb Z}^2} e^{-\frac{\pi }{4} v^2 -\frac{8\pi }{9} (\xi '-\frac{v}{2})^2} = \prod _{\ell =1}^2 \sum _{k\in {\mathbb Z}} \exp \left\{ -\frac{\pi }{8} k^2 -\frac{\pi }{9} \left( k - 2\sqrt{2} \xi '_\ell \right) ^2 \right\} . \end{aligned}$$
We consider for \(t\in {\mathbb R}\) arbitrary
$$\begin{aligned} \begin{aligned} \sum _{k\in {\mathbb Z}} \exp \left\{ -\frac{\pi }{8} k^2 -\frac{\pi }{9} \left( k - t\right) ^2 \right\}&= e^{-\frac{\pi }{17} t^2} \sum _{k\in {\mathbb Z}} \exp \{ -\frac{17}{72}\pi (k-\frac{8}{17}t)^2\}\\&\le e^{-\frac{\pi }{17}t^2} \cdot \sqrt{\frac{72}{17}} \vartheta _3(0, e^{-\frac{72}{17}\pi }), \end{aligned} \end{aligned}$$
where we once more made use of Lemma 4.2. Plugging in \(t=2\sqrt{2} \xi '_\ell \), \(\ell \in \{1,2\}\), we obtain by combining the above estimates that
$$\begin{aligned} & |{\mathcal {F}}[F_A(\cdot ) \gamma (\cdot -\tau )](\xi )| \\ & \quad \le \frac{8}{9} \Vert A\Vert _{\max } \cdot \frac{72^2}{17} \cdot \vartheta _3(0, e^{-72\pi })^2 \cdot \vartheta _3(0, e^{-\frac{72}{17}\pi })^2 \cdot e^{-\frac{8\pi }{17}\xi ^2 }. \end{aligned}$$
Recall that a positive semi-definite matrix attains its maximal absolute entry on the diagonal: \(\Vert A\Vert _{\max } =\max _{\lambda \in \Gamma } A_{\lambda ,\lambda }\). Finally, as
$$ \frac{8}{9} \cdot \frac{72^2}{17} \cdot \vartheta _3(0, e^{-72\pi })^2 \cdot \vartheta _3(0, e^{-\frac{72}{17}\pi })^2<272, $$
we find that
$$\begin{aligned} |{\mathcal {F}}[F_A \cdot \gamma (\cdot -\tau )](\xi )| \le 272 \left( \max _{\lambda \in \Gamma } A_{\lambda ,\lambda }\right) e^{-\frac{8\pi }{17}\xi ^2}, \end{aligned}$$
which proves the first statement.
To prove the second statement, let \(\zeta \in {\mathbb C}^2\) such that \(|{\text {Im}}\zeta |=r\). We apply the triangle inequality to obtain
$$\begin{aligned} & |F_A(\zeta )| \le \Vert A\Vert _{\max } \cdot \sum _{\lambda ,\mu \in \Gamma } |\Phi _{\lambda ,\mu }(\zeta )| \nonumber \\ & \quad = \Vert A\Vert _{\max } \cdot \sum _{\lambda ,\mu \in \Gamma } e^{- \frac{\pi }{4} (\lambda -\mu )^2 -\pi {\text {Im}}\zeta \cdot \mathcal {J}(\lambda -\mu ) - \pi \left( {\text {Re}}\zeta -\frac{\lambda +\mu }{2}\right) ^2 + \pi ({\text {Im}}\zeta )^2 }. \end{aligned}$$
We substitute again \(v=\lambda -\mu \) and \(u=\lambda +\mu \), and notice that
$$\begin{aligned} -\frac{\pi }{4} v^2-\pi {\text {Im}}\zeta \cdot \mathcal {J}v+\pi ({\text {Im}}\zeta )^2&= -\pi \left( {\text {Im}}\zeta + \frac{1}{2} \mathcal {J}v \right) ^2 + 2\pi ({\text {Im}}\zeta )^2 \\&\le -\pi \left( {\text {Im}}\zeta + \frac{1}{2} \mathcal {J}v \right) ^2 + 2\pi r^2. \end{aligned}$$
With this (and recalling that \((\lambda ,\mu )\mapsto (\lambda +\mu ,\lambda -\mu )\) is injective on \(\mathfrak {a}{\mathbb Z}^2 \times \mathfrak {a}{\mathbb Z}^2\)),
$$\begin{aligned} |F_A(\zeta )| \le \Vert A\Vert _{\max } \cdot e^{2\pi r^2} \cdot \sum _{v\in \mathfrak {a}{\mathbb Z}^2} e^{-\pi \left( {\text {Im}}\zeta + \frac{1}{2} \mathcal {J}v\right) ^2} \cdot \sum _{u\in \mathfrak {a}{\mathbb Z}^2} e^{-\pi \left( {\text {Re}}\zeta -\frac{1}{2} u \right) ^2}. \end{aligned}$$
(6.8)
To bound the first sum on the right hand side we proceed similarly as before and resort to a one dimensional sum:
$$\begin{aligned} & \sum _{v\in \mathfrak {a}{\mathbb Z}^2} e^{-\pi \left( {\text {Im}}\zeta + \frac{1}{2} \mathcal {J}v\right) ^2} \le \left( \sup _{t\in {\mathbb R}} \sum _{k\in {\mathbb Z}} e^{-\pi (t+ \frac{k}{2\sqrt{2}})^2} \right) ^2 = \left( \sup _{t\in {\mathbb R}} \sum _{k\in {\mathbb Z}} e^{-\frac{\pi }{8}(t+ k)^2} \right) ^2\\ & \quad \le 8 \vartheta _3(0,e^{-8\pi })^2, \end{aligned}$$
where the last inequality is again a consequence of Lemma 4.2. An analogous argument shows that the second sum in (6.8) is upper bounded by the same quantity. Hence,
$$\begin{aligned} |F_A(\zeta )| \le \Vert A\Vert _{\max } e^{2\pi r^2} \cdot 8^2 \vartheta _3(0,e^{-8\pi })^4 = \Vert A\Vert _{\max } e^{2\pi r^2} \cdot 64.0\ldots \end{aligned}$$
Since \(\zeta \) with \(|{\text {Im}}\zeta |=r\) was arbitrary, we are done.
A similar estimate holds true for the holomorphic extension of a spectrogram.
Lemma 6.7
Let \(f\in L^2({\mathbb R})\), with \({\mathcal {S}}f\) (the entire extension of) its spectrogram. Then it holds that
$$ M_{{\mathcal {S}}f}(r) \le \Vert {\mathcal {S}}f\Vert _{L^\infty ({\mathbb R}^2)} \cdot e^{2\pi r^2}, \quad r\ge 0. $$
Proof
From the definition of \({\mathcal {S}}f\) in terms of \(\mathcal {B}f\) in Lemma 6.1, we obtain the estimate
$$\begin{aligned} |{\mathcal {S}}f(z)|= & |\mathcal {B}f(z_1-iz_2)| \cdot |\mathcal {B}f (\overline{z_1+i z_2})| \cdot e^{-\pi ({\text {Re}}(z_1^2)+{\text {Re}}(z_2^2))}\\ & \quad \le \Vert {\mathcal {G}}f\Vert _{L^\infty }^2 \cdot \exp \left\{ \frac{\pi }{2} |z_1-iz_2|^2 + \frac{\pi }{2} |z_1+iz_2|^2 -\pi \bigl (|{\text {Re}}(z)|^2 - |{\text {Im}}(z)|^2\bigr )\right\} \end{aligned}$$
where we expressed \(\mathcal {B}f\) in terms of \({\mathcal {G}}f\) via (4.4). But \(\Vert {\mathcal {G}}f\Vert _{L^\infty }^2=\Vert {\mathcal {S}}f\Vert _{L^\infty ({\mathbb R}^2)}\) and
$$ |z_1-iz_2|^2+|z_1+iz_2|^2=2|z_1|^2+2|z_2|^2=2|{\text {Re}}(z)|^2+2|{\text {Im}}(z)|^2. $$
It follows that
$$ |{\mathcal {S}}f(z_1,z_2)| \le \Vert {\mathcal {S}}f\Vert _{L^\infty ({\mathbb R}^2)} e^{2\pi |{\text {Im}}(z)|^2}, $$
which implies the claim.
Evaluation of \(F_A\) can be expressed in terms of a matrix inner product. In view of Algorithm 1, this fact is relevant in order to see that the CP in step 1 can be rephrased as an SDP, as well as for the practical implementation of step 1.
Lemma 6.8
Let \(\Gamma \subseteq \mathfrak {a}{\mathbb Z}^2\) be finite, let \(p=(x,y)\in {\mathbb R}^2\subseteq {\mathbb C}^2\) and let \(v\in {\mathbb C}^\Gamma \) be defined by
$$ v_\lambda = {\mathcal {G}}[\pi (\lambda )\varphi ](p)= \exp \left\{ -\frac{\pi }{2}(p-\lambda )^2 -\pi i (x+a)(y-b)\right\} ,\quad \lambda =(a,b)\in \Gamma . $$
Then it holds for all \(A\in \mathfrak {A}_+(\Gamma )\) that
$$ F_A(p) = {{\,\textrm{tr}\,}}( \bar{v}\otimes v \cdot A) = \langle A, v \otimes \bar{v}\rangle _F. $$
Proof
Since \(A\mapsto F_A(p)\) is linear it suffices to consider the rank one case \(A= a\otimes \bar{a}\). In this case we have that
$$\begin{aligned} F_A(p)= & {\mathcal {S}}[\sum _{\lambda \in \Gamma } a_\lambda \pi (\lambda )\varphi ](p) = \big | \sum _{\lambda \in \Gamma } a_\lambda {\mathcal {G}}[\pi (\lambda )\varphi ](p) \big |^2 = \big | \sum _{\lambda \in \Gamma } a_\lambda v_\lambda \big |^2 \nonumber \\= & (a \cdot v)(\overline{a \cdot v}) = {{\,\textrm{tr}\,}}( a^H \bar{v} v^T a) = {{\,\textrm{tr}\,}}([\bar{v}\otimes v ] [a\otimes \bar{a}]), \end{aligned}$$
which implies the claim.
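The identity of Lemma 6.8 is easy to check numerically. The following sketch (Python with numpy; not part of the original text, with the lattice patch, evaluation point and coefficient vector chosen arbitrarily) verifies it in the rank-one case \(A=a\otimes \bar{a}\).

import numpy as np

rng = np.random.default_rng(0)

# small lattice patch Gamma inside a*Z^2, with a = 1/sqrt(2)
afrak = 1.0 / np.sqrt(2.0)
Gamma = np.array([(afrak * j, afrak * k) for j in range(-2, 3) for k in range(-2, 3)])
N = len(Gamma)

p = np.array([0.3, -0.1])          # evaluation point p = (x, y)
x, y = p
a_, b_ = Gamma[:, 0], Gamma[:, 1]  # lambda = (a, b)

# v_lambda = G[pi(lambda) phi](p), cf. the formula in Lemma 6.8
v = np.exp(-0.5 * np.pi * ((x - a_) ** 2 + (y - b_) ** 2)
           - 1j * np.pi * (x + a_) * (y - b_))

coeff = rng.standard_normal(N) + 1j * rng.standard_normal(N)
A = np.outer(coeff, coeff.conj())              # rank-one A = a (x) conj(a)

lhs = np.trace(np.outer(v.conj(), v) @ A)      # tr( (vbar (x) v) A )
rhs = abs(np.dot(coeff, v)) ** 2               # |sum_lambda a_lambda v_lambda|^2
print(np.allclose(lhs, rhs))                   # True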
In the following, let \(f\in L^2({\mathbb R})\) and let \(s>0\). Moreover, let \(\Omega \subseteq s{\mathbb Z}^2\) and \(\Gamma \subseteq \mathfrak {a}{\mathbb Z}^2\) be finite sets. Given \(p\in {\mathbb R}^2\), we use the notation \(W_p=v\otimes \bar{v}\) with v defined as in Lemma 6.8.
Definition 6.9
(Associated Convex Problem (ACP)) Given a triple \((f,\Gamma ,\Omega )\) as above, and a tolerance parameter \(\varepsilon >0\) we define the Admissible Set as
$$ Adm_\varepsilon (f,\Gamma ,\Omega ) = \{A\in \mathfrak {A}_+(\Gamma ): \, |\langle A,W_p\rangle _F - {\mathcal {S}}f(p)| \le \varepsilon ,\, p\in \Omega \}. $$
The Associated Convex Problem (ACP) is then
$$\begin{aligned} \min _{A\in Adm_\varepsilon (f,\Gamma ,\Omega )} \quad \max _{\lambda \in \Gamma } A_{\lambda ,\lambda } \end{aligned}$$
(6.9)
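For concreteness, the following minimal sketch (not part of the paper; it assumes the cvxpy package with a conic solver such as SCS, and hypothetical inputs W_list and spec_samples containing the matrices \(W_p\) and the samples \({\mathcal {S}}f(p)\), \(p\in \Omega \)) indicates how (6.9) can be posed as a semidefinite program.

import cvxpy as cp
import numpy as np

def solve_acp(W_list, spec_samples, eps):
    # Sketch of the ACP (6.9): minimize the largest diagonal entry of a
    # Hermitian PSD matrix A subject to |<A, W_p>_F - Sf(p)| <= eps, p in Omega.
    N = W_list[0].shape[0]
    A = cp.Variable((N, N), hermitian=True)
    constraints = [A >> 0]
    for W, s in zip(W_list, spec_samples):
        # <A, W_p>_F = tr(W_p^H A); real-valued for Hermitian A and W_p
        val = cp.real(cp.trace(np.conj(W).T @ A))
        constraints.append(cp.abs(val - s) <= eps)
    objective = cp.Minimize(cp.max(cp.real(cp.diag(A))))
    cp.Problem(objective, constraints).solve(solver=cp.SCS)
    return A.value

The CP in step 1 of Algorithm 1 is essentially of this form, with the noisy data \(\sigma \) playing the role of \({\mathcal {S}}f\); cf. Sect. 7.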
Solving the ACP (6.9) amounts to finding among all admissible matrices the one such that the maximum of all diagonal entries is minimal. While the admissibility condition can be simply reformulated in terms of \(F_A\), that is,
$$ A \in Adm_\varepsilon (f,\Gamma ,\Omega ) \quad \Leftrightarrow \quad \Vert {\mathcal {S}}f-F_A\Vert _{\ell ^\infty (\Omega )} \le \varepsilon , $$
it is not yet clear why this particular objective function could be helpful. The next statement vindicates that this is indeed a good choice.
Proposition 6.10
Let \(f\in L^2({\mathbb R})\), let \(\varepsilon \in (0,e^{-1})\) and let \(s>0\) such that
$$\begin{aligned} s \le \frac{\sqrt{38\pi }}{36} \left( \ln \frac{1}{\varepsilon }\right) ^{-1/2}. \end{aligned}$$
(6.10)
Moreover, let \(\Omega \subseteq s{\mathbb Z}^2\), let \(\Gamma \subseteq \mathfrak {a}{\mathbb Z}^2\) and let \(p,u\in {\mathbb R}^2\) be such that
$$\begin{aligned} p+\frac{1}{2} u\in \mathcal {A}(s,\Omega ):=\{X\in {\mathbb R}^2:\ {{\,\textrm{dist}\,}}\left( X, s{\mathbb Z}^2\setminus \Omega \right) \ge \frac{\sqrt{19}}{9} s^{-1}\}. \end{aligned}$$
Cf. Fig. 6. Suppose that \(A\in Adm_\varepsilon (f,\Gamma ,\Omega )\) and let
$$\begin{aligned} c_0:= \Vert {\mathcal {S}}f\Vert _{L^\infty ({\mathbb R}^2)} + 64.1\big (\max _{\lambda \in \Gamma } A_{\lambda ,\lambda }\big ). \end{aligned}$$
Then it holds that
$$\begin{aligned} |{\mathcal {E}}[F_A](p,u) - {\mathcal {T}}_u[{\mathcal {G}}f](p)| \le 8.6 (1+5.2 \sqrt{c_0})^2 \cdot \sqrt{\varepsilon }\cdot e^{\frac{17\pi }{32} u^2}. \end{aligned}$$
Proof
We want to apply Proposition 6.4 to \(G=F_A-{\mathcal {S}}f\). According to Lemma 4.4 and Lemma 6.7, respectively, we have that
$$ {\mathcal {S}}f \in \mathcal {V}\left( \frac{38\pi }{81}, 8\Vert {\mathcal {S}}f\Vert _{L^\infty ({\mathbb R}^2)} \right) \quad \text {and}\quad M_{{\mathcal {S}}f}(r) \le \Vert {\mathcal {S}}f\Vert _{L^\infty ({\mathbb R}^2)} e^{2\pi r^2}. $$
As per Lemma 6.6 we have that
$$\begin{aligned} F_A \in \mathcal {V}\left( \frac{8\pi }{17}, 272 \max _{\lambda \in \Gamma } A_{\lambda ,\lambda } \right) \quad \text {and} \quad M_{F_A}(r) \le 64.1 (\max _{\lambda \in \Gamma } A_{\lambda ,\lambda }) e^{2\pi r^2}. \end{aligned}$$
Note that
$$ F_1 \in \mathcal {V}(C_1,K_1), \, F_2\in \mathcal {V}(C_2,K_2) \quad \Rightarrow \quad F_1-F_2 \in \mathcal {V}\big (\min \{C_1,C_2\}, K_1+K_2 \big ). $$
Thus, we get that
$$\begin{aligned} G \in \mathcal {V}\left( \frac{38\pi }{81}, 8\Vert {\mathcal {S}}f\Vert _{L^\infty } + 272 \max _{\lambda \in \Gamma } A_{\lambda ,\lambda } \right) \subseteq \mathcal {V}\left( \frac{38\pi }{81}, 8c_0\right) , \end{aligned}$$
and furthermore, we have that
$$\begin{aligned} M_{G}(r) \le M_{F_A}(r)+ M_{{\mathcal {S}}f}(r) \le \left( 64.1 \max _{\lambda \in \Gamma }A_{\lambda ,\lambda } + \Vert {\mathcal {S}}f\Vert _{L^\infty } \right) e^{2\pi r^2} \le c_0 e^{2\pi r^2}. \end{aligned}$$
Note that with \(C=\frac{38\pi }{81}\) we have that \(\sqrt{\frac{C}{2\pi }}=\frac{\sqrt{19}}{9}\). Proposition 6.4 therefore implies that
$$\begin{aligned} & |{\mathcal {E}}[G](p,u)| \\ & \quad \le 8.6 \left( \Vert G\Vert _{\ell ^\infty (\Omega )}^{1/2} + \left( \sqrt{M_G(0)} + 1.8 \sqrt{\frac{81\cdot 8c_0}{38\pi }} \right) e^{-\frac{38\pi }{81\cdot 32 s^2}} \right) \cdot \sqrt{M_G(|u|)} e^{-\frac{15\pi }{32} u^2}\\ & \quad \le 8.6 \left( \sqrt{\varepsilon }+ \sqrt{c_0} \left( 1+1.8 \sqrt{\frac{81\cdot 8}{38\pi }} \right) e^{-\frac{38\pi }{81\cdot 32 s^2}}\right) \cdot \sqrt{c_0} e^{\frac{17\pi }{32}u^2}\\ & \quad \le 8.6 \left( \sqrt{\varepsilon }+ 5.2 \sqrt{c_0} e^{-\frac{38\pi }{81\cdot 32 s^2}} \right) \cdot \sqrt{c_0} e^{\frac{17\pi }{32}u^2} \end{aligned}$$
Finally, Assumption (6.10) is equivalent to \( e^{-\frac{38\pi }{81\cdot 32\,s^2}} \le \sqrt{\varepsilon }\). Hence,
$$ |{\mathcal {E}}[G](p,u)| \le 8.6 (1+5.2 \sqrt{c_0})^2 \cdot \sqrt{\varepsilon }\cdot e^{\frac{17\pi }{32}u^2}, $$
as desired.
Next, we provide sufficient conditions for the set of admissible matrices to be non-empty as well as an upper bound for the objective function of a minimizer.
Proposition 6.11
Let \(f\in L^2({\mathbb R})\), let \(\varepsilon \in (0,1)\), \(s>0\) and let \(\Omega \subseteq s{\mathbb Z}^2\). Moreover let \(\Gamma \subseteq \mathfrak {a}{\mathbb Z}^2\) such that
$$\begin{aligned} {{\,\textrm{dist}\,}}\left( \Omega , \mathfrak {a}{\mathbb Z}^2\setminus \Gamma \right) \ge \sqrt{\frac{2}{\pi }\ln \left( \frac{53 \Vert {\mathcal {S}}f\Vert _{L^\infty }}{\varepsilon }\right) } + \frac{1}{2\sqrt{2}}. \end{aligned}$$
(6.11)
Then it holds that \(Adm_\varepsilon (f,\Gamma ,\Omega )\ne \emptyset \) and every solution A of the ACP satisfies
$$ \max _{\lambda \in \Gamma } A_{\lambda ,\lambda } \le 1.6 \Vert {\mathcal {S}}f\Vert _{L^\infty ({\mathbb R}^2)}. $$
Proof
Let \(\psi \) denote the canonical dual of the Gabor frame \((\pi (\lambda )\varphi )_{\lambda \in \mathfrak {a}{\mathbb Z}^2}\). Moreover, let us define \(a\in {\mathbb C}^\Gamma \) by
$$ a_\lambda = \langle f, \pi (\lambda )\psi \rangle , \quad \lambda \in \Gamma $$
and set \(A=a\otimes \bar{a}\). We prove the statement by showing that
$$ \text {i)}\, A\in Adm_\varepsilon (f,\Gamma ,\Omega )\quad \text {and}\quad \text {ii)} \,\max _{\lambda \in \Gamma } A_{\lambda ,\lambda }\le 1.6 \Vert {\mathcal {S}}f\Vert _{L^\infty }.$$
To prove i) let us consider \(\xi \in \Omega \) arbitrary but fixed. We need to show that
$$ |F_A(\xi ) - {\mathcal {S}}f(\xi )| \le \varepsilon . $$
Let R denote the quantity on the right hand side of (6.11) and note that by that assumption
$$ \mathfrak {a}{\mathbb Z}^2 {\setminus } \Gamma \subseteq \mathfrak {a}{\mathbb Z}^2 {\setminus } B_R(\xi ). $$
With \(g:= \sum _{\lambda \in \Gamma } a_\lambda \pi (\lambda )\varphi \) we have by the reconstruction formula (4.8) that
$$ {\mathcal {G}}[f-g] = \sum _{\lambda \in \mathfrak {a}{\mathbb Z}^2{\setminus } \Gamma } \langle f,\pi (\lambda )\psi \rangle \cdot {\mathcal {G}}[\pi (\lambda )\varphi ]. $$
We can estimate
$$\begin{aligned} |{\mathcal {G}}[f-g](\xi )|&\le \sup _{\lambda \in \mathfrak {a}{\mathbb Z}^2\setminus \Gamma }|\langle f,\pi (\lambda )\psi \rangle | \cdot \sum _{\lambda \in \mathfrak {a}{\mathbb Z}^2\setminus \Gamma } |{\mathcal {G}}[\pi (\lambda )\varphi ](\xi )| \nonumber \\&\le \sup _{\lambda \in {\mathbb R}^2}|\langle f,\pi (\lambda )\psi \rangle | \cdot \sum _{\lambda \in \mathfrak {a}{\mathbb Z}^2\setminus B_R(\xi )} e^{-\frac{\pi }{2} |\lambda -\xi |^2}. \end{aligned}$$
(6.12)
We use that the Gabor transform is unitary and that \(|{\mathcal {G}}[\pi (\lambda )\psi ](p)|= |{\mathcal {G}}\psi (p-\lambda )|\) in order to bound
$$\begin{aligned} & |\langle f,\pi (\lambda )\psi \rangle | = |\langle {\mathcal {G}}f, {\mathcal {G}}[\pi (\lambda )\psi ]\rangle _{L^2({\mathbb R}^2)} |\nonumber \\ & \quad \le \int _{{\mathbb R}^2} |{\mathcal {G}}f(y)| |{\mathcal {G}}\psi (y-\lambda )|\,\text{ d }y \le \Vert {\mathcal {G}}f\Vert _{L^\infty } \cdot \Vert {\mathcal {G}}\psi \Vert _{L^1} \le 1.23 \Vert {\mathcal {G}}f\Vert _{L^\infty }\nonumber \\ \end{aligned}$$
(6.13)
where the last inequality follows from Lemma 4.6.
We proceed with estimating the sum of Gaussians. A simple computation shows that
$$ x\mapsto e^{-\frac{\pi }{2} |x-\xi |^2}, \quad x\in {\mathbb R}^2{\setminus } B_{\sqrt{\frac{2}{\pi }}} (\xi ) $$
is subharmonic. For every \(\lambda \in \mathfrak {a}{\mathbb Z}^2{\setminus } B_R(\xi )\) we have that
$$ B_{\frac{\mathfrak {a}}{2}}(\lambda ) \cap B_{\sqrt{\frac{2}{\pi }}}(\xi ) = \emptyset , $$
since \(|\lambda -\xi |\ge R \ge 1.2 > \frac{\mathfrak {a}}{2} + \sqrt{\frac{2}{\pi }}\). By subharmonicity we have for all such \(\lambda \) that
$$ e^{-\frac{\pi }{2}|\lambda -\xi |^2} \le \frac{1}{|B_{\mathfrak {a}/2}(\lambda )|} \int _{B_{\mathfrak {a}/2}(\lambda )} e^{-\frac{\pi }{2} |x-\xi |^2}\,\text{ d }x = \frac{8}{\pi }\int _{B_{\mathfrak {a}/2}(\lambda )} e^{-\frac{\pi }{2} |x-\xi |^2}\,\text{ d }x. $$
Since we have that
$$ \bigcup _{\lambda \in \mathfrak {a}{\mathbb Z}^2 {\setminus } B_R(\xi )} B_{\mathfrak {a}/2}(\lambda ) \subseteq {\mathbb R}^2 {\setminus } B_{R-\frac{\mathfrak {a}}{2}}(\xi ), $$
and since all these disks are pairwise disjoint, we can estimate
$$\begin{aligned} \sum _{\lambda \in \mathfrak {a}{\mathbb Z}^2\setminus B_R(\xi )} e^{-\frac{\pi }{2} |\lambda -\xi |^2}&\le \frac{8}{\pi }\int _{{\mathbb R}^2 \setminus B_{R-\frac{\mathfrak {a}}{2}}(\xi )} e^{-\frac{\pi }{2} |x-\xi |^2}\,\text{ d }x \\&= \frac{8}{\pi }\int _{{\mathbb R}^2 \setminus B_{R-\frac{\mathfrak {a}}{2}}(0)} e^{-\frac{\pi }{2} |x|^2}\,\text{ d }x = \frac{16}{\pi }e^{-\frac{\pi }{2} (R-\mathfrak {a}/2)^2}. \end{aligned}$$
Thus, plugging this and (6.13) into (6.12), we obtain
$$ |{\mathcal {G}}[f-g](\xi )| \le 1.23 \cdot \frac{16}{\pi } \Vert {\mathcal {G}}f\Vert _{L^\infty } e^{-\frac{\pi }{2} (R-\mathfrak {a}/2)^2} \le 6.3 \Vert {\mathcal {G}}f\Vert _{L^\infty } e^{-\frac{\pi }{2} (R-\mathfrak {a}/2)^2}. $$
With this we can bound
$$\begin{aligned} |{\mathcal {S}}f(\xi ) - F_A(\xi )|&= ||{\mathcal {G}}f(\xi )| - |{\mathcal {G}}g(\xi )|| \cdot (|{\mathcal {G}}f(\xi )| + |{\mathcal {G}}g(\xi )|)\\&\le |{\mathcal {G}}[f-g](\xi )| \cdot (2 |{\mathcal {G}}f(\xi )| + |{\mathcal {G}}[f-g](\xi )|)\\&\le 6.3 \left( 2 + 6.3\right) \Vert {\mathcal {S}}f\Vert _{L^\infty ({\mathbb R}^2)} e^{-\frac{\pi }{2} (R-\mathfrak {a}/2)^2} \\&\le 53 \Vert {\mathcal {S}}f\Vert _{L^\infty ({\mathbb R}^2)} e^{-\frac{\pi }{2} (R-\mathfrak {a}/2)^2} \end{aligned}$$
which is bounded by \(\varepsilon \) according to the choice of R.
Part ii) now follows directly from estimate (6.13): For arbitrary \(\lambda \in \Gamma \) we have that
$$ A_{\lambda ,\lambda } = |a_\lambda |^2 = |\langle f, \pi (\lambda )\psi \rangle |^2 \le 1.23^2 \Vert {\mathcal {S}}f\Vert _{L^\infty }, $$
which – as \(1.23^2 < 1.6\) – implies the claimed inequality.

7 Proof of Theorem 2.8

Proof of Theorem 2.8
We break up the proof into various steps and show that
a)
Step 1 is feasible, and any solution A satisfies \(A\in Adm_{\frac{3\varepsilon }{2}}(f,\Gamma ,\Omega )\) and
$$\max _{\lambda \in \Gamma } A_{\lambda ,\lambda }\le 1.6.$$
 
b)
For all \((\lambda ,\lambda ') \in \mathcal {P}\) it holds that
$$\begin{aligned} |T_{\lambda ',\lambda } - {\mathcal {T}}_{\lambda '-\lambda } [{\mathcal {G}}f](\lambda )| \le \varepsilon ' = (3.1\cdot 10^4) \sqrt{\varepsilon } e^{\frac{17\pi }{32}r^2}. \end{aligned}$$
(7.1)
 
c)
There exists a feasible Y for the CP in step 2 (see (2.2)).
 
d)
Any such Y has a simple largest eigenvalue, and the corresponding eigenvector v obeys (2.8).
 
To establish a) we first observe that if \(A\in \mathfrak {A}_+(\Gamma )\) is feasible for step 1, then by the triangle inequality
$$ \Vert F_A - {\mathcal {S}}f\Vert _{\ell ^\infty (\Omega )}\le \Vert F_A-\sigma \Vert _{\ell ^\infty (\Omega )} + \Vert \sigma -{\mathcal {S}}f\Vert _{\ell ^\infty (\Omega )} \le \varepsilon + \frac{\varepsilon }{2}, $$
i.e. \(A\in Adm_{\frac{3\varepsilon }{2}}(f,\Gamma ,\Omega )\). It remains to show that there exists \(A\in Adm_{\frac{\varepsilon }{2}}(f,\Gamma ,\Omega )\) which further satisfies the inequality \(\max _{\lambda \in \Gamma }A_{\lambda ,\lambda }\le 1.6\), as by the triangle inequality such a matrix is then feasible for step 1.
By Proposition 6.11 (applied to \(\frac{\varepsilon }{2}\)) we only need to show that
$$\begin{aligned} {{\,\textrm{dist}\,}}\left( \Omega , \mathfrak {a}{\mathbb Z}^2\setminus \Gamma \right) \ge \sqrt{\frac{2}{\pi }\ln \left( \frac{2\cdot 53}{\varepsilon }\right) } + \frac{1}{2\sqrt{2}}. \end{aligned}$$
Note that for every \(\varepsilon \in (0,1)\) the right hand side can be bounded according to
$$\begin{aligned} & \sqrt{\frac{2}{\pi } \ln \left( \frac{106}{\varepsilon }\right) } + \frac{1}{2\sqrt{2}} = \sqrt{\frac{2}{\pi } \ln (106) - \frac{2}{\pi } \ln \varepsilon } + \frac{1}{2\sqrt{2}}\nonumber \\ & \quad \le \sqrt{\frac{2}{\pi } \ln (106)}+ \left( -\sqrt{\frac{2}{\pi }} \ln \varepsilon \right) ^{1/2} + \frac{1}{2\sqrt{2}} < 2.1 + 0.9 \sqrt{\ln \frac{1}{\varepsilon }} \end{aligned}$$
By construction of the sets and with condition (2.7) we have
$$ {{\,\textrm{dist}\,}}\left( \Omega , \mathfrak {a}{\mathbb Z}^2{\setminus } \Gamma \right) \ge R \ge 2.1 + 0.9 \sqrt{\ln \frac{1}{\varepsilon }}, $$
and we are done.
We proceed with the proof of statement b). Let \((\lambda ,\lambda ')\in \mathcal {P}\) be arbitrary but fixed, and denote \(p=\lambda \) and \(u=\lambda '-\lambda \). According to Proposition 6.10 (with \(1.5 \varepsilon \) instead of \(\varepsilon \)), it holds that
$$\begin{aligned} & |T_{\lambda ,\lambda '}- {\mathcal {T}}_{\lambda '-\lambda }[{\mathcal {G}}f](\lambda )| = |\mathcal {E}[F_A](p,u)-{\mathcal {T}}_u[{\mathcal {G}}f](p)|\nonumber \\ & \quad \le 8.6 (1+5.2\sqrt{c_0})^2\cdot \sqrt{1.5\varepsilon }\cdot e^{\frac{17\pi }{32}u^2} \end{aligned}$$
(7.2)
with (recalling that A is a solution of the CP in step 1)
$$\begin{aligned} c_0 =\Vert {\mathcal {S}}f\Vert _{L^\infty ({\mathbb R}^2)}+64.1(\max _{\lambda \in \Gamma } A_{\lambda ,\lambda }) \le 1+64.1\cdot 1.6 < 104 \end{aligned}$$
(7.3)
provided that the following three conditions hold:
i)
\(1.5 \varepsilon < e^{-1}\),
 
ii)
\(s \le \frac{\sqrt{38\pi }}{36} \left( \ln \frac{2}{3\varepsilon } \right) ^{-\frac{1}{2}}\), and
 
iii)
\(p+\frac{1}{2} u \in \mathcal {A}(s,\Omega )\).
 
As \(\frac{\sqrt{38\pi }}{36}=0.30\ldots \), condition ii) follows directly from assumption (2.6). Since \(\Vert {\mathcal {S}}f\Vert _{L^\infty ({\mathbb R}^2)} \le 1\), assumption (2.4) implies that
$$ \varepsilon \le \left[ \frac{e^{-\frac{17\pi }{32}r^2}}{1.33\cdot 10^5} \frac{\Vert {\mathcal {G}}f\Vert _{\ell ^2(\Lambda )}^2}{|\Lambda |^2} \right] ^2 \le (1.33\cdot 10^5)^{-2}, $$
which implies i).
To verify iii), recall that \(|u|\le r\) and that \(p\in \Lambda \subseteq [-T,T]\times [-S,S]\). As
$$ s{\mathbb Z}^2{\setminus } \Omega \subseteq {\mathbb R}^2 {\setminus } ([-T-R,T+R]\times [-S-R,S+R]), $$
we have with (2.7) that
$$ {{\,\textrm{dist}\,}}\left( p+\frac{1}{2} u, s{\mathbb Z}^2{\setminus } \Omega \right) \ge R - \frac{r}{2} \ge \frac{1}{2\,s} > \frac{\sqrt{19}}{9}s^{-1}, $$
which shows that indeed \(p+\frac{1}{2} u \in \mathcal {A}(s,\Omega )\). Finally, combining (7.2) and (7.3) implies that
$$ |T_{\lambda ,\lambda '}-{\mathcal {T}}_{\lambda '-\lambda }[{\mathcal {G}}f](\lambda )|\le (3.1\cdot 10^4)\cdot \sqrt{\varepsilon }\, e^{\frac{17\pi }{32}u^2} \le (3.1\cdot 10^4)\cdot \sqrt{\varepsilon }\, e^{\frac{17\pi }{32}r^2}, $$
where we used \(|u|\le r\) in the last step. This establishes statement b).
It follows directly from the estimate (7.1) that
$$ Y=\big ({\mathcal {G}}f(\lambda )\overline{{\mathcal {G}}f(\lambda ')} \big )_{\lambda ,\lambda '\in \Lambda } $$
meets the constraints of step 2. Thus, statement c) holds.
To prove part d) we apply Corollary 4.17, which states that
$$\begin{aligned} \min _{\theta \in {\mathbb R}}\left| \sqrt{{{\,\textrm{tr}\,}}(Y)}v - e^{i\theta } ({\mathcal {G}}f(\lambda ))_{\lambda \in \Lambda } \right| \le \left( 1+2\sqrt{6} \sqrt{\frac{B}{\lambda _2(f,\Lambda ,r)}} \right) \sqrt{\varepsilon '}, \end{aligned}$$
(7.4)
provided that
$$\begin{aligned} \varepsilon ' \le \min \left\{ 1, \frac{|\Lambda |^2 \lambda _2(f,\Lambda ,r)}{12 B} \right\} \times \frac{\Vert {\mathcal {G}}f\Vert _{\ell ^2(\Lambda )}^2}{|\Lambda |^2} \end{aligned}$$
(7.5)
where \(B=\sum _{u,v\in \Lambda } |\mathcal {L}_{u,v}|\) with \(\mathcal {L}\) the Laplacian of the graph associated with the signal. By the way the graph is defined, we have that the maximal degree (i.e., the maximal number of neighbors a vertex can have) is bounded from above by
$$ \big | \overline{B_r(0)}\cap \mathfrak {a}{\mathbb Z}^2| = |\overline{B_{\sqrt{2} r}(0)} \cap {\mathbb Z}^2 | \le (2\sqrt{2} r)^2 = 8r^2. $$
Recall that the Laplacian is given by
$$ \mathcal {L}_{\lambda ,\mu } = {\left\{ \begin{array}{ll} \sum _{\lambda '\sim \lambda } |{\mathcal {G}}f(\lambda ')|^2, \quad & \mu =\lambda \\ - |{\mathcal {G}}f(\lambda )| |{\mathcal {G}}f(\mu )|, \quad & \mu \sim \lambda \\ 0 & \text {otw.} \end{array}\right. } $$
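For concreteness, the following sketch (Python with numpy; not part of the proof) assembles \(\mathcal {L}\) and the quantity B from samples \(|{\mathcal {G}}f(\lambda )|\), \(\lambda \in \Lambda \); the neighbourhood relation \(\lambda \sim \mu \Leftrightarrow 0<|\lambda -\mu |\le r\) is an assumption made in accordance with the degree bound above (the graph itself is defined in Sect. 4.6).

import numpy as np

def weighted_laplacian(points, gabor_mags, r):
    # Vertex-weighted graph Laplacian as in the display above; the relation
    # lambda ~ mu iff 0 < |lambda - mu| <= r is an assumption (cf. Sect. 4.6).
    pts = np.asarray(points, dtype=float)
    w = np.asarray(gabor_mags, dtype=float)             # w_lambda = |Gf(lambda)|
    D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    adj = ((D <= r) & (D > 0)).astype(float)
    L = -adj * np.outer(w, w)                           # off-diagonal entries
    np.fill_diagonal(L, (adj * w[None, :] ** 2).sum(axis=1))
    return L

# toy data: a 4 x 4 patch of the lattice (1/sqrt(2)) Z^2 with |G phi(lambda)| as weights
afrak = 1.0 / np.sqrt(2.0)
Lam = np.array([(afrak * j, afrak * k) for j in range(4) for k in range(4)])
mags = np.exp(-0.5 * np.pi * (Lam ** 2).sum(axis=1))
Lap = weighted_laplacian(Lam, mags, r=1.0)
B = np.abs(Lap).sum()
print(B <= 16 * 1.0 ** 2 * (mags ** 2).sum())           # consistent with the bound derived below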
Since \(|xy|\le \dfrac{1}{2} (|x|^2+|y|^2)\) we can estimate
$$\begin{aligned} B&\le \sum _{\lambda \in \Lambda } \left( \sum _{\begin{array}{c} \lambda '\in \Lambda \\ \lambda '\sim \lambda \end{array}} |{\mathcal {G}}f(\lambda ')|^2 + \dfrac{1}{2}\sum _{\begin{array}{c} \mu \in \Lambda \\ \mu \sim \lambda \end{array}} (|{\mathcal {G}}f(\lambda )|^2 + |{\mathcal {G}}f(\mu )|^2) \right) \\&= \sum _{\lambda '\in \Lambda } \deg (\lambda ') |{\mathcal {G}}f(\lambda ')|^2 + \dfrac{1}{2} \sum _{\lambda \in \Lambda } \deg (\lambda ) |{\mathcal {G}}f(\lambda )|^2 + \dfrac{1}{2}\sum _{\mu \in \Lambda } \deg (\mu ) |{\mathcal {G}}f(\mu )|^2\\&= 2 \sum _{\lambda \in \Lambda } \deg (\lambda ) |{\mathcal {G}}f(\lambda )|^2 \le 16 r^2 \Vert {\mathcal {G}}f\Vert _{\ell ^2(\Lambda )}^2, \end{aligned}$$
where \(\deg (\lambda ):=\#\{\mu \in \Lambda :\,\mu \sim \lambda \}\le 8r^2\) denotes the number of neighbours of \(\lambda \),
which implies that
$$ \frac{|\Lambda |^2 \lambda _2(f,\Lambda ,r)}{12B} \ge \frac{|\Lambda |^2 \lambda _2(f,\Lambda ,r)}{192 r^2 \Vert {\mathcal {G}}f\Vert _{\ell ^2(\Lambda )}^2}. $$
Together with (2.5) and (2.4) we get that
$$\begin{aligned} \varepsilon '&= (3.1\times 10^4)e^{\frac{17\pi }{32}r^2} \sqrt{\varepsilon }\\&\le \min \left\{ \frac{\Vert {\mathcal {G}}f\Vert _{\ell ^2(\Lambda )}^2}{|\Lambda |^2}, \frac{ \lambda _2(f,\Lambda ,r)}{192 r^2 }\right\} \\&= \min \left\{ 1 , \frac{|\Lambda |^2 \lambda _2(f,\Lambda ,r)}{192 r^2 \Vert {\mathcal {G}}f\Vert _{\ell ^2(\Lambda )}^2} \right\} \times \frac{\Vert {\mathcal {G}}f\Vert _{\ell ^2(\Lambda )}^2}{|\Lambda |^2}, \end{aligned}$$
which implies (7.5).
Rewriting the right-hand side of (7.4) in terms of \(\varepsilon \) and making use of the estimate for B, we further obtain
$$\begin{aligned} \min _{\theta \in {\mathbb R}}\Big |\sqrt{{{\,\textrm{tr}\,}}(Y)}v&- e^{i\theta } ({\mathcal {G}}f(\lambda ))_{\lambda \in \Lambda } \Big |\\&\le \left( 1+2\sqrt{6} \sqrt{\frac{16 r^2 \Vert {\mathcal {G}}f\Vert _{\ell ^2(\Lambda )}^2}{\lambda _2(f,\Lambda ,r)}} \right) \times \sqrt{3.1\cdot 10^4} \, e^{\frac{17\pi }{64}r^2} \root 4 \of {\varepsilon }\\&\le 177 \cdot e^{0.84 r^2} \cdot \left( 1+20 r \sqrt{\frac{\Vert {\mathcal {G}}f\Vert _{\ell ^2(\Lambda )}^2}{\lambda _2(f,\Lambda ,r)}} \right) \cdot \root 4 \of {\varepsilon }, \end{aligned}$$
which finishes the proof.

8 Conclusion

In this paper, we have described the first algorithm that provides an approximate solution of the phase retrieval problem for the (continuous) Gabor transform with error bounds. More precisely, given only finitely many samples \(\mathcal {S}=\{|{\mathcal {G}}f(\lambda )|:\,\lambda \in \Lambda \}\) with \(\Lambda =\mathfrak {a}{\mathbb Z}^2\cap \mathcal {R}\), \(\mathcal {R}\) a rectangle, of the Gabor transform of some function \(f\in L^2({\mathbb R})\), our algorithm is able to reconstruct an approximation \(f_*\) of f, even if the samples are corrupted by noise. The main result of this paper is that we are able to prove that \(f_*\) is indeed an approximation of f over a large proportion of \(\mathcal {R}\), without having to impose any structural model on f, unlike previously available results. The error bound depends only on the two expected key quantities: the noise level and the connectivity of the graph of \({\mathcal {G}}f\), which is here measured through a spectral property of a finite graph generated by the data and can thus be explicitly computed.
This work leads us to raise several questions:
– What is the sharpness of the error bounds? We have decided to give explicit values to all numerical constants and have been slightly conservative in our computations to keep them readable. They are of a reasonable order of magnitude, though too large for practical applications. Numerical simulations lead to better errors; is there a way to significantly improve our bounds? A second factor that needs to be improved is the dependence on the noise level \(\varepsilon \). Since the error of size \(\varepsilon \) is on the squared spectrogram, one should expect the reconstruction error to be of the order \(\varepsilon ^{1/2}\). In our results, however, we have an extra square root, as the error dependence is of the order of \(\varepsilon ^{1/4}\). The reason for this lies in the way we apply Hadamard’s Three Line Theorem. In principle, one could adapt the argument and push the power as close to 1/2 as desired. This would however cause the numerical constant to blow up. The way we apply the argument may be understood as a canonical compromise balancing the numerical constant and the power on \(\varepsilon \).
– The focus of this paper is on the proof of the error bounds. The implementation presented here mainly serves to illustrate that the algorithm actually works and that our bounds are pessimistic (which seems to be the case for most algorithms used for phase retrieval). We hope this work will trigger sufficient interest from specialists to lead to improvements in the algorithm and its implementation. In particular, we think that automatic calibration of the parameters should help to avoid running into feasibility issues.
– Would the addition of a particular signal model significantly improve the error bound? E.g. does this happen if we require f to be in some shift-invariant space as in [26, 27]?
We think that having shown that approximate reconstruction (without a signal model) is possible based only on finitely many samples should trigger new hope in solving continuous phase retrieval problems via discretisations (rather than discrete analogues), despite recent rather negative results in that direction.

Acknowledgements

The authors wish to thank the anonymous referees for their careful reading of the manuscript and their comments. For M.R., this research was funded in whole or in part by the Austrian Science Fund (FWF): 10.55776/J4523. This work benefited from state aid granted to the Université de Bordeaux as an “Initiative d’excellence”, under the France 2030 plan.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Footnotes
1
We recall the definition of the Laplacian of a vertex weighted graph in Sect. 4.6.
2
The assumption that \(\Vert {\mathcal {S}}f\Vert _{L^\infty }\le 1\) is purely there for aesthetic reasons, in order to keep bounds and implicit constants as simple as possible.
3
In the sense that the upper bound provided by Theorem 2.8 is then the tightest possible.
Literature
1.
go back to reference Alaifari R., Philipp Grohs P.: Phase retrieval in the general setting of continuous frames for Banach spaces. SIAM J. Math. Anal., 49(3), 1895–1911 (2017).MathSciNetCrossRef Alaifari R., Philipp Grohs P.: Phase retrieval in the general setting of continuous frames for Banach spaces. SIAM J. Math. Anal., 49(3), 1895–1911 (2017).MathSciNetCrossRef
2.
go back to reference Alaifari R., Wellershoff M.: Stability estimates for phase retrieval from discrete Gabor measurements. J. Fourier Anal. Appl., 27(2), 1-31 (2021).MathSciNetCrossRef Alaifari R., Wellershoff M.: Stability estimates for phase retrieval from discrete Gabor measurements. J. Fourier Anal. Appl., 27(2), 1-31 (2021).MathSciNetCrossRef
3.
go back to reference Alaifari R., Wellershoff M.: Phase retrieval from sampled Gabor transform magnitudes: counterexamples. J. Fourier Anal. Appl., 28(1), 1-8 (2022).MathSciNetCrossRef Alaifari R., Wellershoff M.: Phase retrieval from sampled Gabor transform magnitudes: counterexamples. J. Fourier Anal. Appl., 28(1), 1-8 (2022).MathSciNetCrossRef
4.
go back to reference Bendory T., Eldar Y. C., Boumal N.: Non-convex phase retrieval from STFT measurements. IEEE Trans. Inform. Theory, 64(1), 467–484 (2018).MathSciNetCrossRef Bendory T., Eldar Y. C., Boumal N.: Non-convex phase retrieval from STFT measurements. IEEE Trans. Inform. Theory, 64(1), 467–484 (2018).MathSciNetCrossRef
5.
go back to reference Bonnefont M.: Poincaré inequality with explicit constant in dimension \(d \ge 1\), 2022. Global Sensitivity Analysis and Poincaré inequalities (Summer school, Toulouse). Bonnefont M.: Poincaré inequality with explicit constant in dimension \(d \ge 1\), 2022. Global Sensitivity Analysis and Poincaré inequalities (Summer school, Toulouse).
6.
go back to reference Cahill J., Casazza, P. G., Daubechies I.: Phase retrieval in infinite-dimensional Hilbert spaces. Trans. Amer. Math. Soc. Ser. B, 3, 63–76 (2016).MathSciNetCrossRef Cahill J., Casazza, P. G., Daubechies I.: Phase retrieval in infinite-dimensional Hilbert spaces. Trans. Amer. Math. Soc. Ser. B, 3, 63–76 (2016).MathSciNetCrossRef
7.
go back to reference Candès E. J., Eldar Y. C., Strohmer T., Voroninski V.: Phase retrieval via matrix completion. SIAM Review, 57(2), 225–251 (2015).MathSciNetCrossRef Candès E. J., Eldar Y. C., Strohmer T., Voroninski V.: Phase retrieval via matrix completion. SIAM Review, 57(2), 225–251 (2015).MathSciNetCrossRef
8.
go back to reference Candès E. J., Strohmer T., Voroninski V.: PhaseLift: exact and stable signal recovery from magnitude measurements via convex programming. Comm. Pure Appl. Math., 66(8), 1241–1274 (2013).MathSciNetCrossRef Candès E. J., Strohmer T., Voroninski V.: PhaseLift: exact and stable signal recovery from magnitude measurements via convex programming. Comm. Pure Appl. Math., 66(8), 1241–1274 (2013).MathSciNetCrossRef
9.
go back to reference Cheeger J.: A lower bound for the smallest eigenvalue of the Laplacian, Princeton University Press, Princeton, 1971. Cheeger J.: A lower bound for the smallest eigenvalue of the Laplacian, Princeton University Press, Princeton, 1971.
10.
go back to reference Christensen O.: An introduction to frames and Riesz bases. Applied and Numerical Harmonic Analysis. Birkhäuser/Springer, second edition (2016). Christensen O.: An introduction to frames and Riesz bases. Applied and Numerical Harmonic Analysis. Birkhäuser/Springer, second edition (2016).
11.
go back to reference Chung F. R. K.: Spectral graph theory. Regional conference series in mathematics 92. In: Published for the Conference board of the mathematical sciences by the American mathematical society (1997). Chung F. R. K.: Spectral graph theory. Regional conference series in mathematics 92. In: Published for the Conference board of the mathematical sciences by the American mathematical society (1997).
12.
go back to reference Chung F. R. K, Langlands R.P.: A combinatorial Laplacian with vertex weights. J. Combin. Theory Ser. A, 75(2), 316–327 (1996).MathSciNetCrossRef Chung F. R. K, Langlands R.P.: A combinatorial Laplacian with vertex weights. J. Combin. Theory Ser. A, 75(2), 316–327 (1996).MathSciNetCrossRef
13.
go back to reference Daubechies I., Grossmann A.: Frames in the Bargmann space of entire functions. Comm. Pure Appl. Math., 41(2), 151–164 (1988).MathSciNetCrossRef Daubechies I., Grossmann A.: Frames in the Bargmann space of entire functions. Comm. Pure Appl. Math., 41(2), 151–164 (1988).MathSciNetCrossRef
14.
go back to reference Demanet L., Jugnon V.: Convex recovery from interferometric measurements. IEEE Trans. Comput. Imaging, 3(2), 282–295 (2017).MathSciNetCrossRef Demanet L., Jugnon V.: Convex recovery from interferometric measurements. IEEE Trans. Comput. Imaging, 3(2), 282–295 (2017).MathSciNetCrossRef
15.
go back to reference Eldar Y. C., Sidorenko P.,. Mixon, D. G, Barel S., Cohen O.: Sparse phase retrieval from short-time Fourier measurements. IEEE Signal Processing Letters, 22(5), 638–642 (2015).CrossRef Eldar Y. C., Sidorenko P.,. Mixon, D. G, Barel S., Cohen O.: Sparse phase retrieval from short-time Fourier measurements. IEEE Signal Processing Letters, 22(5), 638–642 (2015).CrossRef
16.
go back to reference Escudero L. A., Feldheim N., Koliander G., Romero J. L.: Efficient computation of the zeros of the Bargmann transform under additive white noise. Found. Comput. Math., 24(1), 279—312 (2024).MathSciNetCrossRef Escudero L. A., Feldheim N., Koliander G., Romero J. L.: Efficient computation of the zeros of the Bargmann transform under additive white noise. Found. Comput. Math., 24(1), 279—312 (2024).MathSciNetCrossRef
18.
go back to reference Fienup J. R.: Phase retrieval algorithms: a comparison. Appl. Opt., 21(15), 2758–2769 (1982).CrossRef Fienup J. R.: Phase retrieval algorithms: a comparison. Appl. Opt., 21(15), 2758–2769 (1982).CrossRef
19.
go back to reference Gerchberg R. W.: A practical algorithm for the determination of phase from image and diffraction plane pictures. Optik, 35, 237–246 (1972). Gerchberg R. W.: A practical algorithm for the determination of phase from image and diffraction plane pictures. Optik, 35, 237–246 (1972).
20.
go back to reference Godeme J.-J., Fadili J., Buet X., Zerrad M., Lequime M., Amra C.: Provable phase retrieval with mirror descent. SIAM J. Imaging Sci, 16, 1106–1141 (2023).MathSciNetCrossRef Godeme J.-J., Fadili J., Buet X., Zerrad M., Lequime M., Amra C.: Provable phase retrieval with mirror descent. SIAM J. Imaging Sci, 16, 1106–1141 (2023).MathSciNetCrossRef
21.
go back to reference Griffin D. W., Jae Lim J.: Signal estimation from modified short-time Fourier transform. IEEE Transactions on Acoustics, Speech, and Signal Processing, 32(2), 236–243 (1984).CrossRef Griffin D. W., Jae Lim J.: Signal estimation from modified short-time Fourier transform. IEEE Transactions on Acoustics, Speech, and Signal Processing, 32(2), 236–243 (1984).CrossRef
22.
go back to reference Gröchenig K.: Foundations of time-frequency analysis. Applied and Numerical Harmonic Analysis. Birkhäuser Boston, Inc., Boston, MA, 2001.CrossRef Gröchenig K.: Foundations of time-frequency analysis. Applied and Numerical Harmonic Analysis. Birkhäuser Boston, Inc., Boston, MA, 2001.CrossRef
24.
go back to reference Gröchenig K., Leinert M.: Wiener’s lemma for twisted convolution and Gabor frames. Jour. Amer. Math. Soc., 17, 1–18 (2004).MathSciNetCrossRef Gröchenig K., Leinert M.: Wiener’s lemma for twisted convolution and Gabor frames. Jour. Amer. Math. Soc., 17, 1–18 (2004).MathSciNetCrossRef
25.
go back to reference Grohs P., Liehr L.: Stable Gabor phase retrieval and spectral clustering. Comm. Pure Appl. Math., 72(5), 981–1043 (2019).MathSciNetCrossRef Grohs P., Liehr L.: Stable Gabor phase retrieval and spectral clustering. Comm. Pure Appl. Math., 72(5), 981–1043 (2019).MathSciNetCrossRef
26.
go back to reference Grohs P., Liehr L.: Stable Gabor phase retrieval in Gaussian shift-invariant spaces via biorthogonality. Constr. Approx. (2023). Grohs P., Liehr L.: Stable Gabor phase retrieval in Gaussian shift-invariant spaces via biorthogonality. Constr. Approx. (2023).
27. Grohs P., Liehr L.: On foundational discretization barriers in STFT phase retrieval. J. Fourier Anal. Appl., 28(2), 39 (2022).
28. Grohs P., Liehr L., Rathmair M.: Phase retrieval in Fock space and perturbation of Liouville sets (2023).
29. Grohs P., Rathmair M.: Stable Gabor phase retrieval for multivariate functions. J. Eur. Math. Soc. (JEMS), 24(5), 1593–1615 (2022).
30. Jaganathan K., Eldar Y. C., Hassibi B.: STFT phase retrieval: Uniqueness guarantees and recovery algorithms. IEEE Journal of Selected Topics in Signal Processing, 10(4), 770–781 (2016).
31. Jaming Ph.: A qualitative uncertainty principle and phase retrieval for the Wigner distribution. C. R. Acad. Sci., Paris, Sér. I, Math., 327(3), 249–254 (1998).
32. Janssen A. E. J. M.: Some Weyl-Heisenberg frame bound calculations. Indag. Math. (N.S.), 7(2), 165–183 (1996).
33. Lyubarskiĭ Yu. I.: Frames in the Bargmann space of entire functions. In: Entire and subharmonic functions, volume 11 of Advances in Soviet Mathematics, pp. 167–180. American Mathematical Society, Providence, RI (1992).
34. Pauli W.: Die allgemeinen Prinzipien der Wellenmechanik, pp. 83–272. Springer, Berlin (1933).
35. Pruša Z., Balazs P., Søndergaard P. L.: A noniterative method for reconstruction of phase from STFT magnitude. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 25(5), 1154–1164 (2017).
36. Seip K.: Density theorems for sampling and interpolation in the Bargmann-Fock space. I. J. Reine Angew. Math., 429, 91–106 (1992).
37. Seip K., Wallstén R.: Density theorems for sampling and interpolation in the Bargmann-Fock space. II. J. Reine Angew. Math., 429, 107–113 (1992).
38. Shechtman Y., Eldar Y. C., Cohen O., Chapman H. N., Miao J., Segev M.: Phase retrieval with application to optical imaging: a contemporary overview. IEEE Signal Processing Magazine, 32(3), 87–109 (2015).
40. Waldspurger I., d’Aspremont A., Mallat S.: Phase recovery, MaxCut and complex semidefinite programming. Math. Program., 149(1-2, Ser. A), 47–81 (2015).