1 Introduction
1.1 The Bargmann Transform and its Zeros
1.2 Computation of Zero Sets
1.2.1 Thresholding
1.2.2 Extrapolation
1.2.3 Minimal Grid Neighbors
1.3 Contribution
2 Main Result
2.1 The Adaptive Minimal Neighbors Algorithm
2.2 Performance Guarantees for AMN
2.2.1 Input Model
2.2.2 Discussion of the Model
2.2.3 Performance Analysis
- The AMN algorithm does not require knowledge of the noise level \(\sigma \) and is homogeneous in the sense that \(F\) and \(cF\), with \(c \in \mathbb {C} \setminus \{0\}\), produce the same output.
- Within the estimated success probability, the computation is accurate up to a constant multiple of the grid spacing (\(2\delta \) in Theorem 2.2).
- The analysis concerns an arbitrary deterministic signal impacted by noise and is uniform over the class (2.11). As usual in smoothed analysis, the success probability grows as the signal-to-noise ratio \({\mathrm {A}}/\sigma \) decreases, because randomness helps preclude the highly atypical features that could cause the algorithm to fail [25]. In fact, in the noiseless limit \(\sigma =0\), the algorithm could fail completely, since \(F^1\) can be freely prescribed on any finite subset of the plane [24]. For example, irrespective of its values on the acquisition grid, the deterministic function \(F^1\) could have a cluster of zeros of small diameter that would trigger only a single positive minimality test. The proof of Theorem 2.2 shows that such examples are fragile, as the addition of even a moderate amount of noise regularizes the geometry of the zero set.
- Up to a small boundary effect, the guarantees in Theorem 2.2 comprise an estimate on the Wasserstein distance between the atomic measures supported on \(\{F=0\} \cap \Omega _L\) and on the computed set \(\mathrm {Z}\). More precisely, for a tolerance \(\theta >0\), let us define the boundary-corrected Wasserstein pseudo-distance between two sets \(U,V \subseteq \mathbb {C}\) as
$$\begin{aligned} \mathrm {W}_{L,\theta }(U,V) = \inf _\Phi \max _{z \in U} | \Phi (z)-z|_\infty , \end{aligned}$$
where the infimum is taken over all injective maps \(\Phi :U \rightarrow V\) such that \(V \cap \Omega _{L-\theta }\subseteq \Phi (U)\). (The definition is not symmetric in \(U\) and \(V\), but this is not important for our purpose.) Then, Theorem 2.2 reads
$$\begin{aligned} \mathbb {P} \Big [ \mathrm {W}_{L,2\delta }\left( \{F=0\} \cap \Omega _L,\mathrm {Z}\right) > 2\delta \Big ] \le C L^2 \exp \bigg (\frac{{\mathrm {A}}^2}{8\sigma ^2}\bigg ) \max \left\{ 1,\log ^2(1/\delta )\right\} \delta ^{4}. \end{aligned}$$
(A brute-force illustration of \(\mathrm {W}_{L,\theta }\) on finite point sets is sketched after this list.)
- The presented analysis concerns a signal contaminated with complex white noise. This is a mathematical simplification; we believe that with more technical arguments a similar result can be derived for real white noise. The case of colored noise seems more challenging and will be the object of future work.
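The boundary-corrected pseudo-distance \(\mathrm {W}_{L,\theta }\) is a bottleneck-type matching cost, so on small finite sets it can be evaluated exactly by brute force. The following Python sketch is only illustrative and is not part of the paper's code: it assumes \(\Omega _L=[-L/2,L/2]^2\) and reads \(|\cdot |_\infty \) as the sup-norm on \(\mathbb {C}\simeq \mathbb {R}^2\); the function names are ours. For larger sets one would instead binary-search the cost threshold and test feasibility with a bipartite matching.

```python
import itertools
import math

def supnorm(w):
    # |w|_infty for a complex number w, i.e. max(|Re w|, |Im w|)
    return max(abs(w.real), abs(w.imag))

def in_omega(z, side):
    # Membership in Omega_side; we ASSUME Omega_L = [-L/2, L/2]^2 here,
    # which is only our reading of the domain convention.
    return supnorm(z) <= side / 2

def wasserstein_pseudo(U, V, L, theta):
    """Brute-force evaluation of W_{L,theta}(U, V) for small finite sets of
    complex points: infimum over injective maps Phi: U -> V whose image
    contains V intersected with Omega_{L-theta}, of max_z |Phi(z) - z|_infty.
    Returns math.inf when no admissible map exists."""
    U, V = list(U), list(V)
    required = {v for v in V if in_omega(v, L - theta)}
    if len(U) > len(V):
        return math.inf          # no injective map U -> V exists at all
    best = math.inf
    for image in itertools.permutations(V, len(U)):   # injective assignments
        if not required.issubset(image):
            continue             # image must cover the interior part of V
        cost = max((supnorm(phi_z - z) for z, phi_z in zip(U, image)), default=0.0)
        best = min(best, cost)
    return best

# toy usage: a perturbed copy of U plus one extra point near the boundary of Omega_4
U = [0.1 + 0.2j, -1.0 - 1.0j]
V = [0.15 + 0.18j, -1.05 - 0.97j, 1.9 + 1.9j]
print(wasserstein_pseudo(U, V, L=4.0, theta=0.5))   # ~0.05
```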
2.3 Numerical Experiments
2.4 Organization
3 Preliminaries
3.1 Notation
3.2 Bargmann–Fock Shifts and Stationarity of Amplitudes
3.3 Minimum Principle for Amplitudes
3.4 Linearization
3.5 Almost Multiple Zeros
3.6 First Intensity of Zeros
4 Proof of Theorem 2.2
4.1 Preparations
4.2 Excluding Bad Events
4.3 The True Zeros are Adequately Separated
4.4 Linearization Holds with Estimated Slopes
4.5 After the Selection Step, Each True Zero is Close to a Computed Zero
4.6 After the Selection Step, Each Computed Zero is Close to a True Zero
4.7 Definition of the Map \(\Phi \)
4.8 Verification of the Properties of \(\Phi \)
5 Numerical Experiments
5.1 Simulation
5.2 Specifications for the Experiments
5.2.1 Implementation of the Sieving Step in AMN
5.2.2 Specification of the Compared Algorithms
- AMN: the AMN algorithm run with domain length \(L-1\) and with sieving step S1 implemented as described in Sect. 5.2.1,
- MGN: outputs the set of all grid points \(\lambda \in \Lambda _{L-1}\) such that
$$\begin{aligned} e^{-\frac{1}{2}|\lambda |^{2}}|F(\lambda )| \le e^{-\frac{1}{2}|\mu |^{2}}|F(\mu )|, \quad |\lambda -\mu |_{\infty }=\delta . \end{aligned}$$
(5.6)
- ST: outputs the set of grid points \(\lambda \in \Lambda _{L-1}\) obtained as the result of applying the sieving algorithm S1 to
$$\begin{aligned} \left\{ \lambda \in \Lambda _{L-1}: e^{-\frac{1}{2}|\lambda |^2} |F(\lambda )| \le 2 \delta \right\} . \end{aligned}$$
(Both this thresholding set and the MGN rule (5.6) are illustrated in the sketch following this list.)
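To make the comparison concrete, here is a minimal Python sketch of the MGN rule (5.6) and of the thresholding set that ST hands to the sieving algorithm S1, acting on an array of weighted magnitudes \(e^{-\frac{1}{2}|\lambda |^2}|F(\lambda )|\) sampled on a square grid of spacing \(\delta \). The sieving step itself is not reproduced, boundary handling is simplified, and all function names are ours.

```python
import numpy as np

def mgn_mask(M):
    """MGN rule (5.6): keep grid points whose weighted magnitude
    e^{-|lambda|^2/2}|F(lambda)| is <= that of all eight grid neighbors
    at sup-distance one grid step. M is a 2D array of these magnitudes."""
    mask = np.ones(M.shape, dtype=bool)
    mask[0, :] = mask[-1, :] = False     # drop the boundary ring: its
    mask[:, 0] = mask[:, -1] = False     # neighbors are not all available
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            # np.roll wraps around, but wrapped comparisons only affect the
            # boundary ring, which was excluded above
            neighbor = np.roll(np.roll(M, di, axis=0), dj, axis=1)
            mask &= M <= neighbor
    return mask

def st_candidates(M, delta):
    """Thresholding set handed to the sieving algorithm S1 by ST:
    grid points with weighted magnitude <= 2*delta."""
    return M <= 2.0 * delta

# toy usage on a random surrogate of the weighted magnitudes
rng = np.random.default_rng(0)
delta = 2.0 ** -4
M = np.abs(rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32)))
print(mgn_mask(M).sum(), st_candidates(M, delta).sum())
```

On actual data, `M` would hold \(e^{-\frac{1}{2}|\lambda |^2}|F(\lambda )|\) evaluated on \(\Lambda _{L-1}\); the random surrogate above only exercises the selection logic.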
5.2.3 Varying the Grid Resolution
5.3 Faithfulness of Simulation of Zero Sets
5.3.1 No Deterministic Signal
\(\delta \) | AMN | MGN | ST |
---|---|---|---|
\( 2^{-4} \) | \( -0.00120 \pm 0.01171 \) | \( -0.00048 \pm 0.01150 \) | \( +0.01868 \pm 0.02858 \) |
\( 2^{-5} \) | \( -0.00062 \pm 0.01164 \) | \( -0.00057 \pm 0.01162 \) | \( +0.02189 \pm 0.04047 \) |
\( 2^{-6} \) | \( -0.00065 \pm 0.01156 \) | \( -0.00064 \pm 0.01155 \) | \( +0.02280 \pm 0.05391 \) |
\( 2^{-7} \) | \( -0.00068 \pm 0.01153 \) | \( -0.00068 \pm 0.01153 \) | \( +0.02354 \pm 0.06774 \) |
\( 2^{-8} \) | \( -0.00062 \pm 0.01155 \) | \( -0.00062 \pm 0.01155 \) | \( +0.02424 \pm 0.07429 \) |
\( 2^{-9} \) | \( -0.00067 \pm 0.01158 \) | \( -0.00067 \pm 0.01158 \) | \( +0.02390 \pm 0.07237 \) |
5.3.2 Deterministic Signal Plus Noise
\(f^1\) | \(F^1\) |
---|---|
\(f^1(t) = \big (\tfrac{2}{\pi }\big )^{\frac{1}{4}}\,e^{- t^2}\) | \(F^1(\zeta )=1\) |
\(f^1(t) = \big (\tfrac{2}{\pi }\big )^{\frac{1}{4}}\,2te^{- t^2}\) | \(F^1(\zeta )=\zeta \) |
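The first row can be checked with a Gaussian integral. One Bargmann-transform normalization consistent with both rows of the table, written here only as an illustrative assumption (the precise convention is fixed in Sect. 1.1), is \(F^1(\zeta ) = \big (\tfrac{2}{\pi }\big )^{\frac{1}{4}} \int _{\mathbb {R}} f^1(t)\, e^{2t\zeta - t^2 - \frac{\zeta ^2}{2}}\,dt\). Completing the square then gives
$$\begin{aligned} F^1(\zeta ) = \Big (\tfrac{2}{\pi }\Big )^{\frac{1}{2}} \int _{\mathbb {R}} e^{-2t^2 + 2t\zeta - \frac{\zeta ^2}{2}}\,dt = \Big (\tfrac{2}{\pi }\Big )^{\frac{1}{2}} \int _{\mathbb {R}} e^{-2(t-\zeta /2)^2}\,dt = \Big (\tfrac{2}{\pi }\Big )^{\frac{1}{2}} \sqrt{\tfrac{\pi }{2}} = 1, \end{aligned}$$
where the contour shift is valid for complex \(\zeta \) by analyticity; the analogous computation with \(f^1(t)=\big (\tfrac{2}{\pi }\big )^{\frac{1}{4}}\,2te^{-t^2}\) produces the extra factor \(\zeta \).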
5.4 Failure Probabilities and Consistency as Resolution Decreases
\(\delta \) | \(f^1=0\): AMN | MGN | ST | \(f^1=\exp (-t^2)\), \({\mathrm {A}}=1\): AMN | MGN | ST | \(f^1=\exp (-t^2)\), \({\mathrm {A}}=100\): AMN | MGN | ST | \(f^1= t \exp (-t^2)\), \({\mathrm {A}}=1\): AMN | MGN | ST | \(f^1= t \exp (-t^2)\), \({\mathrm {A}}=100\): AMN | MGN | ST
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
\( 2^{-4} \) | 0.082 | 0.001 | 0.665 | 0.07 | 0.00 | 0.67 | 0.13 | 0.00 | 0.87 | 0.07 | 0.00 | 0.64 | 0.18 | 0.00 | 1.00 |
\( 2^{-5} \) | 0.007 | 0.000 | 0.536 | 0.00 | 0.00 | 0.50 | 0.00 | 0.00 | 0.75 | 0.00 | 0.00 | 0.52 | 0.01 | 0.00 | 1.00 |
\( 2^{-6} \) | 0.001 | 0.000 | 0.419 | 0.00 | 0.00 | 0.41 | 0.00 | 0.00 | 0.70 | 0.00 | 0.00 | 0.42 | 0.00 | 0.00 | 1.00 |
\( 2^{-7} \) | 0.000 | 0.000 | 0.389 | 0.00 | 0.00 | 0.32 | 0.00 | 0.00 | 0.72 | 0.00 | 0.00 | 0.34 | 0.00 | 0.00 | 1.00 |
\( 2^{-8} \) | 0.000 | 0.000 | 0.369 | 0.00 | 0.00 | 0.29 | 0.00 | 0.00 | 0.65 | 0.00 | 0.00 | 0.33 | 0.00 | 0.00 | 1.00 |
\( 2^{-9} \) | 0.000 | 0.000 | 0.359 | 0.00 | 0.00 | 0.31 | 0.00 | 0.00 | 0.71 | 0.00 | 0.00 | 0.32 | 0.00 | 0.00 | 1.00 |