Abstract
We consider a parameter choice method (the minimum bound method) for the regularization of linear ill-posed problems, developed by Raus and Gfrerer for the case of continuous, deterministic data. The method is adapted and analysed in a discrete, stochastic framework. It is shown that asymptotically, as the number of data points approaches infinity, the method (with a certain constant set to 2) behaves like an unbiased error method, which selects the parameter by minimizing an unbiased estimate of the expected squared error in the regularized solution. The method is also shown to be weakly asymptotically optimal, in that the `expected' estimate achieves the optimal rate of convergence with respect to the expected squared error criterion, and the selected parameter has the optimal rate of decay.
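To illustrate the general idea of an unbiased error method (not the paper's specific minimum bound method), the following sketch chooses a Tikhonov regularization parameter for a discrete ill-posed problem by minimizing an unbiased risk estimate of UPRE type. The test problem, the noise level, and the use of the predictive-risk estimate are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative sketch only: an unbiased-risk parameter choice (UPRE-style)
# for Tikhonov regularization of a synthetic discrete ill-posed problem.
rng = np.random.default_rng(0)
n = 100

# Mildly ill-posed operator: orthogonal factors with decaying singular values.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 1.0 / (1 + np.arange(n)) ** 2          # singular values decaying like k^-2
A = U @ np.diag(s) @ V.T

x_true = np.sin(np.linspace(0, np.pi, n))  # smooth "true" solution (assumed)
sigma = 1e-3                               # known noise standard deviation
b = A @ x_true + sigma * rng.standard_normal(n)

def upre(alpha):
    """Unbiased estimate of the predictive risk for Tikhonov parameter alpha."""
    f = s**2 / (s**2 + alpha)              # Tikhonov filter factors
    bt = U.T @ b                           # data in the singular basis
    resid = np.sum(((1 - f) * bt) ** 2)    # squared residual norm
    return resid + 2 * sigma**2 * np.sum(f) - n * sigma**2

# Select the parameter by minimizing the unbiased estimate over a grid.
alphas = np.logspace(-12, 0, 200)
alpha_star = alphas[np.argmin([upre(a) for a in alphas])]

# Regularized solution at the selected parameter.
f = s / (s**2 + alpha_star)
x_reg = V @ (f * (U.T @ b))
```

As the number of data points grows, minimizers of such unbiased risk estimates concentrate near the minimizer of the true expected risk, which is the sense in which the abstract's asymptotic comparison is made.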