The Regularized Weak Functional Matching Pursuit for linear inverse problems

2019 ◽  
Vol 27 (3) ◽  
pp. 317-340 ◽  
Author(s):  
Max Kontak ◽  
Volker Michel

Abstract In this work, we present the so-called Regularized Weak Functional Matching Pursuit (RWFMP) algorithm, which is a weak greedy algorithm for linear ill-posed inverse problems. In comparison to the Regularized Functional Matching Pursuit (RFMP), on which it is based, the RWFMP possesses an improved theoretical analysis including the guaranteed existence of the iterates, the convergence of the algorithm for inverse problems in infinite-dimensional Hilbert spaces, and a convergence rate, which is also valid for the particular case of the RFMP. Another improvement is the removal of the previously required and difficult-to-verify semi-frame condition. Furthermore, we provide an a priori parameter choice rule for the RWFMP, which yields a convergent regularization. Finally, we give a numerical example, which shows that the "weak" approach is also beneficial from the computational point of view. By applying an improved search strategy in the algorithm, which is motivated by the weak approach, we can save up to 90% of the computation time in comparison to the RFMP, while the accuracy of the solution is only marginally affected.
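The computational saving of the weak approach comes from not insisting on the single best dictionary element in each greedy step. The following is a minimal finite-dimensional sketch of a weak matching pursuit step (the actual RWFMP operates on functionals in infinite-dimensional Hilbert spaces and includes a Tikhonov-type penalty); the weakness parameter `rho` and all names here are our own illustrative choices, not the authors' notation.

```python
import numpy as np

def weak_matching_pursuit(D, y, rho=0.5, n_iter=10):
    """Greedily approximate y by columns (atoms) of the dictionary D.

    Instead of scanning for the single best atom, a *weak* greedy step
    accepts the first atom whose correlation reaches rho times the maximal
    correlation, which is where the computational savings come from.
    """
    r = y.copy()                       # current residual
    coef = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ r                 # correlations with all atoms
        best = np.max(np.abs(corr))
        if best < 1e-12:
            break
        # accept the first atom within a factor rho of the best one
        k = next(i for i, c in enumerate(np.abs(corr)) if c >= rho * best)
        coef[k] += corr[k]             # atoms assumed unit-norm
        r -= corr[k] * D[:, k]
    return coef, r

rng = np.random.default_rng(0)
D = rng.normal(size=(20, 50))
D /= np.linalg.norm(D, axis=0)         # unit-norm atoms
y = 2.0 * D[:, 3] - 1.5 * D[:, 17]
coef, r = weak_matching_pursuit(D, y, rho=0.8, n_iter=50)
residual_norm = np.linalg.norm(r)
```

Each accepted atom strictly decreases the residual norm, so the sketch converges on this toy problem; the paper's improved search strategy exploits the same tolerance to avoid a full scan of the dictionary.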

Geophysics ◽  
1994 ◽  
Vol 59 (5) ◽  
pp. 818-829 ◽  
Author(s):  
John C. VanDecar ◽  
Roel Snieder

It is not uncommon now for geophysical inverse problems to be parameterized by 10^4 to 10^5 unknowns associated with upwards of 10^6 to 10^7 data constraints. The matrix problem defining the linearization of such a system (e.g., Am = b) is usually solved with a least-squares criterion (minimizing ||Am - b||^2). The size of the matrix, however, discourages the direct solution of the system and researchers often turn to iterative techniques such as the method of conjugate gradients to obtain an estimate of the least-squares solution. These iterative methods take advantage of the sparseness of A, which often has as few as 2-3 percent of its elements nonzero, and do not require the calculation (or storage) of the matrix A^T A. Although there are usually many more data constraints than unknowns, these problems are, in general, underdetermined and therefore require some sort of regularization to obtain a solution. When the regularization is simple damping, the conjugate gradients method tends to converge in relatively few iterations. However, when derivative-type regularization is applied (first derivative constraints to obtain the flattest model that fits the data; second derivative to obtain the smoothest), the convergence of parts of the solution may be drastically inhibited. In a series of 1-D examples and a synthetic 2-D crosshole tomography example, we demonstrate this problem and also suggest a method of accelerating the convergence through the preconditioning of the conjugate gradient search directions. We derive a 1-D preconditioning operator for the case of first derivative regularization using a WKBJ approximation. We have found that preconditioning can reduce the number of iterations necessary to obtain satisfactory convergence by up to an order of magnitude.
The conclusions we present are also relevant to Bayesian inversion, where a smoothness constraint is imposed through an a priori covariance of the model.
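The setup the abstract describes can be sketched numerically: conjugate gradients applied to the regularized normal equations (A^T A + mu L^T L) m = A^T b, where L is a first-difference operator. The sketch below uses a simple diagonal (Jacobi) preconditioner on an invented dense toy problem, not the authors' WKBJ-derived operator, and all sizes and parameters are assumptions for illustration.

```python
import numpy as np

def cg(M, rhs, precond=None, tol=1e-8, max_iter=500):
    """Plain (optionally preconditioned) conjugate gradients for M x = rhs."""
    x = np.zeros_like(rhs)
    r = rhs - M @ x
    z = precond(r) if precond else r
    p = z.copy()
    iters = 0
    for iters in range(1, max_iter + 1):
        Mp = M @ p
        alpha = (r @ z) / (p @ Mp)
        x += alpha * p
        r_new = r - alpha * Mp
        if np.linalg.norm(r_new) < tol * np.linalg.norm(rhs):
            r = r_new
            break
        z_new = precond(r_new) if precond else r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x, iters

rng = np.random.default_rng(1)
n = 80
A = rng.normal(size=(120, n))
b = rng.normal(size=120)
L = (np.eye(n) - np.eye(n, k=1))[:-1]     # first-difference (flattening) operator
M = A.T @ A + 10.0 * L.T @ L              # regularized normal equations
rhs = A.T @ b

d = np.diag(M)                             # Jacobi preconditioner
x_plain, it_plain = cg(M, rhs)
x_pre, it_pre = cg(M, rhs, precond=lambda r: r / d)
```

On this well-conditioned toy matrix the Jacobi preconditioner gains little; the point of the paper is precisely that a preconditioner tailored to the derivative regularization (their WKBJ operator) is what recovers fast convergence.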


2020 ◽  
Vol 66 (11) ◽  
pp. 7180-7195
Author(s):  
Rasika Rajapakshage ◽  
Marianna Pensky

2012 ◽  
Vol 86 (1) ◽  
pp. 50-63 ◽  
Author(s):  
ALICE C. NIEMEYER ◽  
TOMASZ POPIEL ◽  
CHERYL E. PRAEGER

Abstract Let G be a finite d-dimensional classical group and p a prime divisor of ∣G∣ distinct from the characteristic of the natural representation. We consider a subfamily of p-singular elements in G (elements with order divisible by p) that leave invariant a subspace X of the natural G-module of dimension greater than d/2 and either act irreducibly on X or preserve a particular decomposition of X into two equal-dimensional irreducible subspaces. We proved in a recent paper that the proportion in G of these so-called p-abundant elements is at least an absolute constant multiple of the best currently known lower bound for the proportion of all p-singular elements. From a computational point of view, the p-abundant elements generalise another class of p-singular elements which underpin recognition algorithms for finite classical groups, and it is our hope that p-abundant elements might lead to improved versions of these algorithms. As a step towards this, here we present efficient algorithms to test whether a given element is p-abundant, both for a known prime p and for the case where p is not known a priori.
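The basic notion of p-singularity (order divisible by p) can be illustrated in a much simpler setting than classical groups. As a toy analogue of such a membership test (not the authors' algorithm), the sketch below checks p-singularity of a permutation: its order is the lcm of its cycle lengths, so for a prime p the order is divisible by p exactly when p divides some cycle length.

```python
import math
from functools import reduce

def cycle_lengths(perm):
    """Cycle lengths of a permutation given as a list mapping i -> perm[i]."""
    seen = [False] * len(perm)
    lengths = []
    for start in range(len(perm)):
        if not seen[start]:
            length, j = 0, start
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                length += 1
            lengths.append(length)
    return lengths

def order(perm):
    """Order of the permutation: lcm of its cycle lengths."""
    return reduce(math.lcm, cycle_lengths(perm), 1)

def is_p_singular(perm, p):
    # for prime p: p divides the lcm of the cycle lengths
    # iff p divides at least one cycle length
    return any(length % p == 0 for length in cycle_lengths(perm))

# a 7-cycle together with a 2-cycle on 9 points: order lcm(7, 2) = 14
g = [1, 2, 3, 4, 5, 6, 0, 8, 7]
ord_g = order(g)            # 14
sing7 = is_p_singular(g, 7) # True
sing3 = is_p_singular(g, 3) # False
```

The efficient tests in the paper work with matrix representatives and must in addition verify the invariant-subspace conditions that make an element p-abundant, which is far beyond this sketch.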


2021 ◽  
Author(s):  
Tim Nikolas Jahn

This thesis is concerned with linear inverse problems as they arise in a wide range of applications. Such problems are characterized by being typically ill-posed, which primarily affects stability: even the smallest measurement errors have enormous consequences for the reconstruction of the quantity to be determined. To enable a robust reconstruction, the problem must be regularized, that is, replaced by a whole family of modified, stable approximations. The concrete choice from this family, the so-called parameter choice strategy, then relies on additional ad hoc assumptions about the measurement error. Typically, in the deterministic case this is knowledge of an upper bound on the norm of the data error; in the stochastic case, it is knowledge of the distribution of the error, or a restriction to a particular class of distributions, usually Gaussian ones. The present thesis investigates how this information can be obtained under the assumption that the measurement is repeatable. The data are averaged over several measurements, which follow an arbitrary, unknown distribution, and the error bound that is indispensable for solving the problem is estimated from them. A classical regularization method is then applied to the mean and the estimator. The regularization methods treated are mostly filter-based methods, which rely on the spectral decomposition of the problem. As parameter choice strategies, both simple a priori choices and the discrepancy principle as an adaptive method are considered. Convergence is proved for unknown arbitrary error distributions with finite variance as well as for white noise (with respect to general discretizations).
Finally, convergence of the discrepancy principle for a stochastic gradient method is established, as a first rigorous analysis of an adaptive stopping rule for such a non-filter-based regularization method.
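The central idea of the thesis, estimating the otherwise-unknown error bound from repeated measurements, can be sketched on a toy problem: average n noisy measurements of y = Ax, estimate the noise level of the mean from the sample variance, and pick the Tikhonov parameter by the discrepancy principle. Everything below (the model, tau, the parameter grid) is our own illustrative assumption, not the thesis's construction.

```python
import numpy as np

rng = np.random.default_rng(2)
m, k, n_rep = 60, 30, 25
A = rng.normal(size=(m, k)) / np.sqrt(m)
x_true = rng.normal(size=k)
y_clean = A @ x_true

# repeated measurements with (here Gaussian, in the thesis arbitrary) noise
noise = 0.05 * rng.standard_normal(size=(n_rep, m))
Y = y_clean + noise
y_bar = Y.mean(axis=0)                       # averaged data

# estimate the error level of the mean from the sample standard errors
se = Y.std(axis=0, ddof=1) / np.sqrt(n_rep)
delta_hat = np.linalg.norm(se)

# discrepancy principle: decrease the Tikhonov parameter until the
# residual drops below tau times the estimated noise level
tau = 1.5
alpha_chosen = None
for alpha in 10.0 ** np.arange(1, -8, -0.5):
    x_alpha = np.linalg.solve(A.T @ A + alpha * np.eye(k), A.T @ y_bar)
    if np.linalg.norm(A @ x_alpha - y_bar) <= tau * delta_hat:
        alpha_chosen = alpha
        break
```

The point is that no a priori knowledge of the noise norm or its distribution enters: `delta_hat` is computed from the repetitions themselves, which is the kind of estimated bound the thesis feeds into classical (filter-based) regularization.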


2014 ◽  
Vol 7 (4) ◽  
pp. 5623-5659 ◽  
Author(s):  
J. Ray ◽  
J. Lee ◽  
V. Yadav ◽  
S. Lefantzi ◽  
A. M. Michalak ◽  
...  

Abstract. We present a sparse reconstruction scheme that can also be used to ensure non-negativity when fitting wavelet-based random field models to limited observations in non-rectangular geometries. The method is relevant when multiresolution fields are estimated using linear inverse problems. Examples include the estimation of emission fields for many anthropogenic pollutants using atmospheric inversion or hydraulic conductivity in aquifers from flow measurements. The scheme is based on three new developments. Firstly, we extend an existing sparse reconstruction method, Stagewise Orthogonal Matching Pursuit (StOMP), to incorporate prior information on the target field. Secondly, we develop an iterative method that uses StOMP to impose non-negativity on the estimated field. Finally, we devise a method, based on compressive sensing, to limit the estimated field within an irregularly shaped domain. We demonstrate the method on the estimation of fossil-fuel CO2 (ffCO2) emissions in the lower 48 states of the US. The application uses a recently developed multiresolution random field model and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of two. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
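The paper's second development, imposing non-negativity through an iterative use of a greedy sparse solver, can be sketched in miniature. The code below is a simplified stand-in for StOMP with invented parameters: admit all atoms whose correlation with the residual exceeds a fraction of the peak, solve least squares on the active set, and iteratively drop atoms whose coefficients come out negative.

```python
import numpy as np

def nonneg_greedy(A, b, n_sweeps=10, thresh=0.5):
    """Stagewise greedy fit of b by columns of A with x >= 0 enforced."""
    active = set()
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        r = b - A @ x
        corr = A.T @ r
        peak = np.max(corr)
        if peak <= 1e-12:
            break
        # StOMP-style stage: admit all atoms above a fraction of the peak
        active |= {i for i in range(len(corr)) if corr[i] >= thresh * peak}
        idx = sorted(active)
        sol, *_ = np.linalg.lstsq(A[:, idx], b, rcond=None)
        # enforce non-negativity by pruning offending atoms and re-solving
        while np.any(sol < 0):
            idx = [i for i, s in zip(idx, sol) if s >= 0]
            if not idx:
                return np.zeros(A.shape[1])
            sol, *_ = np.linalg.lstsq(A[:, idx], b, rcond=None)
        active = set(idx)
        x = np.zeros(A.shape[1])
        x[idx] = sol
    return x

rng = np.random.default_rng(3)
A = np.abs(rng.normal(size=(40, 100)))
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(100)
x_true[[5, 42, 77]] = [1.0, 2.0, 0.5]
b = A @ x_true
x_hat = nonneg_greedy(A, b)
```

As the abstract notes, this route keeps the problem linear: non-negativity is imposed by support selection rather than by a log-transform of the field, so no strong nonlinearity is introduced.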

