Descent gradient methods for nonsmooth minimization problems in ill-posed problems

Authors:

Highlights:

Abstract:

Gradient descent methods are the algorithms most frequently used to compute regularized solutions of inverse problems. They are applied either directly to the discrepancy term, which measures the difference between the operator evaluation and the data, or to a regularized version incorporating suitable penalty terms. In their basic form, gradient descent methods converge slowly. We aim to extend several optimization schemes, which have recently been introduced to accelerate these approaches, to more general penalty terms. In particular, we work in a general setting in infinite-dimensional Hilbert spaces and examine accelerated algorithms for regularization methods using total variation or sparsity constraints. To illustrate the efficiency of these algorithms, we apply them to a parameter identification problem for an elliptic partial differential equation with total variation regularization.
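To make the setting concrete: a standard regularized formulation minimizes $\frac{1}{2}\|F(u)-y\|^2 + \alpha\,\Phi(u)$, where the first term is the discrepancy and $\Phi$ is the penalty (e.g. total variation or a sparsity-enforcing $\ell^1$ norm). The sketch below is a minimal finite-dimensional illustration, not the paper's algorithm: it assumes a linear operator given by a matrix `A`, an $\ell^1$ penalty whose proximal map is soft thresholding, and Nesterov-style momentum as in FISTA; the step size, iteration count, and example data are all hypothetical.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal map of t * ||x||_1 (componentwise soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, y, alpha, n_iter=200):
    """Nesterov-accelerated proximal gradient descent for
    min_u 0.5 * ||A u - y||^2 + alpha * ||u||_1,
    a linear, finite-dimensional stand-in for the paper's setting."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    u = np.zeros(A.shape[1])
    v = u.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ v - y)            # gradient of the discrepancy term
        u_next = soft_threshold(v - grad / L, alpha / L)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        v = u_next + ((t - 1.0) / t_next) * (u_next - u)  # momentum step
        u, t = u_next, t_next
    return u

# Hypothetical example: sparse recovery from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
u_true = np.zeros(100)
u_true[:5] = 1.0
y = A @ u_true + 0.01 * rng.standard_normal(50)
u_hat = fista(A, y, alpha=0.1)
```

Without the momentum step this reduces to plain proximal gradient descent with its $O(1/k)$ rate; the Nesterov extrapolation improves this to $O(1/k^2)$, which is the kind of acceleration the paper extends to more general penalties and nonlinear operators.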

Keywords: Nonlinear inverse problems, Total variation regularization, Sparsity regularization, Nonnegative sparse regularization, Descent gradient method, Nesterov's accelerated algorithm

Article history: Received 18 April 2015; Revised 15 September 2015; Available online 14 December 2015; Version of Record 24 December 2015.

DOI: https://doi.org/10.1016/j.cam.2015.11.039