TY - JOUR
T1 - χ² tests for the choice of the regularization parameter in nonlinear inverse problems
AU - Mead, J. L.
AU - Hammerquist, C. C.
PY - 2013
Y1 - 2013
N2 - We address discrete nonlinear inverse problems with weighted least squares and Tikhonov regularization. Regularization is a way to add more information to the problem when it is ill-posed or ill-conditioned. However, it is still an open question as to how to weight this information. The discrepancy principle considers the residual norm to determine the regularization weight or parameter, while the χ² method [J. Mead, J. Inverse Ill-Posed Probl., 16 (2008), pp. 175-194; J. Mead and R. A. Renaut, Inverse Problems, 25 (2009), 025002; J. Mead, Appl. Math. Comput., 219 (2013), pp. 5210-5223; R. A. Renaut, I. Hnetynkova, and J. L. Mead, Comput. Statist. Data Anal., 54 (2010), pp. 3430-3445] uses the regularized residual. Using the regularized residual has the benefit of giving a clear χ² test with a fixed noise level when the number of parameters is equal to or greater than the number of data. Previous work with the χ² method has been for linear problems, and here we extend it to nonlinear problems. In particular, we determine the appropriate χ² tests for Gauss-Newton and Levenberg-Marquardt algorithms, and these tests are used to find a regularization parameter or weights on initial parameter estimate errors. This algorithm is applied to a two-dimensional cross-well tomography problem and a one-dimensional electromagnetic problem from [R. C. Aster, B. Borchers, and C. Thurber, Parameter Estimation and Inverse Problems, Academic Press, New York, 2005].
AB - We address discrete nonlinear inverse problems with weighted least squares and Tikhonov regularization. Regularization is a way to add more information to the problem when it is ill-posed or ill-conditioned. However, it is still an open question as to how to weight this information. The discrepancy principle considers the residual norm to determine the regularization weight or parameter, while the χ² method [J. Mead, J. Inverse Ill-Posed Probl., 16 (2008), pp. 175-194; J. Mead and R. A. Renaut, Inverse Problems, 25 (2009), 025002; J. Mead, Appl. Math. Comput., 219 (2013), pp. 5210-5223; R. A. Renaut, I. Hnetynkova, and J. L. Mead, Comput. Statist. Data Anal., 54 (2010), pp. 3430-3445] uses the regularized residual. Using the regularized residual has the benefit of giving a clear χ² test with a fixed noise level when the number of parameters is equal to or greater than the number of data. Previous work with the χ² method has been for linear problems, and here we extend it to nonlinear problems. In particular, we determine the appropriate χ² tests for Gauss-Newton and Levenberg-Marquardt algorithms, and these tests are used to find a regularization parameter or weights on initial parameter estimate errors. This algorithm is applied to a two-dimensional cross-well tomography problem and a one-dimensional electromagnetic problem from [R. C. Aster, B. Borchers, and C. Thurber, Parameter Estimation and Inverse Problems, Academic Press, New York, 2005].
KW - Covariance
KW - Least squares
KW - Nonlinear
KW - Regularization
UR - http://www.scopus.com/inward/record.url?scp=84887373652&partnerID=8YFLogxK
U2 - 10.1137/12088447X
DO - 10.1137/12088447X
M3 - Article
AN - SCOPUS:84887373652
SN - 0895-4798
VL - 34
SP - 1213
EP - 1230
JO - SIAM Journal on Matrix Analysis and Applications
JF - SIAM Journal on Matrix Analysis and Applications
IS - 3
ER -