Solving Least-Squares Problems via a Double-Optimal Algorithm and a Variant of the Karush–Kuhn–Tucker Equation for Over-Determined Systems

Bibliographic Details
Published in: Algorithms, Vol. 17, No. 5, p. 211
Main Authors: Liu, Chein-Shan; Kuo, Chung-Lun; Chang, Chih-Wen
Format: Journal Article
Language:English
Published: Basel: MDPI AG, 01.05.2024
ISSN: 1999-4893
Description
Summary: A double optimal solution (DOS) of the least-squares problem Ax = b, A ∈ R^(q×n) with q ≠ n, is derived in an (m+1)-dimensional varying affine Krylov subspace (VAKS); two minimization techniques exactly determine the m+1 expansion coefficients of the solution x in the VAKS. The minimal-norm solution is obtained automatically, regardless of whether the linear system is consistent or inconsistent. A new double optimal algorithm (DOA) is created; it saves computation time by inverting only an m×m positive definite matrix at each iteration step, where m ≪ min(n, q). The properties of the DOA are investigated and an estimate of the residual error is provided. The residual norms are proven to be strictly decreasing over the iterations; hence, the DOA is absolutely convergent. Numerical tests demonstrate the efficiency of the DOA for solving least-squares problems; the DOA is applicable regardless of whether q < n or q > n. The Moore–Penrose inverse matrix is also computed with the DOA, and the accuracy and efficiency of the proposed method are demonstrated. The (m+1)-dimensional VAKS differs from the traditional m-dimensional affine Krylov subspace used in the conjugate gradient (CG)-type iterative algorithms CGNR (or CGLS) and CGNE (or Craig's method) for solving least-squares problems with q > n. We propose a variant of the Karush–Kuhn–Tucker equation and apply partial-pivoting Gaussian elimination to solve it; the variant outperforms the original Karush–Kuhn–Tucker equation, CGNR, and CGNE for solving over-determined linear systems. Our main contribution is a double-optimization-based iterative algorithm in a varying affine Krylov subspace that solves least-squares problems effectively and accurately, even for a dense and ill-conditioned matrix A with q ≪ n or q ≫ n.
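
As context for the comparison in the abstract, the sketch below illustrates the two baselines it names: the classical (unmodified) KKT, or augmented, formulation of the least-squares problem solved by partial-pivoting Gaussian elimination, and a plain CGNR/CGLS iteration. This is a minimal illustration only; it is not the authors' KKT variant and not their DOA, and the function names and the random test problem are invented for the example.

```python
# Hedged sketch of the two baselines named in the abstract, under the
# assumption of an over-determined system A in R^(q x n), q > n.
# Not the paper's KKT variant or DOA; textbook formulations only.
import numpy as np

def kkt_least_squares(A, b):
    """Solve min ||b - A x||_2 via the classical augmented KKT system
        [ I    A ] [r]   [b]
        [ A^T  0 ] [x] = [0],   with r = b - A x,
    whose second block row enforces the normal equations A^T r = 0."""
    q, n = A.shape
    K = np.block([[np.eye(q), A],
                  [A.T, np.zeros((n, n))]])
    rhs = np.concatenate([b, np.zeros(n)])
    sol = np.linalg.solve(K, rhs)   # LU with partial pivoting (LAPACK)
    return sol[q:]                  # the least-squares solution x

def cgnr(A, b, tol=1e-12, maxit=1000):
    """CGNR (CGLS) baseline: CG applied implicitly to A^T A x = A^T b."""
    n = A.shape[1]
    x = np.zeros(n)
    r = b - A @ x
    s = A.T @ r                     # residual of the normal equations
    p = s.copy()
    gamma = s @ s
    for _ in range(maxit):
        q_vec = A @ p
        alpha = gamma / (q_vec @ q_vec)
        x += alpha * p
        r -= alpha * q_vec
        s = A.T @ r
        gamma_new = s @ s
        if np.sqrt(gamma_new) < tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

# Invented test problem: dense over-determined system with q >> n.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 30))
b = rng.standard_normal(200)
x_kkt = kkt_least_squares(A, b)
x_cg = cgnr(A, b)
x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x_kkt - x_ref), np.linalg.norm(x_cg - x_ref))
```

Note that the augmented KKT matrix is symmetric indefinite, so a symmetric indefinite factorization would normally be preferred; np.linalg.solve is used here only to mirror the partial-pivoting Gaussian elimination the abstract mentions.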
DOI: 10.3390/a17050211