Solving Systems of Linear Equations (2)
Minimum Norm Solution of a System of Linear Equations
In the previous blog post, we discussed one scenario of a system of linear equations, where the number of unknowns is less than the number of equations, and introduced the least squares method. In this post, we cover the other scenario, where the number of equations is less than the number of unknowns. In this case (assuming the system is consistent), there are infinitely many solutions, but exactly one of them lies closest to the origin: the solution with the smallest norm, known as the minimum norm solution of the system of linear equations.
Consider the system of linear equations \(Ax=b\), where \(A \in \mathbb{R}^{m \times n}\) with \(m < n\) and \(A\) has full row rank, so that \(AA^\top\) is invertible. We aim to find its minimum norm solution, which is equivalent to solving the following optimization problem:
\[
\min_{x \in \mathbb{R}^n}\ \|x\|_2^2 \quad \text{s.t.} \quad Ax = b .
\]
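For concreteness, consider a toy instance: with \(A = \begin{pmatrix} 1 & 1 \end{pmatrix}\) and \(b = 2\), the constraint \(Ax = b\) is the line \(x_1 + x_2 = 2\). Every point on this line solves the system, and the point closest to the origin is \(x^\star = (1, 1)^\top\) with \(\|x^\star\|_2 = \sqrt{2}\); that point is the minimum norm solution.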
This is an equality-constrained optimization problem and can be solved with the method of Lagrange multipliers; here, however, we introduce an alternative approach.
The conclusion is that the minimum norm solution of this system is
\[
x^\star = A^\top (AA^\top)^{-1} b .
\]
Below is the proof:
Note first that \(Ax^\star = AA^\top(AA^\top)^{-1}b = b\), so \(x^\star\) is indeed a solution. Any solution \(x\) of \(Ax=b\) can be written as \(x = x^\star + z\), where \(z = x - x^\star\) satisfies \(Az = Ax - Ax^\star = b - b = 0\). Expanding the norm gives
\[
\|x\|_2^2 = \|x^\star + z\|_2^2 = \|x^\star\|_2^2 + \|z\|_2^2 + 2\,(x^\star)^\top z .
\]
Let \(x^\star = A^\top(AA^\top)^{-1}b\); substituting it into the rightmost term of the above equation gives \((x^\star)^\top z = b^\top(AA^\top)^{-1}Az = 0\). Thus, \(\|x\|_2^2 = \|x^\star\|_2^2 + \|z\|_2^2\). Since \(\|x^\star\|_2^2\) is a constant, and when \(z \neq 0\), \(\|z\|_2^2 > 0\) always holds, we can conclude that \(x^\star\) is the unique minimizer of this optimization problem, i.e., the minimum norm solution.
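To see the closed form numerically, here is a minimal NumPy sketch (a random full-row-rank \(A\) and the sizes \(m=3\), \(n=5\) are assumed purely for illustration). It computes \(x^\star = A^\top(AA^\top)^{-1}b\) by solving the small \(m \times m\) system rather than forming the inverse, and checks the result against NumPy's pseudoinverse, which also yields the minimum norm solution for a consistent underdetermined system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Underdetermined system: m equations, n unknowns, m < n.
m, n = 3, 5
A = rng.standard_normal((m, n))  # a random Gaussian matrix has full row rank almost surely
b = rng.standard_normal(m)

# Minimum norm solution x* = A^T (A A^T)^{-1} b,
# computed by solving (A A^T) y = b instead of inverting A A^T explicitly.
y = np.linalg.solve(A @ A.T, b)
x_star = A.T @ y

# x* satisfies the constraints and agrees with the pseudoinverse solution.
assert np.allclose(A @ x_star, b)
assert np.allclose(x_star, np.linalg.pinv(A) @ b)
print("x* =", x_star, "  ||x*||_2 =", np.linalg.norm(x_star))
```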
Kaczmarz Algorithm
The Kaczmarz algorithm is an iterative method for computing the minimum norm solution that avoids the step of computing \((AA^\top)^{-1}\), making the computation more efficient. Let \(a_i^\top\) denote the \(i\)-th row of \(A\) and \(b_i\) the \(i\)-th entry of \(b\), and take the initial point \(x_0 = 0\). The iterative formula is given directly as follows. During the first \(m\) iterations, i.e., \(k = 1, 2, \ldots, m\), we have:
\[
x_k = x_{k-1} + \frac{b_k - a_k^\top x_{k-1}}{\|a_k\|_2^2}\, a_k .
\]
For the \((m+1)\)-th iteration, reuse the first row of \(A\) and the first element of \(b\), i.e.,
\[
x_{m+1} = x_m + \frac{b_1 - a_1^\top x_m}{\|a_1\|_2^2}\, a_1 .
\]
Every \(m\) iterations, the process loops back to the first row, so the row index used at step \(k\) is \(i_k = ((k-1) \bmod m) + 1\), which can be regarded as part of the iterative formula. It can be proven that in the Kaczmarz algorithm, if \(x_0 = 0\), then as \(k \to \infty\), \(x_k \to A^\top (AA^\top)^{-1} b\), the minimum norm solution. The detailed proof is omitted here.
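As a sketch of the update rule above, the following NumPy implementation cycles through the rows of \(A\) starting from \(x_0 = 0\) (the random test system and the number of sweeps are arbitrary choices for illustration, not a principled stopping criterion) and compares the iterate with the closed-form minimum norm solution.

```python
import numpy as np

def kaczmarz(A, b, num_sweeps=500):
    """Cyclic Kaczmarz iteration started from x0 = 0.

    Each step projects the current iterate onto the hyperplane
    a_i^T x = b_i defined by one row of A, cycling through the rows.
    """
    m, n = A.shape
    x = np.zeros(n)                       # x0 = 0 keeps the iterates in the row space of A
    row_norms_sq = np.sum(A * A, axis=1)  # ||a_i||^2 for every row
    for _ in range(num_sweeps):
        for i in range(m):
            x = x + (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
    return x

rng = np.random.default_rng(1)
m, n = 3, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

x_k = kaczmarz(A, b)
x_star = A.T @ np.linalg.solve(A @ A.T, b)  # closed-form minimum norm solution
print("max difference:", np.max(np.abs(x_k - x_star)))
```

Starting from \(x_0 = 0\) is what makes the limit the minimum norm solution: every update adds a multiple of a row of \(A\), so the iterates never leave the row space of \(A\), which is exactly where \(A^\top(AA^\top)^{-1}b\) lives.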
Future articles will introduce general solutions of systems of linear equations, the pseudoinverse, and related topics. To be continued…