
Moore–Penrose inverse: Difference between revisions

Article snapshot taken from Wikipedia with Creative Commons Attribution-ShareAlike license. Give it a read and then ask your questions in the chat. We can research this topic together.
Revision as of 17:00, 10 May 2021

In mathematics, and in particular linear algebra, the Moore–Penrose inverse $A^+$ of a matrix $A$ is the most widely known generalization of the inverse matrix. It was independently described by E. H. Moore in 1920, Arne Bjerhammar in 1951, and Roger Penrose in 1955. Earlier, Erik Ivar Fredholm had introduced the concept of a pseudoinverse of integral operators in 1903. When referring to a matrix, the term pseudoinverse, without further specification, is often used to indicate the Moore–Penrose inverse. The term generalized inverse is sometimes used as a synonym for pseudoinverse.

A common use of the pseudoinverse is to compute a "best fit" (least squares) solution to a system of linear equations that lacks a solution (see below under § Applications). Another use is to find the minimum (Euclidean) norm solution to a system of linear equations with multiple solutions. The pseudoinverse facilitates the statement and proof of results in linear algebra.

The pseudoinverse is defined and unique for all matrices whose entries are real or complex numbers. It can be computed using the singular value decomposition.

Notation

In the following discussion, these conventions are adopted.

  • $\mathbb{k}$ will denote one of the fields of real or complex numbers, denoted $\mathbb{R}$ and $\mathbb{C}$, respectively. The vector space of $m \times n$ matrices over $\mathbb{k}$ is denoted by $\mathbb{k}^{m \times n}$.
  • For $A \in \mathbb{k}^{m \times n}$, $A^{\mathsf T}$ and $A^*$ denote the transpose and Hermitian transpose (also called conjugate transpose), respectively. If $\mathbb{k} = \mathbb{R}$, then $A^* = A^{\mathsf T}$.
  • For $A \in \mathbb{k}^{m \times n}$, $\operatorname{ran}(A)$ (standing for "range") denotes the column space (image) of $A$ (the space spanned by the column vectors of $A$) and $\ker(A)$ denotes the kernel (null space) of $A$.
  • Finally, for any positive integer $n$, $I_n \in \mathbb{k}^{n \times n}$ denotes the $n \times n$ identity matrix.

Definition

For $A \in \mathbb{k}^{m \times n}$, a pseudoinverse of $A$ is defined as a matrix $A^+ \in \mathbb{k}^{n \times m}$ satisfying all of the following four criteria, known as the Moore–Penrose conditions:

  1. $A A^+ A = A$
     ($A A^+$ need not be the general identity matrix, but it maps all column vectors of $A$ to themselves);
  2. $A^+ A A^+ = A^+$
     ($A^+$ acts like a weak inverse);
  3. $\left(A A^+\right)^* = A A^+$
     ($A A^+$ is Hermitian);
  4. $\left(A^+ A\right)^* = A^+ A$
     ($A^+ A$ is also Hermitian).

$A^+$ exists for any matrix $A$, but, when the latter has full rank (that is, the rank of $A$ is $\min\{m, n\}$), then $A^+$ can be expressed as a simple algebraic formula.

In particular, when $A$ has linearly independent columns (and thus matrix $A^* A$ is invertible), $A^+$ can be computed as
$$A^+ = \left(A^* A\right)^{-1} A^*.$$

This particular pseudoinverse constitutes a left inverse, since, in this case, $A^+ A = I$.

When $A$ has linearly independent rows (matrix $A A^*$ is invertible), $A^+$ can be computed as
$$A^+ = A^* \left(A A^*\right)^{-1}.$$

This is a right inverse, as $A A^+ = I$.
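
To make these formulas concrete, here is a minimal NumPy sketch (the example matrix is arbitrary) that builds $A^+$ from the full-column-rank formula and verifies the four Moore–Penrose conditions together with the left-inverse property:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))        # tall matrix; almost surely full column rank

# Full-column-rank formula: A+ = (A* A)^{-1} A*  (real entries, so * is just T)
A_plus = np.linalg.inv(A.T @ A) @ A.T

assert np.allclose(A @ A_plus @ A, A)              # 1. A A+ A = A
assert np.allclose(A_plus @ A @ A_plus, A_plus)    # 2. A+ A A+ = A+
assert np.allclose((A @ A_plus).T, A @ A_plus)     # 3. A A+ is Hermitian
assert np.allclose((A_plus @ A).T, A_plus @ A)     # 4. A+ A is Hermitian
assert np.allclose(A_plus @ A, np.eye(3))          # left inverse: A+ A = I_n
```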

Properties

Proofs for some of these facts may be found on a separate page, Proofs involving the Moore–Penrose inverse.

Existence and uniqueness

The pseudoinverse exists and is unique: for any matrix $A$, there is precisely one matrix $A^+$ that satisfies the four properties of the definition.

A matrix satisfying the first condition of the definition is known as a generalized inverse. If the matrix also satisfies the second condition, it is called a generalized reflexive inverse. Generalized inverses always exist but are not in general unique. Uniqueness is a consequence of the last two conditions.

Basic properties

  • If $A$ has real entries, then so does $A^+$.
  • If $A$ is invertible, its pseudoinverse is its inverse. That is, $A^+ = A^{-1}$.
  • The pseudoinverse of a zero matrix is its transpose.
  • The pseudoinverse of the pseudoinverse is the original matrix: $\left(A^+\right)^+ = A$.
  • Pseudoinversion commutes with transposition, complex conjugation, and taking the conjugate transpose:
    $\left(A^{\mathsf T}\right)^+ = \left(A^+\right)^{\mathsf T}$, $\left(\overline{A}\right)^+ = \overline{A^+}$, $\left(A^*\right)^+ = \left(A^+\right)^*$.
  • The pseudoinverse of a scalar multiple of $A$ is the reciprocal multiple of $A^+$:
    $\left(\alpha A\right)^+ = \alpha^{-1} A^+$ for $\alpha \neq 0$.

Identities

The following identities can be used to cancel certain subexpressions or expand expressions involving pseudoinverses. Proofs for these properties can be found in the proofs subpage.
$$A^+ = A^+ A^{+*} A^* = A^* A^{+*} A^+,$$
$$A = A^{+*} A^* A = A A^* A^{+*},$$
$$A^* = A^* A A^+ = A^+ A A^*.$$

Reduction to Hermitian case

The computation of the pseudoinverse is reducible to its construction in the Hermitian case. This is possible through the equivalences:
$$A^+ = \left(A^* A\right)^+ A^*, \qquad A^+ = A^* \left(A A^*\right)^+,$$

as $A^* A$ and $A A^*$ are Hermitian.

Products

Suppose $A \in \mathbb{k}^{m \times n}$ and $B \in \mathbb{k}^{n \times p}$. Then the following are equivalent:

  1. $(A B)^+ = B^+ A^+$;
  2. $A^+ A B B^* A^* = B B^* A^*$ and $B B^+ A^* A B = A^* A B$;
  3. $\left(A^+ A B B^*\right)^* = A^+ A B B^*$ and $\left(A^* A B B^+\right)^* = A^* A B B^+$;
  4. $A^+ A B B^* A^* A B B^+ = B B^* A^* A$;
  5. $A^+ A B = B (A B)^+ A B$ and $B B^+ A^* = A^* A B (A B)^+$.

The following are sufficient conditions for $(A B)^+ = B^+ A^+$:

  1. $A$ has orthonormal columns (then $A^* A = A^+ A = I_n$),   or
  2. $B$ has orthonormal rows (then $B B^* = B B^+ = I_n$),   or
  3. $A$ has linearly independent columns (then $A^+ A = I$) and $B$ has linearly independent rows (then $B B^+ = I$),   or
  4. $B = A^*$, or
  5. $B = A^+$.

The following is a necessary condition for $(A B)^+ = B^+ A^+$:

  1. $\left(A^+ A\right)\left(B B^+\right) = \left(B B^+\right)\left(A^+ A\right)$

The last sufficient condition yields the equalities
$$\left(A A^*\right)^+ = A^{+*} A^+, \qquad \left(A^* A\right)^+ = A^+ A^{+*}.$$

NB: The equality $(A B)^+ = B^+ A^+$ does not hold in general. See the counterexample:

$$\left( \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 \\ 1 & 1 \end{pmatrix} \right)^+ = \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}^+ = \begin{pmatrix} \tfrac12 & 0 \\ \tfrac12 & 0 \end{pmatrix} \quad \neq \quad \begin{pmatrix} \tfrac14 & 0 \\ \tfrac14 & 0 \end{pmatrix} = \begin{pmatrix} 0 & \tfrac12 \\ 0 & \tfrac12 \end{pmatrix} \begin{pmatrix} \tfrac12 & 0 \\ \tfrac12 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 1 & 1 \end{pmatrix}^+ \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}^+$$
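
The counterexample is easy to reproduce numerically; a small sketch using NumPy's pinv:

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 1.0]])

lhs = np.linalg.pinv(A @ B)                    # (AB)+ = [[0.5, 0], [0.5, 0]]
rhs = np.linalg.pinv(B) @ np.linalg.pinv(A)    # B+ A+ = [[0.25, 0], [0.25, 0]]

print(np.allclose(lhs, rhs))                   # False: (AB)+ != B+ A+ here
```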

Projectors

$P = A A^+$ and $Q = A^+ A$ are orthogonal projection operators, that is, they are Hermitian ($P = P^*$, $Q = Q^*$) and idempotent ($P^2 = P$ and $Q^2 = Q$). The following hold (see also the numerical sketch after these lists):

  • $P A = A Q = A$ and $A^+ P = Q A^+ = A^+$;
  • $P$ is the orthogonal projector onto the range of $A$ (which equals the orthogonal complement of the kernel of $A^*$);
  • $Q$ is the orthogonal projector onto the range of $A^*$ (which equals the orthogonal complement of the kernel of $A$);
  • $I - Q = I - A^+ A$ is the orthogonal projector onto the kernel of $A$;
  • $I - P = I - A A^+$ is the orthogonal projector onto the kernel of $A^*$.

The last two properties imply the following identities:

  • $A \left(I - A^+ A\right) = \left(I - A A^+\right) A = 0$
  • $A^* \left(I - A A^+\right) = \left(I - A^+ A\right) A^* = 0$
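
A quick numerical illustration of these projector properties (the example matrix is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 2))
A_plus = np.linalg.pinv(A)

P, Q = A @ A_plus, A_plus @ A
assert np.allclose(P, P.T) and np.allclose(P @ P, P)    # P Hermitian and idempotent
assert np.allclose(Q, Q.T) and np.allclose(Q @ Q, Q)    # Q Hermitian and idempotent
assert np.allclose(P @ A, A) and np.allclose(A @ Q, A)  # P A = A Q = A
assert np.allclose(A.T @ (np.eye(4) - P), 0)            # A* (I - A A+) = 0
```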

Another property is the following: if $A \in \mathbb{k}^{n \times n}$ is Hermitian and idempotent (true if and only if it represents an orthogonal projection), then, for any matrix $B \in \mathbb{k}^{m \times n}$, the following equation holds:
$$A (B A)^+ = (B A)^+$$

This can be proven by defining matrices $C = B A$ and $D = A (B A)^+$, and checking that $D$ is indeed a pseudoinverse for $C$ by verifying that the defining properties of the pseudoinverse hold, when $A$ is Hermitian and idempotent.

From the last property it follows that, if $A \in \mathbb{k}^{n \times n}$ is Hermitian and idempotent, for any matrix $B \in \mathbb{k}^{n \times m}$
$$(A B)^+ A = (A B)^+$$

Finally, if $A$ is an orthogonal projection matrix, then its pseudoinverse trivially coincides with the matrix itself, that is, $A^+ = A$.

Geometric construction

If we view the matrix as a linear map $A : \mathbb{k}^n \to \mathbb{k}^m$ over the field $\mathbb{k}$, then $A^+ : \mathbb{k}^m \to \mathbb{k}^n$ can be decomposed as follows. We write $\oplus$ for the direct sum, $\perp$ for the orthogonal complement, $\ker$ for the kernel of a map, and $\operatorname{ran}$ for the image of a map. Notice that $\mathbb{k}^n = \left(\ker A\right)^\perp \oplus \ker A$ and $\mathbb{k}^m = \operatorname{ran} A \oplus \left(\operatorname{ran} A\right)^\perp$. The restriction $A : \left(\ker A\right)^\perp \to \operatorname{ran} A$ is then an isomorphism. This implies that $A^+$ on $\operatorname{ran} A$ is the inverse of this isomorphism, and is zero on $\left(\operatorname{ran} A\right)^\perp$.

In other words: To find $A^+ b$ for given $b$ in $\mathbb{k}^m$, first project $b$ orthogonally onto the range of $A$, finding a point $p(b)$ in the range. Then form $A^{-1}(\{p(b)\})$, that is, find those vectors in $\mathbb{k}^n$ that $A$ sends to $p(b)$. This will be an affine subspace of $\mathbb{k}^n$ parallel to the kernel of $A$. The element of this subspace that has the smallest length (that is, is closest to the origin) is the answer $A^+ b$ we are looking for. It can be found by taking an arbitrary member of $A^{-1}(\{p(b)\})$ and projecting it orthogonally onto the orthogonal complement of the kernel of $A$.

This description is closely related to the Minimum norm solution to a linear system.

Subspaces

$$\ker\left(A^+\right) = \ker\left(A^*\right), \qquad \operatorname{ran}\left(A^+\right) = \operatorname{ran}\left(A^*\right).$$

Limit relations

The pseudoinverse can be obtained via the limits
$$A^+ = \lim_{\delta \searrow 0} \left(A^* A + \delta I\right)^{-1} A^* = \lim_{\delta \searrow 0} A^* \left(A A^* + \delta I\right)^{-1}$$
(see Tikhonov regularization). These limits exist even if $\left(A A^*\right)^{-1}$ or $\left(A^* A\right)^{-1}$ do not exist.
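
A short sketch of the limit relation, using a rank-deficient matrix for which $\left(A^* A\right)^{-1}$ does not exist:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 0.0]])              # A*A is singular
A_plus = np.linalg.pinv(A)

for delta in (1e-2, 1e-4, 1e-6):
    approx = np.linalg.inv(A.T @ A + delta * np.eye(2)) @ A.T
    print(delta, np.linalg.norm(approx - A_plus))   # error shrinks as delta -> 0
```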

Continuity

In contrast to ordinary matrix inversion, the process of taking pseudoinverses is not continuous: if the sequence $\left(A_n\right)$ converges to the matrix $A$ (in the maximum norm or Frobenius norm, say), then $\left(A_n\right)^+$ need not converge to $A^+$. However, if all the matrices $A_n$ have the same rank as $A$, then $\left(A_n\right)^+$ will converge to $A^+$.

Derivative

For a real-valued matrix $A(x)$ that has constant rank at a point $x$, the derivative of its pseudoinverse may be calculated in terms of the derivative of the original matrix:
$$\frac{\mathrm d}{\mathrm dx} A^+(x) = -A^+ \left(\frac{\mathrm d A}{\mathrm dx}\right) A^+ \;+\; A^+ A^{+\mathsf T} \left(\frac{\mathrm d A^{\mathsf T}}{\mathrm dx}\right) \left(I - A A^+\right) \;+\; \left(I - A^+ A\right) \left(\frac{\mathrm d A^{\mathsf T}}{\mathrm dx}\right) A^{+\mathsf T} A^+$$
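
A finite-difference sanity check of this formula; the parametrized matrix below is a hypothetical example with constant rank 2 for every $x$:

```python
import numpy as np

def A_of(x):
    return np.array([[1.0, x], [0.0, 1.0], [x, 0.0]])   # rank 2 for all x

x, h = 0.3, 1e-6
A = A_of(x)
dA = np.array([[0.0, 1.0], [0.0, 0.0], [1.0, 0.0]])     # dA/dx
Ap = np.linalg.pinv(A)
I3, I2 = np.eye(3), np.eye(2)

formula = (-Ap @ dA @ Ap
           + Ap @ Ap.T @ dA.T @ (I3 - A @ Ap)
           + (I2 - Ap @ A) @ dA.T @ Ap.T @ Ap)
numeric = (np.linalg.pinv(A_of(x + h)) - np.linalg.pinv(A_of(x - h))) / (2 * h)
assert np.allclose(formula, numeric, atol=1e-6)
```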

Examples

Since for invertible matrices the pseudoinverse equals the usual inverse, only examples of non-invertible matrices are considered below.

  • For $A = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$, the pseudoinverse is $A^+ = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$. (Generally, the pseudoinverse of a zero matrix is its transpose.) The uniqueness of this pseudoinverse can be seen from the requirement $A^+ = A^+ A A^+$, since multiplication by a zero matrix would always produce a zero matrix.
  • For $A = \begin{pmatrix} 1 & 0 \\ 1 & 0 \end{pmatrix}$, the pseudoinverse is $A^+ = \begin{pmatrix} \tfrac12 & \tfrac12 \\ 0 & 0 \end{pmatrix}$.
    Indeed, $A A^+ = \begin{pmatrix} \tfrac12 & \tfrac12 \\ \tfrac12 & \tfrac12 \end{pmatrix}$, and thus $A A^+ A = \begin{pmatrix} 1 & 0 \\ 1 & 0 \end{pmatrix} = A$.
    Similarly, $A^+ A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$, and thus $A^+ A A^+ = \begin{pmatrix} \tfrac12 & \tfrac12 \\ 0 & 0 \end{pmatrix} = A^+$.
  • For $A = \begin{pmatrix} 1 & 0 \\ -1 & 0 \end{pmatrix}$, $A^+ = \begin{pmatrix} \tfrac12 & -\tfrac12 \\ 0 & 0 \end{pmatrix}$.
  • For $A = \begin{pmatrix} 1 & 0 \\ 2 & 0 \end{pmatrix}$, $A^+ = \begin{pmatrix} \tfrac15 & \tfrac25 \\ 0 & 0 \end{pmatrix}$. (The denominator is $5 = 1^2 + 2^2$.)
  • For $A = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$, $A^+ = \begin{pmatrix} \tfrac14 & \tfrac14 \\ \tfrac14 & \tfrac14 \end{pmatrix}$.
  • For $A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 1 \end{pmatrix}$, the pseudoinverse is $A^+ = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \tfrac12 & \tfrac12 \end{pmatrix}$. For this matrix, the left inverse exists and thus equals $A^+$; indeed, $A^+ A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$.

Special cases

Scalars

It is also possible to define a pseudoinverse for scalars and vectors. This amounts to treating these as matrices. The pseudoinverse of a scalar $x$ is zero if $x$ is zero and the reciprocal of $x$ otherwise:
$$x^+ = \begin{cases} 0, & \text{if } x = 0; \\ x^{-1}, & \text{otherwise}. \end{cases}$$

Vectors

The pseudoinverse of the null (all zero) vector is the transposed null vector. The pseudoinverse of a non-null vector is the conjugate transposed vector divided by its squared magnitude:
$$\vec{x}^+ = \begin{cases} \vec{0}^{\mathsf T}, & \text{if } \vec{x} = \vec{0}; \\ \dfrac{\vec{x}^*}{\vec{x}^* \vec{x}}, & \text{otherwise}. \end{cases}$$
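
Both special cases can be checked by treating scalars and vectors as $1 \times 1$ and $m \times 1$ matrices; a small NumPy sketch:

```python
import numpy as np

x = np.array([[3.0], [4.0]])                    # a column vector
x_plus = x.conj().T / (x.conj().T @ x)          # x* / (x* x) = [[0.12, 0.16]]
assert np.allclose(x_plus, np.linalg.pinv(x))

zero = np.zeros((3, 1))                         # null vector -> transposed null vector
assert np.array_equal(np.linalg.pinv(zero), zero.T)
```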

Linearly independent columns

If the columns of $A$ are linearly independent (so that $m \ge n$), then $A^* A$ is invertible. In this case, an explicit formula is:
$$A^+ = \left(A^* A\right)^{-1} A^*.$$

It follows that $A^+$ is then a left inverse of $A$: $A^+ A = I_n$.

Linearly independent rows

If the rows of $A$ are linearly independent (so that $m \le n$), then $A A^*$ is invertible. In this case, an explicit formula is:
$$A^+ = A^* \left(A A^*\right)^{-1}.$$

It follows that $A^+$ is a right inverse of $A$: $A A^+ = I_m$.

Orthonormal columns or rows

This is a special case of either full column rank or full row rank (treated above). If $A$ has orthonormal columns ($A^* A = I_n$) or orthonormal rows ($A A^* = I_m$), then:
$$A^+ = A^*.$$

Normal matrices

If $A$ is a normal matrix, that is, it commutes with its conjugate transpose, then its pseudoinverse can be computed by diagonalizing it, mapping all nonzero eigenvalues to their inverses, and mapping zero eigenvalues to zero. A corollary is that $A$ commuting with its transpose implies that it commutes with its pseudoinverse.

Orthogonal projection matrices

This is a special case of a normal matrix with eigenvalues 0 and 1. If $A$ is an orthogonal projection matrix, that is, $A = A^*$ and $A^2 = A$, then the pseudoinverse trivially coincides with the matrix itself:
$$A^+ = A.$$

Circulant matrices

For a circulant matrix $C$, the singular value decomposition is given by the Fourier transform, that is, the singular values are the Fourier coefficients. Let $\mathcal{F}$ be the discrete Fourier transform (DFT) matrix; then
$$C = \mathcal{F} \cdot \Sigma \cdot \mathcal{F}^*, \qquad C^+ = \mathcal{F} \cdot \Sigma^+ \cdot \mathcal{F}^*.$$
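
A sketch exploiting this structure with the FFT: the eigenvalues of a circulant matrix are the DFT of its first column, so $C^+$ is again circulant, with first column $\mathcal{F}^{-1} \lambda^+$. The example first column is a hypothetical choice for which one eigenvalue vanishes:

```python
import numpy as np
from scipy.linalg import circulant

c = np.array([2.0, 1.0, 1.0, 2.0])       # first column; np.fft.fft(c) has a zero entry
C = circulant(c)

lam = np.fft.fft(c)                      # eigenvalues of C
lam_plus = np.zeros_like(lam)
nonzero = np.abs(lam) > 1e-12 * np.abs(lam).max()
lam_plus[nonzero] = 1.0 / lam[nonzero]   # reciprocate nonzero eigenvalues, keep zeros

C_plus = circulant(np.fft.ifft(lam_plus).real)
assert np.allclose(C_plus, np.linalg.pinv(C))
```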

Construction

Rank decomposition

Let $r \le \min(m, n)$ denote the rank of $A \in \mathbb{k}^{m \times n}$. Then $A$ can be (rank) decomposed as $A = B C$, where $B \in \mathbb{k}^{m \times r}$ and $C \in \mathbb{k}^{r \times n}$ are of rank $r$. Then
$$A^+ = C^+ B^+ = C^* \left(C C^*\right)^{-1} \left(B^* B\right)^{-1} B^*.$$
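
A tiny sketch of the rank-decomposition formula for a rank-1 example (the factors $B$ and $C$ are read off by inspection here; in general they can be obtained, for instance, from the pivot columns of a row reduction):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0],
              [0.0, 0.0]])                       # rank r = 1
B = np.array([[1.0], [1.0], [0.0]])              # m x r
C = np.array([[1.0, 1.0]])                       # r x n; indeed A = B C

A_plus = C.T @ np.linalg.inv(C @ C.T) @ np.linalg.inv(B.T @ B) @ B.T
assert np.allclose(A_plus, np.linalg.pinv(A))    # 0.25 * [[1, 1, 0], [1, 1, 0]]
```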

The QR method

For $\mathbb{k} \in \{\mathbb{R}, \mathbb{C}\}$, computing the product $A A^*$ or $A^* A$ and their inverses explicitly is often a source of numerical rounding errors and computational cost in practice. An alternative approach using the QR decomposition of $A$ may be used instead.

Consider the case when $A$ is of full column rank, so that $A^+ = \left(A^* A\right)^{-1} A^*$. Then the Cholesky decomposition $A^* A = R^* R$, where $R$ is an upper triangular matrix, may be used. Multiplication by the inverse is then done easily by solving a system with multiple right-hand sides,
$$A^+ = \left(A^* A\right)^{-1} A^* \quad \Leftrightarrow \quad \left(A^* A\right) A^+ = A^* \quad \Leftrightarrow \quad R^* R A^+ = A^*,$$

which may be solved by forward substitution followed by back substitution.

The Cholesky decomposition may be computed without forming $A^* A$ explicitly, by alternatively using the QR decomposition $A = Q R$, where $Q$ has orthonormal columns, $Q^* Q = I$, and $R$ is upper triangular. Then
$$A^* A = (Q R)^* (Q R) = R^* Q^* Q R = R^* R,$$

so $R$ is the Cholesky factor of $A^* A$.

The case of full row rank is treated similarly by using the formula $A^+ = A^* \left(A A^*\right)^{-1}$ and a similar argument, swapping the roles of $A$ and $A^*$.
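
For the full-column-rank case, the whole recipe fits in a few lines of NumPy/SciPy (a sketch; solve_triangular performs the forward and back substitutions):

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 3))               # full column rank (almost surely)

Q, R = np.linalg.qr(A)                        # reduced QR; R is the Cholesky factor of A*A
Y = solve_triangular(R.T, A.T, lower=True)    # forward substitution:  R* Y  = A*
A_plus = solve_triangular(R, Y, lower=False)  # back substitution:     R A+ = Y

assert np.allclose(A_plus, np.linalg.pinv(A))
```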

Singular value decomposition (SVD)

A computationally simple and accurate way to compute the pseudoinverse is by using the singular value decomposition. If $A = U \Sigma V^*$ is the singular value decomposition of $A$, then $A^+ = V \Sigma^+ U^*$. For a rectangular diagonal matrix such as $\Sigma$, we get the pseudoinverse by taking the reciprocal of each non-zero element on the diagonal, leaving the zeros in place, and then transposing the matrix. In numerical computation, only elements larger than some small tolerance are taken to be nonzero, and the others are replaced by zeros. For example, in the MATLAB, GNU Octave, or NumPy function pinv, the tolerance is taken to be t = ε⋅max(m, n)⋅max(Σ), where ε is the machine epsilon.
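
A sketch of this procedure; pinv_svd is a hypothetical helper mirroring the tolerance rule quoted above:

```python
import numpy as np

def pinv_svd(A):
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    tol = np.finfo(A.dtype).eps * max(A.shape) * s.max()   # t = eps * max(m, n) * max(Sigma)
    s_plus = np.zeros_like(s)
    s_plus[s > tol] = 1.0 / s[s > tol]                     # reciprocate, leaving zeros in place
    return Vh.conj().T @ np.diag(s_plus) @ U.conj().T      # A+ = V Sigma+ U*

A = np.array([[1.0, 0.0],
              [1.0, 0.0]])
assert np.allclose(pinv_svd(A), np.linalg.pinv(A))         # [[0.5, 0.5], [0, 0]]
```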

The computational cost of this method is dominated by the cost of computing the SVD, which is several times higher than matrix–matrix multiplication, even if a state-of-the-art implementation (such as that of LAPACK) is used.

The above procedure shows why taking the pseudoinverse is not a continuous operation: if the original matrix $A$ has a singular value 0 (a diagonal entry of the matrix $\Sigma$ above), then modifying $A$ slightly may turn this zero into a tiny positive number, thereby affecting the pseudoinverse dramatically, as we now have to take the reciprocal of a tiny number.

Block matrices

Optimized approaches exist for calculating the pseudoinverse of block structured matrices.

The iterative method of Ben-Israel and Cohen

Another method for computing the pseudoinverse (cf. Drazin inverse) uses the recursion
$$A_{i+1} = 2 A_i - A_i A A_i,$$

which is sometimes referred to as the hyper-power sequence. This recursion produces a sequence converging quadratically to the pseudoinverse of $A$ if it is started with an appropriate $A_0$ satisfying $A_0 A = \left(A_0 A\right)^*$. The choice $A_0 = \alpha A^*$ (where $0 < \alpha < 2/\sigma_1^2(A)$, with $\sigma_1(A)$ denoting the largest singular value of $A$) has been argued not to be competitive with the method using the SVD mentioned above, because even for moderately ill-conditioned matrices it takes a long time before $A_i$ enters the region of quadratic convergence. However, if started with $A_0$ already close to the Moore–Penrose inverse and satisfying $A_0 A = \left(A_0 A\right)^*$, for example $A_0 := \left(A^* A + \delta I\right)^{-1} A^*$, convergence is fast (quadratic).
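
A sketch of the iteration with the $A_0 = \alpha A^*$ start (the fixed iteration count is ad hoc; in practice one would monitor a residual):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3))

sigma1 = np.linalg.norm(A, 2)          # largest singular value
X = (1.0 / sigma1**2) * A.T            # A_0 = alpha A*, with 0 < alpha < 2 / sigma1^2

for _ in range(60):
    X = 2 * X - X @ A @ X              # hyper-power step

assert np.allclose(X, np.linalg.pinv(A))
```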

Updating the pseudoinverse

For the cases where $A$ has full row or column rank, and the inverse of the correlation matrix ($A A^*$ for $A$ with full row rank or $A^* A$ for full column rank) is already known, the pseudoinverse for matrices related to $A$ can be computed by applying the Sherman–Morrison–Woodbury formula to update the inverse of the correlation matrix, which may need less work. In particular, if the related matrix differs from the original one by only a changed, added or deleted row or column, incremental algorithms exist that exploit the relationship.

Similarly, it is possible to update the Cholesky factor when a row or column is added, without creating the inverse of the correlation matrix explicitly. However, updating the pseudoinverse in the general rank-deficient case is much more complicated.

Software libraries

High-quality implementations of SVD, QR, and back substitution are available in standard libraries, such as LAPACK. Writing one's own implementation of SVD is a major programming project that requires significant numerical expertise. In special circumstances, such as parallel computing or embedded computing, however, alternative implementations by QR or even the use of an explicit inverse might be preferable, and custom implementations may be unavoidable.

The Python package NumPy provides a pseudoinverse calculation through its functions matrix.I and linalg.pinv; its pinv uses the SVD-based algorithm. SciPy adds a function scipy.linalg.pinv that uses a least-squares solver.

The MASS package for R provides a calculation of the Moore–Penrose inverse through the ginv function. The ginv function calculates a pseudoinverse using the singular value decomposition provided by the svd function in the base R package. An alternative is to employ the pinv function available in the pracma package.

The Octave programming language provides a pseudoinverse through the standard package function pinv and the pseudo_inverse() method.

In Julia, the LinearAlgebra package of the standard library provides the Moore–Penrose inverse through the function pinv(), implemented via the singular value decomposition.

Applications

Linear least-squares

See also: Linear least squares (mathematics)

The pseudoinverse provides a least squares solution to a system of linear equations. For $A \in \mathbb{k}^{m \times n}$, given a system of linear equations
$$A x = b,$$

in general, a vector $x$ that solves the system may not exist, or if one does exist, it may not be unique. The pseudoinverse solves the "least-squares" problem as follows:

  • For all $x \in \mathbb{k}^n$, we have $\|A x - b\|_2 \ge \|A z - b\|_2$, where $z = A^+ b$ and $\|\cdot\|_2$ denotes the Euclidean norm. This weak inequality holds with equality if and only if $x = A^+ b + \left(I - A^+ A\right) w$ for any vector $w$; this provides an infinitude of minimizing solutions unless $A$ has full column rank, in which case $I - A^+ A$ is a zero matrix. The solution with minimum Euclidean norm is $z$ (see the sketch below).

This result is easily extended to systems with multiple right-hand sides, when the Euclidean norm is replaced by the Frobenius norm. Let $B \in \mathbb{k}^{m \times p}$.

  • For all $X \in \mathbb{k}^{n \times p}$, we have $\|A X - B\|_{\mathrm F} \ge \|A Z - B\|_{\mathrm F}$, where $Z = A^+ B$ and $\|\cdot\|_{\mathrm F}$ denotes the Frobenius norm.
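
A minimal least-squares sketch for an inconsistent system, checked against NumPy's lstsq:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])
b = np.array([0.0, 2.0, 1.0])          # inconsistent: rows 1 and 2 disagree

z = np.linalg.pinv(A) @ b              # [1.0, 1.0], the least-squares solution
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

assert np.allclose(z, x_lstsq)
print(np.linalg.norm(A @ z - b))       # minimal residual, here sqrt(2)
```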

Obtaining all solutions of a linear system

If the linear system

$$A x = b$$

has any solutions, they are all given by

$$x = A^+ b + \left[I - A^+ A\right] w$$

for arbitrary vector $w$. Solution(s) exist if and only if $A A^+ b = b$. If the latter holds, then the solution is unique if and only if $A$ has full column rank, in which case $\left[I - A^+ A\right]$ is a zero matrix. If solutions exist but $A$ does not have full column rank, then we have an indeterminate system, all of whose infinitude of solutions are given by this last equation.
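
A sketch of this parametrization of the full solution set, for an under-determined but consistent system:

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
b = np.array([2.0, 3.0])

A_plus = np.linalg.pinv(A)
assert np.allclose(A @ A_plus @ b, b)          # solvability test: A A+ b = b

N = np.eye(3) - A_plus @ A                     # projector onto ker(A)
for w in (np.zeros(3), np.array([5.0, -1.0, 7.0])):
    x = A_plus @ b + N @ w                     # x = A+ b + [I - A+ A] w
    assert np.allclose(A @ x, b)               # every such x solves the system
```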

Minimum norm solution to a linear system

For linear systems $A x = b$ with non-unique solutions (such as under-determined systems), the pseudoinverse may be used to construct the solution of minimum Euclidean norm $\|x\|_2$ among all solutions.

  • If $A x = b$ is satisfiable, the vector $z = A^+ b$ is a solution, and satisfies $\|z\|_2 \le \|x\|_2$ for all solutions (see the sketch below).

This result is easily extended to systems with multiple right-hand sides, when the Euclidean norm is replaced by the Frobenius norm. Let $B \in \mathbb{k}^{m \times p}$.

  • If $A X = B$ is satisfiable, the matrix $Z = A^+ B$ is a solution, and satisfies $\|Z\|_{\mathrm F} \le \|X\|_{\mathrm F}$ for all solutions.
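
A one-equation sketch of the minimum-norm property:

```python
import numpy as np

A = np.array([[1.0, 1.0]])             # one equation, two unknowns
b = np.array([2.0])

z = np.linalg.pinv(A) @ b              # [1, 1]: the minimum-norm solution, ||z|| = sqrt(2)
x = np.array([2.0, 0.0])               # another solution, with larger norm 2

assert np.allclose(A @ z, b) and np.allclose(A @ x, b)
assert np.linalg.norm(z) < np.linalg.norm(x)
```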

Condition number

Using the pseudoinverse and a matrix norm, one can define a condition number for any matrix:
$$\operatorname{cond}(A) = \|A\| \left\|A^+\right\|.$$

A large condition number implies that the problem of finding least-squares solutions to the corresponding system of linear equations is ill-conditioned in the sense that small errors in the entries of $A$ can lead to huge errors in the entries of the solution.
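
For a full-rank matrix and the spectral norm, this definition agrees with the usual ratio of extreme singular values; a quick check:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((5, 3))

cond = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.pinv(A), 2)
assert np.isclose(cond, np.linalg.cond(A, 2))   # sigma_max / sigma_min
```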

Generalizations

Besides matrices over the real and complex numbers, the conditions hold for matrices over biquaternions, also called "complex quaternions".

In order to solve more general least-squares problems, one can define Moore–Penrose inverses for all continuous linear operators $A : H_1 \to H_2$ between two Hilbert spaces $H_1$ and $H_2$, using the same four conditions as in our definition above. It turns out that not every continuous linear operator has a continuous linear pseudoinverse in this sense. Those that do are precisely the ones whose range is closed in $H_2$.

A notion of pseudoinverse exists for matrices over an arbitrary field equipped with an arbitrary involutive automorphism. In this more general setting, a given matrix does not always have a pseudoinverse. The necessary and sufficient condition for a pseudoinverse to exist is that $\operatorname{rank}(A) = \operatorname{rank}\left(A^* A\right) = \operatorname{rank}\left(A A^*\right)$, where $A^*$ denotes the result of applying the involution operation to the transpose of $A$. When it does exist, it is unique. Example: Consider the field of complex numbers equipped with the identity involution (as opposed to the involution considered elsewhere in the article); do there exist matrices that fail to have pseudoinverses in this sense? Consider the matrix $A = \begin{bmatrix} 1 & i \end{bmatrix}^{\mathsf T}$. Observe that $\operatorname{rank}\left(A A^{\mathsf T}\right) = 1$ while $\operatorname{rank}\left(A^{\mathsf T} A\right) = 0$. So this matrix does not have a pseudoinverse in this sense.

In abstract algebra, a Moore–Penrose inverse may be defined on a *-regular semigroup. This abstract definition coincides with the one in linear algebra.


Notes

  1. Ben-Israel & Greville 2003, p. 7.
  2. Campbell & Meyer, Jr. 1991, p. 10.
  3. Nakamura 1991, p. 42.
  4. Rao & Mitra 1971, p. 50–51.
  5. Moore, E. H. (1920). "On the reciprocal of the general algebraic matrix". Bulletin of the American Mathematical Society. 26 (9): 394–95. doi:10.1090/S0002-9904-1920-03322-7.
  6. Bjerhammar, Arne (1951). "Application of calculus of matrices to method of least squares; with special references to geodetic calculations". Trans. Roy. Inst. Tech. Stockholm. 49.
  7. ^ Penrose, Roger (1955). "A generalized inverse for matrices". Proceedings of the Cambridge Philosophical Society. 51 (3): 406–13. Bibcode:1955PCPS...51..406P. doi:10.1017/S0305004100030401.
  8. ^ Golub, Gene H.; Charles F. Van Loan (1996). Matrix computations (3rd ed.). Baltimore: Johns Hopkins. pp. 257–258. ISBN 978-0-8018-5414-9.
  9. ^ Stoer, Josef; Bulirsch, Roland (2002). Introduction to Numerical Analysis (3rd ed.). Berlin, New York: Springer-Verlag. ISBN 978-0-387-95452-3..
  10. Greville, T. N. E. (1966-10-01). "Note on the Generalized Inverse of a Matrix Product". SIAM Review. 8 (4): 518–521. doi:10.1137/1008107. ISSN 0036-1445.
  11. Maciejewski, Anthony A.; Klein, Charles A. (1985). "Obstacle Avoidance for Kinematically Redundant Manipulators in Dynamically Varying Environments". International Journal of Robotics Research. 4 (3): 109–117. doi:10.1177/027836498500400308. hdl:10217/536. S2CID 17660144.
  12. Rakočević, Vladimir (1997). "On continuity of the Moore–Penrose and Drazin inverses" (PDF). Matematički Vesnik. 49: 163–72.
  13. Golub, G. H.; Pereyra, V. (April 1973). "The Differentiation of Pseudo-Inverses and Nonlinear Least Squares Problems Whose Variables Separate". SIAM Journal on Numerical Analysis. 10 (2): 413–32. Bibcode:1973SJNA...10..413G. doi:10.1137/0710036. JSTOR 2156365.
  14. ^ Ben-Israel & Greville 2003.
  15. Stallings, W. T.; Boullion, T. L. (1972). "The Pseudoinverse of an r-Circulant Matrix". Proceedings of the American Mathematical Society. 34 (2): 385–88. doi:10.2307/2038377. JSTOR 2038377.
  16. Linear Systems & Pseudo-Inverse
  17. Ben-Israel, Adi; Cohen, Dan (1966). "On Iterative Computation of Generalized Inverses and Associated Projections". SIAM Journal on Numerical Analysis. 3 (3): 410–19. Bibcode:1966SJNA....3..410B. doi:10.1137/0703035. JSTOR 2949637.
  18. Söderström, Torsten; Stewart, G. W. (1974). "On the Numerical Properties of an Iterative Method for Computing the Moore–Penrose Generalized Inverse". SIAM Journal on Numerical Analysis. 11 (1): 61–74. Bibcode:1974SJNA...11...61S. doi:10.1137/0711008. JSTOR 2156431.
  19. Gramß, Tino (1992). Worterkennung mit einem künstlichen neuronalen Netzwerk (PhD dissertation). Georg-August-Universität zu Göttingen. OCLC 841706164.
  20. Emtiyaz, Mohammad (February 27, 2008). "Updating Inverse of a Matrix When a Column is Added/Removed".
  21. Meyer, Jr., Carl D. (1973). "Generalized inverses and ranks of block matrices". SIAM J. Appl. Math. 25 (4): 597–602. doi:10.1137/0125057.
  22. Meyer, Jr., Carl D. (1973). "Generalized inversion of modified matrices". SIAM J. Appl. Math. 24 (3): 315–23. doi:10.1137/0124033.
  23. "R: Generalized Inverse of a Matrix".
  24. "LinearAlgebra.pinv".
  25. Penrose, Roger (1956). "On best approximate solution of linear matrix equations". Proceedings of the Cambridge Philosophical Society. 52 (1): 17–19. Bibcode:1956PCPS...52...17P. doi:10.1017/S0305004100030929.
  26. ^ Planitz, M. (October 1979). "Inconsistent systems of linear equations". Mathematical Gazette. 63 (425): 181–85. doi:10.2307/3617890. JSTOR 3617890.
  27. ^ James, M. (June 1978). "The generalised inverse". Mathematical Gazette. 62 (420): 109–14. doi:10.1017/S0025557200086460.
  28. ^ Hagen, Roland; Roch, Steffen; Silbermann, Bernd (2001). "Section 2.1.2". C*-algebras and Numerical Analysis. CRC Press.
  29. Tian, Yongge (2000). "Matrix Theory over the Complex Quaternion Algebra". p.8, Theorem 3.5. arXiv:math/0004005.
  30. Pearl, Martin H. (1968-10-01). "Generalized inverses of matrices with entries taken from an arbitrary field". Linear Algebra and Its Applications. 1 (4): 571–587. doi:10.1016/0024-3795(68)90028-1. ISSN 0024-3795.
