Schur product theorem

In mathematics, particularly in linear algebra, the Schur product theorem states that the Hadamard product of two positive definite matrices is also a positive definite matrix. The result is named after Issai Schur (Schur 1911, p. 14, Theorem VII); note that Schur signed as J. Schur in the Journal für die reine und angewandte Mathematik.
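
As a quick numerical illustration of the statement, the following NumPy sketch builds two random symmetric positive definite matrices and checks that their Hadamard product has only positive eigenvalues; the dimension and random seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
M = A @ A.T + np.eye(n)               # symmetric positive definite
B = rng.standard_normal((n, n))
N = B @ B.T + np.eye(n)

eigvals = np.linalg.eigvalsh(M * N)   # `*` is the elementwise (Hadamard) product
print(eigvals.min() > 0)              # True: all eigenvalues are positive
```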

Proof

Proof using the trace formula

For any matrices $M$ and $N$, the Hadamard product $M \circ N$ considered as a bilinear form acts on vectors $a, b$ as

$$ a^* (M \circ N) b = \operatorname{tr}\left(M^T \operatorname{diag}(a^*) \, N \operatorname{diag}(b)\right) $$

where $\operatorname{tr}$ is the matrix trace and $\operatorname{diag}(a)$ is the diagonal matrix having as diagonal entries the elements of $a$.
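
The identity holds for arbitrary matrices, which a short NumPy sketch can confirm (dimensions and seed are arbitrary choices):

```python
import numpy as np

# Check a* (M ∘ N) b = tr(M^T diag(a*) N diag(b)) on random complex data.
rng = np.random.default_rng(1)
n = 4
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
N = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

lhs = a.conj() @ (M * N) @ b
rhs = np.trace(M.T @ np.diag(a.conj()) @ N @ np.diag(b))
print(np.isclose(lhs, rhs))          # True
```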

Suppose $M$ and $N$ are positive definite, and so Hermitian. We can consider their square roots $M^{1/2}$ and $N^{1/2}$, which are also Hermitian, and write (using $M^T = \overline{M}$ for Hermitian $M$, and the cyclic property of the trace)

$$ \operatorname{tr}\left(M^T \operatorname{diag}(a^*) \, N \operatorname{diag}(b)\right) = \operatorname{tr}\left(\overline{M}^{1/2} \overline{M}^{1/2} \operatorname{diag}(a^*) \, N^{1/2} N^{1/2} \operatorname{diag}(b)\right) = \operatorname{tr}\left(\overline{M}^{1/2} \operatorname{diag}(a^*) \, N^{1/2} N^{1/2} \operatorname{diag}(b) \, \overline{M}^{1/2}\right) $$

Then, for $a = b$, this is written as $\operatorname{tr}(A^* A)$ for $A = N^{1/2} \operatorname{diag}(a) \overline{M}^{1/2}$, and thus is strictly positive for $A \neq 0$, which occurs if and only if $a \neq 0$. This shows that $M \circ N$ is a positive definite matrix.
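
A sketch of this factorization step, using a small helper `herm_sqrt` (an illustrative name, not a library function) for the Hermitian square root:

```python
import numpy as np

def herm_sqrt(H):
    # Hermitian square root of a Hermitian positive definite matrix,
    # computed from its eigendecomposition.
    w, V = np.linalg.eigh(H)
    return (V * np.sqrt(w)) @ V.conj().T

rng = np.random.default_rng(2)
n = 4
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
M = G @ G.conj().T + np.eye(n)       # Hermitian positive definite
K = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
N = K @ K.conj().T + np.eye(n)
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# A = N^{1/2} diag(a) conj(M)^{1/2}; then tr(A* A) equals a* (M ∘ N) a.
A = herm_sqrt(N) @ np.diag(a) @ herm_sqrt(M.conj())
quad = a.conj() @ (M * N) @ a
print(np.isclose(quad, np.trace(A.conj().T @ A)))   # True, and both are > 0
```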

Proof using Gaussian integration

Case of M = N

Let $X$ be an $n$-dimensional centered Gaussian random variable with covariance $\langle X_i X_j \rangle = M_{ij}$. Then the covariance matrix of $X_i^2$ and $X_j^2$ is

$$ \operatorname{Cov}(X_i^2, X_j^2) = \langle X_i^2 X_j^2 \rangle - \langle X_i^2 \rangle \langle X_j^2 \rangle $$

Using Wick's theorem to develop $\langle X_i^2 X_j^2 \rangle = 2 \langle X_i X_j \rangle^2 + \langle X_i^2 \rangle \langle X_j^2 \rangle$, we have

$$ \operatorname{Cov}(X_i^2, X_j^2) = 2 \langle X_i X_j \rangle^2 = 2 M_{ij}^2 $$

Since a covariance matrix is positive definite, this proves that the matrix with elements $M_{ij}^2$ is a positive definite matrix.
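
A Monte Carlo sketch of this identity; dimension, seed and sample count are arbitrary, and the match is only up to sampling error:

```python
import numpy as np

rng = np.random.default_rng(3)
n, samples = 3, 500_000
A = rng.standard_normal((n, n))
M = A @ A.T + np.eye(n)                 # covariance of the Gaussian X

X = rng.multivariate_normal(np.zeros(n), M, size=samples)
emp = np.cov(X**2, rowvar=False)        # empirical covariance of the X_i^2
print(np.max(np.abs(emp - 2 * M**2)))   # small: Cov(X_i^2, X_j^2) = 2 M_ij^2
```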

General case

Let $X$ and $Y$ be $n$-dimensional centered Gaussian random variables with covariances $\langle X_i X_j \rangle = M_{ij}$ and $\langle Y_i Y_j \rangle = N_{ij}$, independent from each other so that we have

$$ \langle X_i Y_j \rangle = 0 \quad \text{for any } i, j $$

Then the covariance matrix of $X_i Y_i$ and $X_j Y_j$ is

$$ \operatorname{Cov}(X_i Y_i, X_j Y_j) = \langle X_i Y_i X_j Y_j \rangle - \langle X_i Y_i \rangle \langle X_j Y_j \rangle $$

Using Wick's theorem to develop

$$ \langle X_i Y_i X_j Y_j \rangle = \langle X_i X_j \rangle \langle Y_i Y_j \rangle + \langle X_i Y_i \rangle \langle X_j Y_j \rangle + \langle X_i Y_j \rangle \langle X_j Y_i \rangle $$

and also using the independence of $X$ and $Y$, we have

$$ \operatorname{Cov}(X_i Y_i, X_j Y_j) = \langle X_i X_j \rangle \langle Y_i Y_j \rangle = M_{ij} N_{ij} $$

Since a covariance matrix is positive definite, this proves that the matrix with elements $M_{ij} N_{ij}$ is a positive definite matrix.
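
The same kind of Monte Carlo check applies here, now with two independent Gaussian vectors (again, sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n, samples = 3, 500_000
A = rng.standard_normal((n, n)); M = A @ A.T + np.eye(n)
B = rng.standard_normal((n, n)); N = B @ B.T + np.eye(n)

X = rng.multivariate_normal(np.zeros(n), M, size=samples)
Y = rng.multivariate_normal(np.zeros(n), N, size=samples)  # independent of X
emp = np.cov(X * Y, rowvar=False)    # covariance of the products X_i Y_i
print(np.max(np.abs(emp - M * N)))   # small: Cov(X_i Y_i, X_j Y_j) = M_ij N_ij
```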

Proof using eigendecomposition

Proof of positive semidefiniteness

Let $M = \sum_i \mu_i m_i m_i^T$ and $N = \sum_i \nu_i n_i n_i^T$. Then

$$ M \circ N = \sum_{ij} \mu_i \nu_j \left(m_i m_i^T\right) \circ \left(n_j n_j^T\right) = \sum_{ij} \mu_i \nu_j \left(m_i \circ n_j\right) \left(m_i \circ n_j\right)^T $$

Each $(m_i \circ n_j)(m_i \circ n_j)^T$ is positive semidefinite (but, except in the 1-dimensional case, not positive definite, since they are rank-1 matrices). Also, $\mu_i \nu_j > 0$; thus the sum $M \circ N$ is also positive semidefinite.
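
A sketch confirming this rank-one expansion numerically from the eigendecompositions returned by `numpy.linalg.eigh` (matrix size and seed arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
A = rng.standard_normal((n, n)); M = A @ A.T + np.eye(n)
B = rng.standard_normal((n, n)); N = B @ B.T + np.eye(n)

mu, mvec = np.linalg.eigh(M)         # columns of mvec are the m_i
nu, nvec = np.linalg.eigh(N)
S = sum(mu[i] * nu[j] * np.outer(mvec[:, i] * nvec[:, j],
                                 mvec[:, i] * nvec[:, j])
        for i in range(n) for j in range(n))
print(np.allclose(S, M * N))         # True: the expansion reproduces M ∘ N
```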

Proof of definiteness

To show that the result is positive definite requires further proof. We shall show that for any vector $a \neq 0$, we have $a^T (M \circ N) a > 0$. Continuing as above, each $a^T (m_i \circ n_j)(m_i \circ n_j)^T a \geq 0$, so it remains to show that there exist $i$ and $j$ for which the inequality is strict. For this we observe that

$$ a^T (m_i \circ n_j)(m_i \circ n_j)^T a = \left(\sum_k m_{i,k} n_{j,k} a_k\right)^2 $$

Since $N$ is positive definite, its eigenvectors $n_j$ span the space, so there is a $j$ for which the vector with components $n_{j,k} a_k$ is not identically zero; and then, since $M$ is positive definite, its eigenvectors $m_i$ span the space, so there is an $i$ for which $\sum_k m_{i,k} n_{j,k} a_k \neq 0$. For this $i$ and $j$, $\left(\sum_k m_{i,k} n_{j,k} a_k\right)^2 > 0$. This completes the proof.
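
A sketch of this last step: for a random nonzero vector $a$ (all choices below are arbitrary), at least one of the rank-one terms is strictly positive, and so is the full quadratic form:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4
A = rng.standard_normal((n, n)); M = A @ A.T + np.eye(n)
B = rng.standard_normal((n, n)); N = B @ B.T + np.eye(n)
a = rng.standard_normal(n)

mu, mvec = np.linalg.eigh(M)
nu, nvec = np.linalg.eigh(N)
# Each term is a^T (m_i ∘ n_j)(m_i ∘ n_j)^T a = (sum_k m_ik n_jk a_k)^2.
terms = [(mvec[:, i] * nvec[:, j] * a).sum() ** 2
         for i in range(n) for j in range(n)]
print(max(terms) > 0)                # some (i, j) term is strictly positive
print(a @ (M * N) @ a > 0)           # hence a^T (M ∘ N) a > 0
```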

References

  1. "Bemerkungen zur Theorie der beschränkten Bilinearformen mit unendlich vielen Veränderlichen". Journal für die reine und angewandte Mathematik (Crelle's Journal). 1911 (140): 1–28. 1911. doi:10.1515/crll.1911.140.1.
  2. Zhang, Fuzhen, ed. (2005). "The Schur Complement and Its Applications". Numerical Methods and Algorithms. 4. doi:10.1007/b105056. ISBN 0-387-24271-6. {{cite journal}}: Cite journal requires |journal= (help), page 9, Ch. 0.6 Publication under J. Schur
  3. Ledermann, W. (1983). "Issai Schur and His School in Berlin". Bulletin of the London Mathematical Society. 15 (2): 97–106. doi:10.1112/blms/15.2.97.
