
Self-adjoint operator

Linear operator equal to its own adjoint

In mathematics, a self-adjoint operator on a complex vector space V with inner product , {\displaystyle \langle \cdot ,\cdot \rangle } is a linear map A (from V to itself) that is its own adjoint. That is, A x , y = x , A y {\displaystyle \langle Ax,y\rangle =\langle x,Ay\rangle } for all x , y {\displaystyle x,y} in V. If V is finite-dimensional with a given orthonormal basis, this is equivalent to the condition that the matrix of A is a Hermitian matrix, i.e., equal to its conjugate transpose A*. By the finite-dimensional spectral theorem, V has an orthonormal basis such that the matrix of A relative to this basis is a diagonal matrix with real entries. This article deals with applying generalizations of this concept to operators on Hilbert spaces of arbitrary dimension.
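
As a concrete illustration of the finite-dimensional case, the following sketch (in Python with NumPy; the matrix, its size, and the random seed are arbitrary choices made here for the example) builds a Hermitian matrix and verifies numerically that its eigenvalues are real and that it is diagonal in an orthonormal basis of eigenvectors:

    import numpy as np

    # Symmetrize an arbitrary complex matrix into a Hermitian matrix A = A*.
    rng = np.random.default_rng(0)
    G = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    A = (G + G.conj().T) / 2

    # eigh is specialized to Hermitian matrices: it returns real eigenvalues
    # and an orthonormal basis of eigenvectors (the columns of U).
    eigvals, U = np.linalg.eigh(A)

    assert np.allclose(A, A.conj().T)                         # A equals its conjugate transpose
    assert np.allclose(U.conj().T @ U, np.eye(4))             # the eigenvectors are orthonormal
    assert np.allclose(U.conj().T @ A @ U, np.diag(eigvals))  # A is diagonal with real entries in this basis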

Self-adjoint operators are used in functional analysis and quantum mechanics. In quantum mechanics their importance lies in the Dirac–von Neumann formulation of quantum mechanics, in which physical observables such as position, momentum, angular momentum and spin are represented by self-adjoint operators on a Hilbert space. Of particular significance is the Hamiltonian operator H ^ {\displaystyle {\hat {H}}} defined by

H ^ ψ = 2 2 m 2 ψ + V ψ , {\displaystyle {\hat {H}}\psi =-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}\psi +V\psi ,}

which as an observable corresponds to the total energy of a particle of mass m in a real potential field V. Differential operators are an important class of unbounded operators.

The structure of self-adjoint operators on infinite-dimensional Hilbert spaces essentially resembles the finite-dimensional case. That is to say, operators are self-adjoint if and only if they are unitarily equivalent to real-valued multiplication operators. With suitable modifications, this result can be extended to possibly unbounded operators on infinite-dimensional spaces. Since an everywhere-defined self-adjoint operator is necessarily bounded, one needs to be more attentive to the domain issue in the unbounded case. This is explained below in more detail.

Definitions

Let H {\displaystyle H} be a Hilbert space and A {\displaystyle A} an unbounded (i.e. not necessarily bounded) operator with a dense domain Dom A H . {\displaystyle \operatorname {Dom} A\subseteq H.} This condition holds automatically when H {\displaystyle H} is finite-dimensional since Dom A = H {\displaystyle \operatorname {Dom} A=H} for every linear operator on a finite-dimensional space.

The graph of an (arbitrary) operator A {\displaystyle A} is the set G ( A ) = { ( x , A x ) x Dom A } . {\displaystyle G(A)=\{(x,Ax)\mid x\in \operatorname {Dom} A\}.} An operator B {\displaystyle B} is said to extend A {\displaystyle A} if G ( A ) G ( B ) . {\displaystyle G(A)\subseteq G(B).} This is written as A B . {\displaystyle A\subseteq B.}

Let the inner product , {\displaystyle \langle \cdot ,\cdot \rangle } be conjugate linear on the second argument. The adjoint operator A {\displaystyle A^{*}} acts on the subspace Dom A H {\displaystyle \operatorname {Dom} A^{*}\subseteq H} consisting of the elements y {\displaystyle y} such that

A x , y = x , A y , x Dom A . {\displaystyle \langle Ax,y\rangle =\langle x,A^{*}y\rangle ,\quad \forall x\in \operatorname {Dom} A.}

The densely defined operator A {\displaystyle A} is called symmetric (or Hermitian) if A A {\displaystyle A\subseteq A^{*}} , i.e., if Dom A Dom A {\displaystyle \operatorname {Dom} A\subseteq \operatorname {Dom} A^{*}} and A x = A x {\displaystyle Ax=A^{*}x} for all x Dom A {\displaystyle x\in \operatorname {Dom} A} . Equivalently, A {\displaystyle A} is symmetric if and only if

A x , y = x , A y , x , y Dom A . {\displaystyle \langle Ax,y\rangle =\langle x,Ay\rangle ,\quad \forall x,y\in \operatorname {Dom} A.}

Since Dom A Dom A {\displaystyle \operatorname {Dom} A^{*}\supseteq \operatorname {Dom} A} is dense in H {\displaystyle H} , symmetric operators are always closable (i.e. the closure of G ( A ) {\displaystyle G(A)} is the graph of an operator). Since A {\displaystyle A^{*}} is a closed extension of A {\displaystyle A} , the smallest closed extension A {\displaystyle A^{**}} of A {\displaystyle A} must be contained in A {\displaystyle A^{*}} . Hence,

A A A {\displaystyle A\subseteq A^{**}\subseteq A^{*}}

for symmetric operators and

A = A A {\displaystyle A=A^{**}\subseteq A^{*}}

for closed symmetric operators.

The densely defined operator A {\displaystyle A} is called self-adjoint if A = A {\displaystyle A=A^{*}} , that is, if and only if A {\displaystyle A} is symmetric and Dom A = Dom A {\displaystyle \operatorname {Dom} A=\operatorname {Dom} A^{*}} . Equivalently, a closed symmetric operator A {\displaystyle A} is self-adjoint if and only if A {\displaystyle A^{*}} is symmetric. If A {\displaystyle A} is self-adjoint, then x , A x {\displaystyle \left\langle x,Ax\right\rangle } is real for all x H {\displaystyle x\in H} , i.e.,

x , A x = A x , x ¯ = x , A x ¯ R , x H . {\displaystyle \langle x,Ax\rangle ={\overline {\langle Ax,x\rangle }}={\overline {\langle x,Ax\rangle }}\in \mathbb {R} ,\quad \forall x\in H.}

A symmetric operator A {\displaystyle A} is said to be essentially self-adjoint if the closure of A {\displaystyle A} is self-adjoint. Equivalently, A {\displaystyle A} is essentially self-adjoint if it has a unique self-adjoint extension. In practical terms, having an essentially self-adjoint operator is almost as good as having a self-adjoint operator, since we merely need to take the closure to obtain a self-adjoint operator.

In physics, the term Hermitian is used for symmetric and self-adjoint operators alike. The subtle difference between the two is generally overlooked.

Bounded self-adjoint operators

Let H {\displaystyle H} be a Hilbert space and A : Dom ( A ) H {\displaystyle A:\operatorname {Dom} (A)\to H} a symmetric operator. According to Hellinger–Toeplitz theorem, if Dom ( A ) = H {\displaystyle \operatorname {Dom} (A)=H} then A {\displaystyle A} is necessarily bounded. A bounded operator A : H H {\displaystyle A:H\to H} is self-adjoint if

A x , y = x , A y , x , y H . {\displaystyle \langle Ax,y\rangle =\langle x,Ay\rangle ,\quad \forall x,y\in H.}

Every bounded operator T : H H {\displaystyle T:H\to H} can be written in the complex form T = A + i B {\displaystyle T=A+iB} where A : H H {\displaystyle A:H\to H} and B : H H {\displaystyle B:H\to H} are bounded self-adjoint operators.
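
For bounded operators this decomposition is explicit: one may take A = (T + T*)/2 and B = (T − T*)/(2i), both of which are self-adjoint. A minimal numerical sketch, with matrices standing in for bounded operators on a finite-dimensional space (the particular matrix and seed are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(1)
    T = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

    A = (T + T.conj().T) / 2      # "real part" of T, self-adjoint
    B = (T - T.conj().T) / (2j)   # "imaginary part" of T, self-adjoint

    assert np.allclose(A, A.conj().T)
    assert np.allclose(B, B.conj().T)
    assert np.allclose(T, A + 1j * B)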

Alternatively, every positive bounded linear operator A : H H {\displaystyle A:H\to H} is self-adjoint if the Hilbert space H {\displaystyle H} is complex.

Properties

A bounded self-adjoint operator A : H H {\displaystyle A:H\to H} defined on Dom ( A ) = H {\displaystyle \operatorname {Dom} \left(A\right)=H} has the following properties:

  • A : H Im A H {\displaystyle A:H\to \operatorname {Im} A\subseteq H} is invertible if the image of A {\displaystyle A} is dense in H . {\displaystyle H.}
  • The operator norm is given by A = sup { | x , A x | : x = 1 } {\displaystyle \left\|A\right\|=\sup \left\{|\langle x,Ax\rangle |:\|x\|=1\right\}}
  • If λ {\displaystyle \lambda } is an eigenvalue of A {\displaystyle A} then | λ | sup { | x , A x | : x 1 } {\displaystyle |\lambda |\leq \sup \left\{|\langle x,Ax\rangle |:\|x\|\leq 1\right\}} ; the eigenvalues are real and the corresponding eigenvectors are orthogonal.

Bounded self-adjoint operators do not necessarily have an eigenvalue. If, however, A {\displaystyle A} is a compact self-adjoint operator, then it always has an eigenvalue λ {\displaystyle \lambda } with | λ | = A {\displaystyle |\lambda |=\|A\|} and a corresponding normalized eigenvector.
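
For Hermitian matrices, which are compact self-adjoint operators on a finite-dimensional space, these properties can be checked numerically: the supremum of |⟨x, Ax⟩| over unit vectors is attained at an eigenvector belonging to the eigenvalue of largest modulus and equals the operator norm. A sketch (the matrix and seed are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(2)
    G = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
    A = (G + G.conj().T) / 2

    eigvals, U = np.linalg.eigh(A)
    op_norm = np.linalg.norm(A, 2)                 # operator (spectral) norm of A

    # The supremum of |<x, Ax>| over unit vectors is attained at an eigenvector
    # of the eigenvalue of largest modulus, and equals ||A||.
    k = np.argmax(np.abs(eigvals))
    x = U[:, k]
    assert np.isclose(abs(np.vdot(x, A @ x)), op_norm)

    # Any other unit vector gives a value no larger than ||A||.
    y = rng.standard_normal(5) + 1j * rng.standard_normal(5)
    y /= np.linalg.norm(y)
    assert abs(np.vdot(y, A @ y)) <= op_norm + 1e-12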

Spectrum of self-adjoint operators

See also: Spectrum (functional analysis)

Let A : Dom ( A ) H {\displaystyle A:\operatorname {Dom} (A)\to H} be an unbounded operator. The resolvent set (or regular set) of A {\displaystyle A} is defined as

ρ ( A ) = { λ C : ( A λ I ) 1 bounded and densely defined } . {\displaystyle \rho (A)=\left\{\lambda \in \mathbb {C} \,:\,\exists (A-\lambda I)^{-1}\;{\text{bounded and densely defined}}\right\}.}

If A {\displaystyle A} is bounded, the definition reduces to A λ I {\displaystyle A-\lambda I} being bijective on H {\displaystyle H} . The spectrum of A {\displaystyle A} is defined as the complement

σ ( A ) = C ρ ( A ) . {\displaystyle \sigma (A)=\mathbb {C} \setminus \rho (A).}

In finite dimensions, σ ( A ) C {\displaystyle \sigma (A)\subseteq \mathbb {C} } consists exclusively of (complex) eigenvalues. The spectrum of a self-adjoint operator is always real (i.e. σ ( A ) R {\displaystyle \sigma (A)\subseteq \mathbb {R} } ), though non-self-adjoint operators with real spectrum exist as well. For bounded (normal) operators, however, the spectrum is real if and only if the operator is self-adjoint. This implies, for example, that a non-self-adjoint operator with real spectrum is necessarily unbounded.

As a preliminary, define S = { x Dom A x = 1 } , {\displaystyle S=\{x\in \operatorname {Dom} A\mid \Vert x\Vert =1\},} m = inf x S A x , x {\displaystyle \textstyle m=\inf _{x\in S}\langle Ax,x\rangle } and M = sup x S A x , x {\displaystyle \textstyle M=\sup _{x\in S}\langle Ax,x\rangle } with m , M R { ± } {\displaystyle m,M\in \mathbb {R} \cup \{\pm \infty \}} . Then, for every λ C {\displaystyle \lambda \in \mathbb {C} } and every x Dom A , {\displaystyle x\in \operatorname {Dom} A,}

( A λ ) x d ( λ ) x , {\displaystyle \Vert (A-\lambda )x\Vert \geq d(\lambda )\cdot \Vert x\Vert ,}

where d ( λ ) = inf r ∈ [ m , M ] | r − λ | . {\displaystyle \textstyle d(\lambda )=\inf _{r\in [m,M]}|r-\lambda |.}

Indeed, let x Dom A { 0 } . {\displaystyle x\in \operatorname {Dom} A\setminus \{0\}.} By the Cauchy–Schwarz inequality,

( A λ ) x | ( A λ ) x , x | x = | A x x , x x λ | x d ( λ ) x . {\displaystyle \Vert (A-\lambda )x\Vert \geq {\frac {|\langle (A-\lambda )x,x\rangle |}{\Vert x\Vert }}=\left|\left\langle A{\frac {x}{\Vert x\Vert }},{\frac {x}{\Vert x\Vert }}\right\rangle -\lambda \right|\cdot \Vert x\Vert \geq d(\lambda )\cdot \Vert x\Vert .}

If λ ∉ [ m , M ] , {\displaystyle \lambda \notin [m,M],} then d ( λ ) > 0 , {\displaystyle d(\lambda )>0,} and A λ I {\displaystyle A-\lambda I} is called bounded below.
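
For a Hermitian matrix, m and M are simply the smallest and largest eigenvalues, and the bound can be checked directly; the following sketch samples a few values of λ and random vectors (matrix, seed, and test points are arbitrary choices made here):

    import numpy as np

    rng = np.random.default_rng(3)
    G = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
    A = (G + G.conj().T) / 2

    eigvals = np.linalg.eigvalsh(A)
    m, M = eigvals.min(), eigvals.max()   # m = inf <Ax, x>, M = sup <Ax, x> over unit vectors

    def d(lam):
        # Distance from lam to the real interval [m, M].
        nearest = min(max(np.real(lam), m), M)
        return abs(lam - nearest)

    for lam in [2j, M + 1.5, m - 0.7 + 0.3j]:
        for _ in range(100):
            x = rng.standard_normal(6) + 1j * rng.standard_normal(6)
            assert np.linalg.norm(A @ x - lam * x) >= d(lam) * np.linalg.norm(x) - 1e-9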

Theorem — Self-adjoint operator has real spectrum

Proof

Let A {\displaystyle A} be self-adjoint and denote R λ = A λ I {\displaystyle R_{\lambda }=A-\lambda I} with λ C . {\displaystyle \lambda \in \mathbb {C} .} It suffices to prove that σ ( A ) ⊆ [ m , M ] . {\displaystyle \sigma (A)\subseteq [m,M].}

  1. Let λ C [ m , M ] . {\displaystyle \lambda \in \mathbb {C} \setminus [m,M].} The goal is to prove the existence and boundedness of R λ 1 , {\displaystyle R_{\lambda }^{-1},} and show that Dom R λ 1 = H . {\displaystyle \operatorname {Dom} R_{\lambda }^{-1}=H.} We begin by showing that ker R λ = { 0 } {\displaystyle \ker R_{\lambda }=\{0\}} and Im R λ = H . {\displaystyle \operatorname {Im} R_{\lambda }=H.}
    1. As shown above, R λ {\displaystyle R_{\lambda }} is bounded below, i.e. R λ x d ( λ ) x , {\displaystyle \Vert R_{\lambda }x\Vert \geq d(\lambda )\cdot \Vert x\Vert ,} with d ( λ ) > 0. {\displaystyle d(\lambda )>0.} The triviality of ker R λ {\displaystyle \ker R_{\lambda }} follows.
    2. It remains to show that Im R λ = H . {\displaystyle \operatorname {Im} R_{\lambda }=H.} Indeed,
      1. Im R λ {\displaystyle \operatorname {Im} R_{\lambda }} is closed. To prove this, pick a sequence y n = R λ x n Im R λ {\displaystyle y_{n}=R_{\lambda }x_{n}\in \operatorname {Im} R_{\lambda }} converging to some y H . {\displaystyle y\in H.} Since x n x m 1 d ( λ ) y n y m , {\displaystyle \|x_{n}-x_{m}\|\leq {\frac {1}{d(\lambda )}}\|y_{n}-y_{m}\|,} x n {\displaystyle x_{n}} is fundamental. Hence, it converges to some x H . {\displaystyle x\in H.} Furthermore, y n + λ x n = A x n {\displaystyle y_{n}+\lambda x_{n}=Ax_{n}} and y n + λ x n y + λ x . {\displaystyle y_{n}+\lambda x_{n}\to y+\lambda x.} The arguments made thus far hold for any symmetric operator. It now follows from self-adjointness that A {\displaystyle A} is closed, so x Dom A = Dom R λ , {\displaystyle x\in \operatorname {Dom} A=\operatorname {Dom} R_{\lambda },} A x = y + λ x Im A , {\displaystyle Ax=y+\lambda x\in \operatorname {Im} A,} and consequently y = R λ x Im R λ . {\displaystyle y=R_{\lambda }x\in \operatorname {Im} R_{\lambda }.}
      2. Im R λ {\displaystyle \operatorname {Im} R_{\lambda }} is dense in H . {\displaystyle H.} The self-adjointness of A {\displaystyle A} (i.e. A = A {\displaystyle A^{*}=A} ) implies R λ = R λ ¯ {\displaystyle R_{\lambda }^{*}=R_{\bar {\lambda }}} and thus ( Im R λ ) = ker R λ ¯ {\displaystyle \left(\operatorname {Im} R_{\lambda }\right)^{\perp }=\ker R_{\bar {\lambda }}} . The subsequent inclusion λ ¯ C [ m , M ] {\displaystyle {\bar {\lambda }}\in \mathbb {C} \setminus [m,M]} implies d ( λ ¯ ) > 0 {\displaystyle d({\bar {\lambda }})>0} and, consequently, ker R λ ¯ = { 0 } . {\displaystyle \ker R_{\bar {\lambda }}=\{0\}.}
  2. The operator R λ : Dom A H {\displaystyle R_{\lambda }\colon \operatorname {Dom} A\to H} has now been proven to be bijective, so R λ 1 {\displaystyle R_{\lambda }^{-1}} exists and is everywhere defined. The graph of R λ 1 {\displaystyle R_{\lambda }^{-1}} is the set { ( R λ x , x ) x Dom A } . {\displaystyle \{(R_{\lambda }x,x)\mid x\in \operatorname {Dom} A\}.} Since R λ {\displaystyle R_{\lambda }} is closed (because A {\displaystyle A} is), so is R λ 1 . {\displaystyle R_{\lambda }^{-1}.} By closed graph theorem, R λ 1 {\displaystyle R_{\lambda }^{-1}} is bounded, so λ σ ( A ) . {\displaystyle \lambda \notin \sigma (A).}

Theorem — Symmetric operator with real spectrum is self-adjoint

Proof
  1. A {\displaystyle A} is symmetric; therefore A A {\displaystyle A\subseteq A^{*}} and A λ I A λ I {\displaystyle A-\lambda I\subseteq A^{*}-\lambda I} for every λ C {\displaystyle \lambda \in \mathbb {C} } . Let σ ( A ) ⊆ [ m , M ] . {\displaystyle \sigma (A)\subseteq [m,M].} If λ ∉ [ m , M ] {\displaystyle \lambda \notin [m,M]} then λ ¯ ∉ [ m , M ] {\displaystyle {\bar {\lambda }}\notin [m,M]} and the operators { A λ I , A λ ¯ I } : Dom A H {\displaystyle \{A-\lambda I,A-{\bar {\lambda }}I\}:\operatorname {Dom} A\to H} are both bijective.
  2. A λ I = A λ I . {\displaystyle A-\lambda I=A^{*}-\lambda I.} Indeed, H = Im ( A λ I ) Im ( A λ I ) {\displaystyle H=\operatorname {Im} (A-\lambda I)\subseteq \operatorname {Im} (A^{*}-\lambda I)} . That is, if Dom ( A λ I ) Dom ( A λ I ) {\displaystyle \operatorname {Dom} (A-\lambda I)\subsetneq \operatorname {Dom} (A^{*}-\lambda I)} then A λ I {\displaystyle A^{*}-\lambda I} would not be injective (i.e. ker ( A λ I ) { 0 } {\displaystyle \ker(A^{*}-\lambda I)\neq \{0\}} ). But Im ( A λ ¯ I ) = ker ( A λ I ) {\displaystyle \operatorname {Im} (A-{\bar {\lambda }}I)^{\perp }=\ker(A^{*}-\lambda I)} and, hence, Im ( A λ ¯ I ) H . {\displaystyle \operatorname {Im} (A-{\bar {\lambda }}I)\neq H.} This contradicts the bijectiveness.
  3. The equality A λ I = A λ I {\displaystyle A-\lambda I=A^{*}-\lambda I} shows that A = A , {\displaystyle A=A^{*},} i.e. A {\displaystyle A} is self-adjoint. Indeed, it suffices to prove that A A . {\displaystyle A^{*}\subseteq A.} For every x Dom A {\displaystyle x\in \operatorname {Dom} A^{*}} and y = A x , {\displaystyle y=A^{*}x,} A x = y ( A λ I ) x = y λ x ( A λ I ) x = y λ x A x = y . {\displaystyle A^{*}x=y\Leftrightarrow (A^{*}-\lambda I)x=y-\lambda x\Leftrightarrow (A-\lambda I)x=y-\lambda x\Leftrightarrow Ax=y.}

Spectral theorem

Main article: Spectral theorem

In the physics literature, the spectral theorem is often stated by saying that a self-adjoint operator has an orthonormal basis of eigenvectors. Physicists are well aware, however, of the phenomenon of "continuous spectrum"; thus, when they speak of an "orthonormal basis" they mean either an orthonormal basis in the classic sense or some continuous analog thereof. In the case of the momentum operator P = i d d x {\textstyle P=-i{\frac {d}{dx}}} , for example, physicists would say that the eigenvectors are the functions f p ( x ) := e i p x {\displaystyle f_{p}(x):=e^{ipx}} , which are clearly not in the Hilbert space L 2 ( R ) {\displaystyle L^{2}(\mathbb {R} )} . (Physicists would say that the eigenvectors are "non-normalizable.") Physicists would then go on to say that these "generalized eigenvectors" form an "orthonormal basis in the continuous sense" for L 2 ( R ) {\displaystyle L^{2}(\mathbb {R} )} , after replacing the usual Kronecker delta δ i , j {\displaystyle \delta _{i,j}} by a Dirac delta function δ ( p p ) {\displaystyle \delta \left(p-p'\right)} .

Although these statements may seem disconcerting to mathematicians, they can be made rigorous by use of the Fourier transform, which allows a general L 2 {\displaystyle L^{2}} function to be expressed as a "superposition" (i.e., integral) of the functions e i p x {\displaystyle e^{ipx}} , even though these functions are not in L 2 {\displaystyle L^{2}} . The Fourier transform "diagonalizes" the momentum operator; that is, it converts it into the operator of multiplication by p {\displaystyle p} , where p {\displaystyle p} is the variable of the Fourier transform.
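
This diagonalization of the momentum operator can be illustrated numerically: on a periodic grid, the discrete Fourier transform carries P = −i d/dx into multiplication by the frequency variable. A sketch (grid size, test function, and tolerance are arbitrary choices made here):

    import numpy as np

    N, L = 256, 2 * np.pi
    x = np.arange(N) * (L / N)
    k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi   # Fourier variable (integer frequencies on this grid)

    f = np.exp(np.sin(x))                        # a smooth periodic test function

    # Apply P = -i d/dx by moving to Fourier space, multiplying by k, and coming back.
    Pf = np.fft.ifft(k * np.fft.fft(f))

    # Compare with the exact result  -i f'(x) = -i cos(x) exp(sin(x)).
    assert np.allclose(Pf, -1j * np.cos(x) * np.exp(np.sin(x)), atol=1e-8)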

The spectral theorem in general can be expressed similarly as the possibility of "diagonalizing" an operator by showing it is unitarily equivalent to a multiplication operator. Other versions of the spectral theorem are similarly intended to capture the idea that a self-adjoint operator can have "eigenvectors" that are not actually in the Hilbert space in question.

Multiplication operator form of the spectral theorem

Firstly, let ( X , Σ , μ ) {\displaystyle (X,\Sigma ,\mu )} be a σ-finite measure space and h : X R {\displaystyle h:X\to \mathbb {R} } a measurable function on X {\displaystyle X} . Then the operator T h : Dom T h L 2 ( X , μ ) {\displaystyle T_{h}:\operatorname {Dom} T_{h}\to L^{2}(X,\mu )} , defined by

T h ψ ( x ) = h ( x ) ψ ( x ) , ψ Dom T h , {\displaystyle T_{h}\psi (x)=h(x)\psi (x),\quad \forall \psi \in \operatorname {Dom} T_{h},}

where

Dom T h := { ψ L 2 ( X , μ ) | h ψ L 2 ( X , μ ) } , {\displaystyle \operatorname {Dom} T_{h}:=\left\{\psi \in L^{2}(X,\mu )\;|\;h\psi \in L^{2}(X,\mu )\right\},}

is called a multiplication operator. Any multiplication operator is a self-adjoint operator.

Secondly, two operators A {\displaystyle A} and B {\displaystyle B} with dense domains Dom A H 1 {\displaystyle \operatorname {Dom} A\subseteq H_{1}} and Dom B H 2 {\displaystyle \operatorname {Dom} B\subseteq H_{2}} in Hilbert spaces H 1 {\displaystyle H_{1}} and H 2 {\displaystyle H_{2}} , respectively, are unitarily equivalent if and only if there is a unitary transformation U : H 1 H 2 {\displaystyle U:H_{1}\to H_{2}} such that:

  • U Dom A = Dom B , {\displaystyle U\operatorname {Dom} A=\operatorname {Dom} B,}
  • U A U 1 ξ = B ξ , ξ Dom B . {\displaystyle UAU^{-1}\xi =B\xi ,\quad \forall \xi \in \operatorname {Dom} B.}

If unitarily equivalent A {\displaystyle A} and B {\displaystyle B} are bounded, then A H 1 = B H 2 {\displaystyle \|A\|_{H_{1}}=\|B\|_{H_{2}}} ; if A {\displaystyle A} is self-adjoint, then so is B {\displaystyle B} .

Theorem — Any self-adjoint operator A {\displaystyle A} on a separable Hilbert space is unitarily equivalent to a multiplication operator, i.e.,

U A U 1 ψ ( x ) = h ( x ) ψ ( x ) , ψ U Dom ( A ) {\displaystyle UAU^{-1}\psi (x)=h(x)\psi (x),\quad \forall \psi \in U\operatorname {Dom} (A)}

The spectral theorem holds for both bounded and unbounded self-adjoint operators. Proof of the latter follows by reduction to the spectral theorem for unitary operators. We might note that if T {\displaystyle T} is multiplication by h {\displaystyle h} , then the spectrum of T {\displaystyle T} is just the essential range of h {\displaystyle h} .
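
In the finite-dimensional case the theorem reduces to ordinary diagonalization: the unitary built from an orthonormal eigenbasis carries A into multiplication by the function h(j) = λj on the measure space {1, …, n} with counting measure. A minimal sketch of this reduction (the matrix and seed are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(4)
    G = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    A = (G + G.conj().T) / 2

    eigvals, V = np.linalg.eigh(A)      # columns of V form an orthonormal eigenbasis
    U = V.conj().T                      # unitary mapping the original coordinates to the eigenbasis

    psi = rng.standard_normal(4) + 1j * rng.standard_normal(4)
    phi = U @ psi                       # coordinates of psi in the eigenbasis

    # In the new coordinates, A acts as multiplication by h(j) = eigvals[j].
    assert np.allclose(U @ (A @ psi), eigvals * phi)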

More complete versions of the spectral theorem exist as well that involve direct integrals and carry with them the notion of "generalized eigenvectors".

Functional calculus

One application of the spectral theorem is to define a functional calculus. That is, if f {\displaystyle f} is a function on the real line and T {\displaystyle T} is a self-adjoint operator, we wish to define the operator f ( T ) {\displaystyle f(T)} . The spectral theorem shows that if T {\displaystyle T} is represented as the operator of multiplication by h {\displaystyle h} , then f ( T ) {\displaystyle f(T)} is the operator of multiplication by the composition f h {\displaystyle f\circ h} .

One example from quantum mechanics is the case where T {\displaystyle T} is the Hamiltonian operator H ^ {\displaystyle {\hat {H}}} . If H ^ {\displaystyle {\hat {H}}} has a true orthonormal basis of eigenvectors e j {\displaystyle e_{j}} with eigenvalues λ j {\displaystyle \lambda _{j}} , then f ( H ^ ) := e i t H ^ / {\displaystyle f({\hat {H}}):=e^{-it{\hat {H}}/\hbar }} can be defined as the unique bounded operator with eigenvalues f ( λ j ) := e i t λ j / {\displaystyle f(\lambda _{j}):=e^{-it\lambda _{j}/\hbar }} such that:

f ( H ^ ) e j = f ( λ j ) e j . {\displaystyle f({\hat {H}})e_{j}=f(\lambda _{j})e_{j}.}

The goal of functional calculus is to extend this idea to the case where T {\displaystyle T} has continuous spectrum (i.e. where T {\displaystyle T} has no normalizable eigenvectors).
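
For an operator with a true orthonormal eigenbasis, the recipe amounts to applying f to the eigenvalues. The following sketch compares the spectral definition of e^{−itH} (with ℏ set to 1) against the matrix exponential from SciPy; the matrix H is an arbitrary stand-in chosen here for a Hamiltonian with discrete spectrum:

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(5)
    G = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    H = (G + G.conj().T) / 2
    t = 0.7

    lam, V = np.linalg.eigh(H)                  # H e_j = lam_j e_j, with e_j the columns of V

    # Functional calculus: f(H) = V diag(f(lam_j)) V*, here with f(E) = exp(-i t E).
    f_H = V @ np.diag(np.exp(-1j * t * lam)) @ V.conj().T

    assert np.allclose(f_H, expm(-1j * t * H))  # agrees with the matrix exponential
    assert np.allclose(f_H @ V[:, 0], np.exp(-1j * t * lam[0]) * V[:, 0])  # f(H) e_j = f(lam_j) e_j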

It has been customary to introduce the following notation

E ( λ ) = 1 ( , λ ] ( T ) {\displaystyle \operatorname {E} (\lambda )=\mathbf {1} _{(-\infty ,\lambda ]}(T)}

where 1 ( , λ ] {\displaystyle \mathbf {1} _{(-\infty ,\lambda ]}} is the indicator function of the interval ( , λ ] {\displaystyle (-\infty ,\lambda ]} . The family of projection operators E(λ) is called resolution of the identity for T. Moreover, the following Stieltjes integral representation for T can be proved:

T = + λ d E ( λ ) . {\displaystyle T=\int _{-\infty }^{+\infty }\lambda d\operatorname {E} (\lambda ).}
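
In finite dimensions the resolution of the identity is a step function: E(λ) is the orthogonal projection onto the span of the eigenvectors with eigenvalue at most λ, and the Stieltjes integral reduces to a finite sum over the jumps of E. A sketch (the matrix is an arbitrary example, which generically has simple eigenvalues):

    import numpy as np

    rng = np.random.default_rng(6)
    G = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
    T = (G + G.conj().T) / 2

    lam, V = np.linalg.eigh(T)

    def E(t):
        # Spectral projection onto eigenvalues <= t: the resolution of the identity at t.
        cols = V[:, lam <= t]
        return cols @ cols.conj().T

    # The Stieltjes integral of lambda dE(lambda) becomes a sum over the jumps of E.
    jumps = [E(l) - E(l - 1e-9) for l in lam]          # rank-one increment at each simple eigenvalue
    assert np.allclose(sum(l * dE for l, dE in zip(lam, jumps)), T)
    assert np.allclose(sum(jumps), np.eye(5))          # resolution of unity: the increments sum to I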

Formulation in the physics literature

In quantum mechanics, Dirac notation is used as combined expression for both the spectral theorem and the Borel functional calculus. That is, if H is self-adjoint and f is a Borel function,

f ( H ) = d E | Ψ E f ( E ) Ψ E | {\displaystyle f(H)=\int dE\left|\Psi _{E}\rangle f(E)\langle \Psi _{E}\right|}

with

H | Ψ E = E | Ψ E {\displaystyle H\left|\Psi _{E}\right\rangle =E\left|\Psi _{E}\right\rangle }

where the integral runs over the whole spectrum of H. The notation suggests that H is diagonalized by the eigenvectors ΨE. Such a notation is purely formal. The resolution of the identity (sometimes called projection-valued measures) formally resembles the rank-1 projections | Ψ E Ψ E | {\displaystyle \left|\Psi _{E}\right\rangle \left\langle \Psi _{E}\right|} . In the Dirac notation, (projective) measurements are described via eigenvalues and eigenstates, both purely formal objects. As one would expect, this does not survive passage to the resolution of the identity. In the latter formulation, measurements are described using the spectral measure of | Ψ {\displaystyle |\Psi \rangle } , if the system is prepared in | Ψ {\displaystyle |\Psi \rangle } prior to the measurement. Alternatively, if one would like to preserve the notion of eigenstates and make it rigorous, rather than merely formal, one can replace the state space by a suitable rigged Hilbert space.

If f = 1, the theorem is referred to as resolution of unity:

I = d E | Ψ E Ψ E | {\displaystyle I=\int dE\left|\Psi _{E}\right\rangle \left\langle \Psi _{E}\right|}

In the case where H eff = H i Γ {\displaystyle H_{\text{eff}}=H-i\Gamma } is the sum of a Hermitian operator H and a skew-Hermitian (see skew-Hermitian matrix) operator i Γ {\displaystyle -i\Gamma } , one defines the biorthogonal basis set

H eff | Ψ E = E | Ψ E {\displaystyle H_{\text{eff}}^{*}\left|\Psi _{E}^{*}\right\rangle =E^{*}\left|\Psi _{E}^{*}\right\rangle }

and writes the spectral theorem as:

f ( H eff ) = d E | Ψ E f ( E ) Ψ E | {\displaystyle f\left(H_{\text{eff}}\right)=\int dE\left|\Psi _{E}\right\rangle f(E)\left\langle \Psi _{E}^{*}\right|}

(See Feshbach–Fano partitioning for the context where such operators appear in scattering theory).

Formulation for symmetric operators

The spectral theorem applies only to self-adjoint operators, and not in general to symmetric operators. Nevertheless, we can at this point give a simple example of a symmetric (specifically, an essentially self-adjoint) operator that has an orthonormal basis of eigenvectors. Consider the complex Hilbert space L 2 [ 0 , 1 ] {\displaystyle L^{2}[0,1]} and the differential operator

A = d 2 d x 2 {\displaystyle A=-{\frac {d^{2}}{dx^{2}}}}

with D o m ( A ) {\displaystyle \mathrm {Dom} (A)} consisting of all complex-valued infinitely differentiable functions f on [0, 1] satisfying the boundary conditions

f ( 0 ) = f ( 1 ) = 0. {\displaystyle f(0)=f(1)=0.}

Then integration by parts of the inner product shows that A is symmetric. The eigenfunctions of A are the sinusoids

f n ( x ) = sin ( n π x ) n = 1 , 2 , {\displaystyle f_{n}(x)=\sin(n\pi x)\qquad n=1,2,\ldots }

with the real eigenvalues n 2 π 2 {\displaystyle n^{2}\pi ^{2}} ; the well-known orthogonality of the sine functions follows as a consequence of A being symmetric.

The operator A can be seen to have a compact inverse, meaning that the corresponding differential equation Af = g is solved by some integral (and therefore compact) operator G. The compact symmetric operator G then has a countable family of eigenvectors which are complete in L 2 [ 0 , 1 ] {\displaystyle L^{2}[0,1]} . The same can then be said for A.
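
The eigenvalues n²π² can also be seen numerically: a finite-difference discretization of −d²/dx² on [0, 1] with Dirichlet boundary conditions is a real symmetric matrix whose lowest eigenvalues approximate them. A sketch (the grid size and tolerance are arbitrary choices):

    import numpy as np

    N = 500                                   # number of interior grid points
    h = 1.0 / (N + 1)

    # Standard second-difference matrix for -d^2/dx^2 with f(0) = f(1) = 0.
    A = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
         - np.diag(np.ones(N - 1), -1)) / h**2

    eigvals = np.linalg.eigvalsh(A)
    n = np.arange(1, 6)
    assert np.allclose(eigvals[:5], (n * np.pi) ** 2, rtol=1e-3)   # ~ 9.87, 39.5, 88.8, 157.9, 246.7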

Pure point spectrum

Not to be confused with Discrete spectrum (mathematics).

A self-adjoint operator A on H has pure point spectrum if and only if H has an orthonormal basis {ei}i ∈ I consisting of eigenvectors for A.

Example. The Hamiltonian for the harmonic oscillator has a quadratic potential V, that is

Δ + | x | 2 . {\displaystyle -\Delta +|x|^{2}.}

This Hamiltonian has pure point spectrum; this is typical for bound state Hamiltonians in quantum mechanics. As was pointed out in a previous example, a sufficient condition that an unbounded symmetric operator has eigenvectors which form a Hilbert space basis is that it has a compact inverse.
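
The pure point spectrum of the harmonic-oscillator Hamiltonian is easy to observe numerically: in one dimension, discretizing −d²/dx² + x² on a large interval reproduces the well-known eigenvalues 2n + 1 in these units. A sketch (interval, grid size, and tolerance are arbitrary choices made here):

    import numpy as np

    L, N = 12.0, 1200                          # truncate the real line to [-L, L] with N grid points
    x = np.linspace(-L, L, N)
    h = 2 * L / (N - 1)

    # Finite-difference Hamiltonian  H = -d^2/dx^2 + x^2  (Dirichlet walls far from the origin).
    H = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
         - np.diag(np.ones(N - 1), -1)) / h**2 + np.diag(x**2)

    eigvals = np.linalg.eigvalsh(H)
    assert np.allclose(eigvals[:5], [1, 3, 5, 7, 9], atol=0.02)    # pure point spectrum 2n + 1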

Symmetric vs self-adjoint operators

See also: Extensions of symmetric operators

Although the distinction between a symmetric operator and a (essentially) self-adjoint operator is subtle, it is important since self-adjointness is the hypothesis in the spectral theorem. Here we discuss some concrete examples of the distinction.

Boundary conditions

In the case where the Hilbert space is a space of functions on a bounded domain, these distinctions have to do with a familiar issue in quantum physics: One cannot define an operator—such as the momentum or Hamiltonian operator—on a bounded domain without specifying boundary conditions. In mathematical terms, choosing the boundary conditions amounts to choosing an appropriate domain for the operator. Consider, for example, the Hilbert space L 2 ( [ 0 , 1 ] ) {\displaystyle L^{2}([0,1])} (the space of square-integrable functions on the interval [0, 1]). Let us define a momentum operator A on this space by the usual formula, setting the Planck constant to 1:

A f = i d f d x . {\displaystyle Af=-i{\frac {df}{dx}}.}

We must now specify a domain for A, which amounts to choosing boundary conditions. If we choose

Dom ( A ) = { smooth functions } , {\displaystyle \operatorname {Dom} (A)=\left\{{\text{smooth functions}}\right\},}

then A is not symmetric (because the boundary terms in the integration by parts do not vanish).
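
To see this explicitly, integrate by parts once (a short computation in the notation above, with the inner product conjugate-linear in the second argument):

{\displaystyle \langle Af,g\rangle =\int _{0}^{1}(-if'(x)){\overline {g(x)}}\,dx=-i\left(f(1){\overline {g(1)}}-f(0){\overline {g(0)}}\right)+\int _{0}^{1}f(x){\overline {-ig'(x)}}\,dx=-i\left(f(1){\overline {g(1)}}-f(0){\overline {g(0)}}\right)+\langle f,Ag\rangle .}

Thus A is symmetric on a given domain exactly when the boundary term f(1)g(1)‾ − f(0)g(0)‾ vanishes for all f, g in that domain; with no boundary conditions it does not vanish, while with the Dirichlet or periodic conditions considered below it does.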

If we choose

Dom ( A ) = { smooth functions f f ( 0 ) = f ( 1 ) = 0 } , {\displaystyle \operatorname {Dom} (A)=\left\{{\text{smooth functions}}\,f\mid f(0)=f(1)=0\right\},}

then using integration by parts, one can easily verify that A is symmetric. This operator is not essentially self-adjoint, however, basically because we have specified too many boundary conditions on the domain of A, which makes the domain of the adjoint too big (see also the example below).

Specifically, with the above choice of domain for A, the domain of the closure A c l {\displaystyle A^{\mathrm {cl} }} of A is

Dom ( A c l ) = { functions  f  with two derivatives in  L 2 f ( 0 ) = f ( 1 ) = 0 } , {\displaystyle \operatorname {Dom} \left(A^{\mathrm {cl} }\right)=\left\{{\text{functions }}f{\text{ with two derivatives in }}L^{2}\mid f(0)=f(1)=0\right\},}

whereas the domain of the adjoint A {\displaystyle A^{*}} of A is

Dom ( A ) = { functions  f  with two derivatives in  L 2 } . {\displaystyle \operatorname {Dom} \left(A^{*}\right)=\left\{{\text{functions }}f{\text{ with two derivatives in }}L^{2}\right\}.}

That is to say, the domain of the closure has the same boundary conditions as the domain of A itself, just a less stringent smoothness assumption. Meanwhile, since there are "too many" boundary conditions on A, there are "too few" (actually, none at all in this case) for A {\displaystyle A^{*}} . If we compute g , A f {\displaystyle \langle g,Af\rangle } for f Dom ( A ) {\displaystyle f\in \operatorname {Dom} (A)} using integration by parts, then since f {\displaystyle f} vanishes at both ends of the interval, no boundary conditions on g {\displaystyle g} are needed to cancel out the boundary terms in the integration by parts. Thus, any sufficiently smooth function g {\displaystyle g} is in the domain of A {\displaystyle A^{*}} , with A g = i d g / d x {\displaystyle A^{*}g=-i\,dg/dx} .

Since the domain of the closure and the domain of the adjoint do not agree, A is not essentially self-adjoint. After all, a general result says that the domain of the adjoint of A c l {\displaystyle A^{\mathrm {cl} }} is the same as the domain of the adjoint of A. Thus, in this case, the domain of the adjoint of A c l {\displaystyle A^{\mathrm {cl} }} is bigger than the domain of A c l {\displaystyle A^{\mathrm {cl} }} itself, showing that A c l {\displaystyle A^{\mathrm {cl} }} is not self-adjoint, which by definition means that A is not essentially self-adjoint.

The problem with the preceding example is that we imposed too many boundary conditions on the domain of A. A better choice of domain would be to use periodic boundary conditions:

Dom ( A ) = { smooth functions f f ( 0 ) = f ( 1 ) } . {\displaystyle \operatorname {Dom} (A)=\{{\text{smooth functions}}\,f\mid f(0)=f(1)\}.}

With this domain, A is essentially self-adjoint.

In this case, we can understand the implications of the domain issues for the spectral theorem. If we use the first choice of domain (with no boundary conditions), all functions f β ( x ) = e β x {\displaystyle f_{\beta }(x)=e^{\beta x}} for β C {\displaystyle \beta \in \mathbb {C} } are eigenvectors, with eigenvalues i β {\displaystyle -i\beta } , and so the spectrum is the whole complex plane. If we use the second choice of domain (with Dirichlet boundary conditions), A has no eigenvectors at all. If we use the third choice of domain (with periodic boundary conditions), we can find an orthonormal basis of eigenvectors for A, the functions f n ( x ) := e 2 π i n x {\displaystyle f_{n}(x):=e^{2\pi inx}} . Thus, in this case finding a domain such that A is self-adjoint is a compromise: the domain has to be small enough so that A is symmetric, but large enough so that D ( A ) = D ( A ) {\displaystyle D(A^{*})=D(A)} .
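
The absence of eigenvectors in the second case can be verified directly: any eigenfunction would have to be an exponential, which the Dirichlet conditions force to vanish. Explicitly,

{\displaystyle -if'=\lambda f\implies f(x)=Ce^{i\lambda x},\qquad f(0)=0\implies C=0,}

so the only solution satisfying the boundary conditions is f ≡ 0.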

Schrödinger operators with singular potentials

A more subtle example of the distinction between symmetric and (essentially) self-adjoint operators comes from Schrödinger operators in quantum mechanics. If the potential energy is singular—particularly if the potential is unbounded below—the associated Schrödinger operator may fail to be essentially self-adjoint. In one dimension, for example, the operator

H ^ := P 2 2 m X 4 {\displaystyle {\hat {H}}:={\frac {P^{2}}{2m}}-X^{4}}

is not essentially self-adjoint on the space of smooth, rapidly decaying functions. In this case, the failure of essential self-adjointness reflects a pathology in the underlying classical system: A classical particle with a x 4 {\displaystyle -x^{4}} potential escapes to infinity in finite time. This operator does not have a unique self-adjoint extension, but it does admit self-adjoint extensions obtained by specifying "boundary conditions at infinity". (Since H ^ {\displaystyle {\hat {H}}} is a real operator, it commutes with complex conjugation. Thus, the deficiency indices are automatically equal, which is the condition for having a self-adjoint extension.)

In this case, if we initially define H ^ {\displaystyle {\hat {H}}} on the space of smooth, rapidly decaying functions, the adjoint will be "the same" operator (i.e., given by the same formula) but on the largest possible domain, namely

Dom ( H ^ ) = { twice differentiable functions  f L 2 ( R ) | ( 2 2 m d 2 f d x 2 x 4 f ( x ) ) L 2 ( R ) } . {\displaystyle \operatorname {Dom} \left({\hat {H}}^{*}\right)=\left\{{\text{twice differentiable functions }}f\in L^{2}(\mathbb {R} )\left|\left(-{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}f}{dx^{2}}}-x^{4}f(x)\right)\in L^{2}(\mathbb {R} )\right.\right\}.}

It is then possible to show that H ^ {\displaystyle {\hat {H}}^{*}} is not a symmetric operator, which certainly implies that H ^ {\displaystyle {\hat {H}}} is not essentially self-adjoint. Indeed, H ^ {\displaystyle {\hat {H}}^{*}} has eigenvectors with pure imaginary eigenvalues, which is impossible for a symmetric operator. This strange occurrence is possible because of a cancellation between the two terms in H ^ {\displaystyle {\hat {H}}^{*}} : There are functions f {\displaystyle f} in the domain of H ^ {\displaystyle {\hat {H}}^{*}} for which neither d 2 f / d x 2 {\displaystyle d^{2}f/dx^{2}} nor x 4 f ( x ) {\displaystyle x^{4}f(x)} is separately in L 2 ( R ) {\displaystyle L^{2}(\mathbb {R} )} , but the combination of them occurring in H ^ {\displaystyle {\hat {H}}^{*}} is in L 2 ( R ) {\displaystyle L^{2}(\mathbb {R} )} . This allows for H ^ {\displaystyle {\hat {H}}^{*}} to be nonsymmetric, even though both d 2 / d x 2 {\displaystyle d^{2}/dx^{2}} and X 4 {\displaystyle X^{4}} are symmetric operators. This sort of cancellation does not occur if we replace the repelling potential x 4 {\displaystyle -x^{4}} with the confining potential x 4 {\displaystyle x^{4}} .

Non-self-adjoint operators in quantum mechanics

See also: Non-Hermitian quantum mechanics

In quantum mechanics, observables correspond to self-adjoint operators. By Stone's theorem on one-parameter unitary groups, self-adjoint operators are precisely the infinitesimal generators of unitary groups of time evolution operators. However, many physical problems are formulated as a time-evolution equation involving differential operators for which the Hamiltonian is only symmetric. In such cases, either the Hamiltonian is essentially self-adjoint, in which case the physical problem has unique solutions or one attempts to find self-adjoint extensions of the Hamiltonian corresponding to different types of boundary conditions or conditions at infinity.

Example. The one-dimensional Schrödinger operator with the potential V ( x ) = ( 1 + | x | ) α {\displaystyle V(x)=-(1+|x|)^{\alpha }} , defined initially on smooth compactly supported functions, is essentially self-adjoint for 0 < α ≤ 2 but not for α > 2.

The failure of essential self-adjointness for α > 2 {\displaystyle \alpha >2} has a counterpart in the classical dynamics of a particle with potential V ( x ) {\displaystyle V(x)} : The classical particle escapes to infinity in finite time.
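
The classical statement can be made quantitative via the energy-conservation formula for the travel time: for a particle of mass m and energy E in the potential V(x) = −(1 + |x|)^α, the time to reach infinity starting from x₀ > 0 is

{\displaystyle T=\int _{x_{0}}^{\infty }{\frac {dx}{\sqrt {{\tfrac {2}{m}}\left(E+(1+x)^{\alpha }\right)}}},}

and the integrand decays like x^{−α/2}, so T is finite precisely when α > 2, matching the range in which essential self-adjointness fails.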

Example. There is no self-adjoint momentum operator p {\displaystyle p} for a particle moving on a half-line. Nevertheless, the Hamiltonian p 2 {\displaystyle p^{2}} of a "free" particle on a half-line has several self-adjoint extensions corresponding to different types of boundary conditions. Physically, these boundary conditions are related to reflections of the particle at the origin.

Examples

A symmetric operator that is not essentially self-adjoint

We first consider the Hilbert space L 2 [ 0 , 1 ] {\displaystyle L^{2}[0,1]} and the differential operator

D : ϕ 1 i ϕ {\displaystyle D:\phi \mapsto {\frac {1}{i}}\phi '}

defined on the space of continuously differentiable complex-valued functions on [0, 1], satisfying the boundary conditions

ϕ ( 0 ) = ϕ ( 1 ) = 0. {\displaystyle \phi (0)=\phi (1)=0.}

Then D is a symmetric operator as can be shown by integration by parts. The spaces N + , N − {\displaystyle N_{+},N_{-}} (defined below) are given respectively by the distributional solutions to the equations

i u = i u i u = i u {\displaystyle {\begin{aligned}-iu'&=iu\\-iu'&=-iu\end{aligned}}}

which are in L 2 [ 0 , 1 ] {\displaystyle L^{2}[0,1]} . One can show that each one of these solution spaces is 1-dimensional, generated by the functions x ↦ e − x {\displaystyle x\mapsto e^{-x}} and x ↦ e x {\displaystyle x\mapsto e^{x}} respectively. This shows that D is not essentially self-adjoint, but does have self-adjoint extensions. These self-adjoint extensions are parametrized by the space of unitary mappings N + → N − {\displaystyle N_{+}\to N_{-}} , which in this case happens to be the unit circle T.

In this case, the failure of essential self-adjointness is due to an "incorrect" choice of boundary conditions in the definition of the domain of D {\displaystyle D} . Since D {\displaystyle D} is a first-order operator, only one boundary condition is needed to ensure that D {\displaystyle D} is symmetric. If we replaced the boundary conditions given above by the single boundary condition

ϕ ( 0 ) = ϕ ( 1 ) {\displaystyle \phi (0)=\phi (1)} ,

then D would still be symmetric and would now, in fact, be essentially self-adjoint. This change of boundary conditions gives one particular essentially self-adjoint extension of D. Other essentially self-adjoint extensions come from imposing boundary conditions of the form ϕ ( 1 ) = e i θ ϕ ( 0 ) {\displaystyle \phi (1)=e^{i\theta }\phi (0)} .

This simple example illustrates a general fact about self-adjoint extensions of symmetric differential operators P on an open set M. They are determined by the unitary maps between the eigenvalue spaces

N ± = { u L 2 ( M ) : P dist u = ± i u } {\displaystyle N_{\pm }=\left\{u\in L^{2}(M):P_{\operatorname {dist} }u=\pm iu\right\}}

where Pdist is the distributional extension of P.

Constant-coefficient operators

We next give the example of differential operators with constant coefficients. Let

P ( x ) = α c α x α {\displaystyle P\left({\vec {x}}\right)=\sum _{\alpha }c_{\alpha }x^{\alpha }}

be a polynomial on R n {\displaystyle \mathbb {R} ^{n}} with real coefficients, where α ranges over a (finite) set of multi-indices. Thus

α = ( α 1 , α 2 , , α n ) {\displaystyle \alpha =(\alpha _{1},\alpha _{2},\ldots ,\alpha _{n})}

and

x α = x 1 α 1 x 2 α 2 x n α n . {\displaystyle x^{\alpha }=x_{1}^{\alpha _{1}}x_{2}^{\alpha _{2}}\cdots x_{n}^{\alpha _{n}}.}

We also use the notation

D α = 1 i | α | x 1 α 1 x 2 α 2 x n α n . {\displaystyle D^{\alpha }={\frac {1}{i^{|\alpha |}}}\partial _{x_{1}}^{\alpha _{1}}\partial _{x_{2}}^{\alpha _{2}}\cdots \partial _{x_{n}}^{\alpha _{n}}.}

Then the operator P(D) defined on the space of infinitely differentiable functions of compact support on R n {\displaystyle \mathbb {R} ^{n}} by

P ( D ) ϕ = α c α D α ϕ {\displaystyle P(\operatorname {D} )\phi =\sum _{\alpha }c_{\alpha }\operatorname {D} ^{\alpha }\phi }

is essentially self-adjoint on L 2 ( R n ) {\displaystyle L^{2}(\mathbb {R} ^{n})} .

Theorem — Let P be a polynomial function on R n {\displaystyle \mathbb {R} ^{n}} with real coefficients, and F the Fourier transform considered as a unitary map L 2 ( R n ) → L 2 ( R n ) {\displaystyle L^{2}(\mathbb {R} ^{n})\to L^{2}(\mathbb {R} ^{n})} . Then F*P(D)F is essentially self-adjoint and its unique self-adjoint extension is the operator of multiplication by the function P.

More generally, consider linear differential operators acting on infinitely differentiable complex-valued functions of compact support. If M is an open subset of R n {\displaystyle \mathbb {R} ^{n}} , such an operator takes the form

P ϕ ( x ) = ∑ α a α ( x ) [ D α ϕ ] ( x ) {\displaystyle P\phi (x)=\sum _{\alpha }a_{\alpha }(x)\left[D^{\alpha }\phi \right](x)}

where aα are (not necessarily constant) infinitely differentiable functions. P is a linear operator

C 0 ( M ) C 0 ( M ) . {\displaystyle C_{0}^{\infty }(M)\to C_{0}^{\infty }(M).}

Corresponding to P there is another differential operator, the formal adjoint of P

P f o r m ϕ = α D α ( a α ¯ ϕ ) {\displaystyle P^{\mathrm {*form} }\phi =\sum _{\alpha }D^{\alpha }\left({\overline {a_{\alpha }}}\phi \right)}

Theorem — The adjoint P* of P is a restriction of the distributional extension of the formal adjoint to an appropriate subspace of L 2 {\displaystyle L^{2}} . Specifically: dom P = { u L 2 ( M ) : P f o r m u L 2 ( M ) } . {\displaystyle \operatorname {dom} P^{*}=\left\{u\in L^{2}(M):P^{\mathrm {*form} }u\in L^{2}(M)\right\}.}

Spectral multiplicity theory

The multiplication representation of a self-adjoint operator, though extremely useful, is not a canonical representation. This suggests that it is not easy to extract from this representation a criterion to determine when self-adjoint operators A and B are unitarily equivalent. The finest-grained representation, which we now discuss, involves spectral multiplicity. This circle of results is called the Hahn–Hellinger theory of spectral multiplicity.

Uniform multiplicity

We first define uniform multiplicity:

Definition. A self-adjoint operator A has uniform multiplicity n where n is such that 1 ≤ n ≤ ω if and only if A is unitarily equivalent to the operator Mf of multiplication by the function f(λ) = λ on

L μ 2 ( R , H n ) = { ψ : R H n : ψ  measurable and  R ψ ( t ) 2 d μ ( t ) < } {\displaystyle L_{\mu }^{2}\left(\mathbf {R} ,\mathbf {H} _{n}\right)=\left\{\psi :\mathbf {R} \to \mathbf {H} _{n}:\psi {\text{ measurable and }}\int _{\mathbf {R} }\|\psi (t)\|^{2}d\mu (t)<\infty \right\}}

where Hn is a Hilbert space of dimension n. The domain of Mf consists of vector-valued functions ψ on R such that

R | λ | 2   ψ ( λ ) 2 d μ ( λ ) < . {\displaystyle \int _{\mathbf {R} }|\lambda |^{2}\ \|\psi (\lambda )\|^{2}\,d\mu (\lambda )<\infty .}

Non-negative countably additive measures μ, ν are mutually singular if and only if they are supported on disjoint Borel sets.

Theorem — Let A be a self-adjoint operator on a separable Hilbert space H. Then there is an ω sequence of countably additive finite measures on R (some of which may be identically 0) { μ } 1 ω {\displaystyle \left\{\mu _{\ell }\right\}_{1\leq \ell \leq \omega }} such that the measures are pairwise singular and A is unitarily equivalent to the operator of multiplication by the function f(λ) = λ on 1 ω L μ 2 ( R , H ) . {\displaystyle \bigoplus _{1\leq \ell \leq \omega }L_{\mu _{\ell }}^{2}\left(\mathbf {R} ,\mathbf {H} _{\ell }\right).}

This representation is unique in the following sense: For any two such representations of the same A, the corresponding measures are equivalent in the sense that they have the same sets of measure 0.

Direct integrals

The spectral multiplicity theorem can be reformulated using the language of direct integrals of Hilbert spaces:

Theorem —  Any self-adjoint operator on a separable Hilbert space is unitarily equivalent to multiplication by the function λ ↦ λ on R H λ d μ ( λ ) . {\displaystyle \int _{\mathbf {R} }^{\oplus }H_{\lambda }\,d\mu (\lambda ).}

Unlike the multiplication-operator version of the spectral theorem, the direct-integral version is unique in the sense that the measure equivalence class of μ (or equivalently its sets of measure 0) is uniquely determined and the measurable function λ d i m ( H λ ) {\displaystyle \lambda \mapsto \mathrm {dim} (H_{\lambda })} is determined almost everywhere with respect to μ. The function λ dim ( H λ ) {\displaystyle \lambda \mapsto \operatorname {dim} \left(H_{\lambda }\right)} is the spectral multiplicity function of the operator.

We may now state the classification result for self-adjoint operators: Two self-adjoint operators are unitarily equivalent if and only if (1) their spectra agree as sets, (2) the measures appearing in their direct-integral representations have the same sets of measure zero, and (3) their spectral multiplicity functions agree almost everywhere with respect to the measure in the direct integral.

Example: structure of the Laplacian

The Laplacian on R n {\displaystyle \mathbb {R} ^{n}} is the operator

Δ = i = 1 n x i 2 . {\displaystyle \Delta =\sum _{i=1}^{n}\partial _{x_{i}}^{2}.}

As remarked above, the Laplacian is diagonalized by the Fourier transform. Actually it is more natural to consider the negative of the Laplacian −Δ since as an operator it is non-negative; (see elliptic operator).

Theorem — If n = 1, then −Δ has uniform multiplicity mult = 2 {\displaystyle {\text{mult}}=2} , otherwise −Δ has uniform multiplicity mult = ω {\displaystyle {\text{mult}}=\omega } . Moreover, the measure μmult may be taken to be Lebesgue measure on [0, ∞).
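
For n = 1 the multiplicity 2 can be read off from the Fourier picture (a sketch of the standard argument): after the Fourier transform, −Δ becomes multiplication by ξ² on L²(R, dξ), and the change of variables λ = ξ² carries each of the two half-lines ξ > 0 and ξ < 0 onto (0, ∞), so that

{\displaystyle -\Delta \;\cong \;{\text{multiplication by }}\lambda {\text{ on }}L_{\mu }^{2}\left((0,\infty ),\mathbf {C} ^{2}\right),\qquad d\mu (\lambda )={\frac {d\lambda }{2{\sqrt {\lambda }}}},}

where the two components record the values of the Fourier transform at ξ = ±√λ; the measure dμ has the same null sets as Lebesgue measure on [0, ∞).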


Remarks

  1. The reader is invited to perform integration by parts twice and verify that the given boundary conditions for Dom ( A ) {\displaystyle \operatorname {Dom} (A)} ensure that the boundary terms in the integration by parts vanish.
