Differential operator

Typically a linear operator defined in terms of differentiation of functions

A harmonic function defined on an annulus. Harmonic functions are exactly those functions which lie in the kernel of the Laplace operator, an important differential operator.

In mathematics, a differential operator is an operator defined as a function of the differentiation operator. It is helpful, as a matter of notation first, to consider differentiation as an abstract operation that accepts a function and returns another function (in the style of a higher-order function in computer science).

This article considers mainly linear differential operators, which are the most common type. However, non-linear differential operators also exist, such as the Schwarzian derivative.

Definition

Given a nonnegative integer m, an order- m {\displaystyle m} linear differential operator is a map P {\displaystyle P} from a function space F 1 {\displaystyle {\mathcal {F}}_{1}} on R n {\displaystyle \mathbb {R} ^{n}} to another function space F 2 {\displaystyle {\mathcal {F}}_{2}} that can be written as:

P = | α | m a α ( x ) D α   , {\displaystyle P=\sum _{|\alpha |\leq m}a_{\alpha }(x)D^{\alpha }\ ,} where α = ( α 1 , α 2 , , α n ) {\displaystyle \alpha =(\alpha _{1},\alpha _{2},\cdots ,\alpha _{n})} is a multi-index of non-negative integers, | α | = α 1 + α 2 + + α n {\displaystyle |\alpha |=\alpha _{1}+\alpha _{2}+\cdots +\alpha _{n}} , and for each α {\displaystyle \alpha } , a α ( x ) {\displaystyle a_{\alpha }(x)} is a function on some open domain in n-dimensional space. The operator D α {\displaystyle D^{\alpha }} is interpreted as

D α = | α | x 1 α 1 x 2 α 2 x n α n {\displaystyle D^{\alpha }={\frac {\partial ^{|\alpha |}}{\partial x_{1}^{\alpha _{1}}\partial x_{2}^{\alpha _{2}}\cdots \partial x_{n}^{\alpha _{n}}}}}

Thus for a function f F 1 {\displaystyle f\in {\mathcal {F}}_{1}} :

P f = | α | m a α ( x ) | α | f x 1 α 1 x 2 α 2 x n α n {\displaystyle Pf=\sum _{|\alpha |\leq m}a_{\alpha }(x){\frac {\partial ^{|\alpha |}f}{\partial x_{1}^{\alpha _{1}}\partial x_{2}^{\alpha _{2}}\cdots \partial x_{n}^{\alpha _{n}}}}}

The notation D α {\displaystyle D^{\alpha }} is justified (i.e., independent of order of differentiation) because of the symmetry of second derivatives.
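The order-independence of mixed partials that justifies the notation D^α can be checked directly; a minimal SymPy sketch (the function and multi-index are chosen for illustration):

```python
# Applying a multi-index derivative D^alpha with SymPy.
# Symmetry of second derivatives makes the order of differentiation irrelevant.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = sp.sin(x1 * x2**2)

# D^alpha f for alpha = (1, 2): differentiate once in x1, twice in x2
alpha = (1, 2)
Df = sp.diff(f, x1, alpha[0], x2, alpha[1])

# Differentiating in the opposite order gives the same result
Df_swapped = sp.diff(f, x2, alpha[1], x1, alpha[0])
assert sp.simplify(Df - Df_swapped) == 0
```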

The polynomial p obtained by replacing partials x i {\displaystyle {\frac {\partial }{\partial x_{i}}}} by variables ξ i {\displaystyle \xi _{i}} in P is called the total symbol of P; i.e., the total symbol of P above is: p ( x , ξ ) = | α | m a α ( x ) ξ α {\displaystyle p(x,\xi )=\sum _{|\alpha |\leq m}a_{\alpha }(x)\xi ^{\alpha }} where ξ α = ξ 1 α 1 ξ n α n . {\displaystyle \xi ^{\alpha }=\xi _{1}^{\alpha _{1}}\cdots \xi _{n}^{\alpha _{n}}.} The highest homogeneous component of the symbol, namely,

σ ( x , ξ ) = | α | = m a α ( x ) ξ α {\displaystyle \sigma (x,\xi )=\sum _{|\alpha |=m}a_{\alpha }(x)\xi ^{\alpha }}

is called the principal symbol of P. While the total symbol is not intrinsically defined, the principal symbol is intrinsically defined (i.e., it is a function on the cotangent bundle).
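The passage from an operator to its total and principal symbols can be sketched for a single-variable second-order operator (SymPy; the coefficient names are illustrative):

```python
# Total and principal symbol of P = a(x) d^2/dx^2 + b(x) d/dx + c(x),
# obtained by replacing each derivative d^k/dx^k with the variable xi^k.
import sympy as sp

x, xi = sp.symbols('x xi')
a, b, c = sp.Function('a')(x), sp.Function('b')(x), sp.Function('c')(x)

# coefficients a_alpha indexed by derivative order
coeffs = {2: a, 1: b, 0: c}
m = max(coeffs)  # order of the operator

total_symbol = sum(coeff * xi**k for k, coeff in coeffs.items())
# principal symbol: the highest homogeneous component only
principal_symbol = coeffs[m] * xi**m

assert sp.simplify(total_symbol - (a*xi**2 + b*xi + c)) == 0
assert principal_symbol == a*xi**2
```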

More generally, let E and F be vector bundles over a manifold X. Then the linear operator

P : C ( E ) C ( F ) {\displaystyle P:C^{\infty }(E)\to C^{\infty }(F)}

is a differential operator of order k {\displaystyle k} if, in local coordinates on X, we have

P u ( x ) = | α | = k P α ( x ) α u x α + lower-order terms {\displaystyle Pu(x)=\sum _{|\alpha |=k}P^{\alpha }(x){\frac {\partial ^{\alpha }u}{\partial x^{\alpha }}}+{\text{lower-order terms}}}

where, for each multi-index α, P α ( x ) : E F {\displaystyle P^{\alpha }(x):E\to F} is a bundle map, symmetric on the indices α.

The kth-order coefficients of P transform as a symmetric tensor

σ P : S k ( T X ) E F {\displaystyle \sigma _{P}:S^{k}(T^{*}X)\otimes E\to F}

whose domain is the tensor product of the kth symmetric power of the cotangent bundle of X with E, and whose codomain is F. This symmetric tensor is known as the principal symbol (or just the symbol) of P.

The coordinate system x permits a local trivialization of the cotangent bundle by the coordinate differentials dx, which determine fiber coordinates ξi. In terms of a basis of frames eμ, fν of E and F, respectively, the differential operator P decomposes into components

( P u ) ν = μ P ν μ u μ {\displaystyle (Pu)_{\nu }=\sum _{\mu }P_{\nu \mu }u_{\mu }}

on each section u of E. Here Pνμ is the scalar differential operator defined by

P ν μ = α P ν μ α x α . {\displaystyle P_{\nu \mu }=\sum _{\alpha }P_{\nu \mu }^{\alpha }{\frac {\partial }{\partial x^{\alpha }}}.}

With this trivialization, the principal symbol can now be written

( σ P ( ξ ) u ) ν = | α | = k μ P ν μ α ( x ) ξ α u μ . {\displaystyle (\sigma _{P}(\xi )u)_{\nu }=\sum _{|\alpha |=k}\sum _{\mu }P_{\nu \mu }^{\alpha }(x)\xi _{\alpha }u_{\mu }.}

In the cotangent space over a fixed point x of X, the symbol σ P {\displaystyle \sigma _{P}} defines a homogeneous polynomial of degree k in T x X {\displaystyle T_{x}^{*}X} with values in Hom ( E x , F x ) {\displaystyle \operatorname {Hom} (E_{x},F_{x})} .

Fourier interpretation

A differential operator P and its symbol appear naturally in connection with the Fourier transform as follows. Let ƒ be a Schwartz function. Then by the inverse Fourier transform,

P f ( x ) = 1 ( 2 π ) d 2 R d e i x ξ p ( x , i ξ ) f ^ ( ξ ) d ξ . {\displaystyle Pf(x)={\frac {1}{(2\pi )^{\frac {d}{2}}}}\int \limits _{\mathbf {R} ^{d}}e^{ix\cdot \xi }p(x,i\xi ){\hat {f}}(\xi )\,d\xi .}

This exhibits P as a Fourier multiplier. A more general class of functions p(x, ξ), satisfying at most polynomial growth conditions in ξ, and under which this integral is well-behaved, comprises the pseudo-differential operators.
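The Fourier-multiplier view can be illustrated numerically for the constant-coefficient operator P = d/dx, whose symbol is p(iξ) = iξ; a NumPy sketch (grid size and test function chosen for illustration):

```python
# Differentiating a periodic function by multiplying its Fourier
# coefficients by the symbol i*xi, then transforming back.
import numpy as np

n = 256
x = np.linspace(0, 2*np.pi, n, endpoint=False)
f = np.sin(3*x)

# integer angular frequencies on the periodic grid
xi = np.fft.fftfreq(n, d=2*np.pi/n) * 2*np.pi
f_hat = np.fft.fft(f)
df = np.fft.ifft(1j * xi * f_hat).real   # apply the symbol i*xi

exact = 3*np.cos(3*x)
assert np.allclose(df, exact, atol=1e-8)
```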

Examples

The differential operator del, also called nabla, is an important vector differential operator. It appears frequently in physics in places like the differential form of Maxwell's equations. In three-dimensional Cartesian coordinates, del is defined as

∇ = x ^ ∂ ∂ x + y ^ ∂ ∂ y + z ^ ∂ ∂ z . {\displaystyle \nabla =\mathbf {\hat {x}} {\partial \over \partial x}+\mathbf {\hat {y}} {\partial \over \partial y}+\mathbf {\hat {z}} {\partial \over \partial z}.}

Del defines the gradient, and is used to calculate the curl, divergence, and Laplacian of various objects.

History

The conceptual step of writing a differential operator as something free-standing is attributed to Louis François Antoine Arbogast in 1800.

Notations

The most common differential operator is the action of taking the derivative. Common notations for taking the first derivative with respect to a variable x include:

d d x {\displaystyle {d \over dx}} , D {\displaystyle D} , D x , {\displaystyle D_{x},} and x {\displaystyle \partial _{x}} .

When taking higher, nth order derivatives, the operator may be written:

d n d x n {\displaystyle {d^{n} \over dx^{n}}} , D n {\displaystyle D^{n}} , D x n {\displaystyle D_{x}^{n}} , or x n {\displaystyle \partial _{x}^{n}} .

The derivative of a function f of an argument x is sometimes given as either of the following:

[ f ( x ) ] ′ {\displaystyle \left[f(x)\right]'} or
f ′ ( x ) . {\displaystyle f'(x).}

The D notation's use and creation is credited to Oliver Heaviside, who considered differential operators of the form

k = 0 n c k D k {\displaystyle \sum _{k=0}^{n}c_{k}D^{k}}

in his study of differential equations.

One of the most frequently seen differential operators is the Laplacian operator, defined by

Δ = 2 = k = 1 n 2 x k 2 . {\displaystyle \Delta =\nabla ^{2}=\sum _{k=1}^{n}{\frac {\partial ^{2}}{\partial x_{k}^{2}}}.}
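A small SymPy sketch of applying the Laplacian, including a harmonic function that lies in its kernel (as in the figure caption above):

```python
# The Laplacian as the sum of pure second partials.
import sympy as sp

x, y, z = sp.symbols('x y z')

def laplacian(f, variables):
    """Sum of the pure second partial derivatives of f."""
    return sum(sp.diff(f, v, 2) for v in variables)

# Harmonic function: the real part of (x + i*y)^2 is x^2 - y^2
u = x**2 - y**2
assert laplacian(u, (x, y)) == 0

# A non-harmonic example
assert laplacian(x**2 + y**2 + z**2, (x, y, z)) == 6
```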

Another differential operator is the Θ operator, or theta operator, defined by

Θ = z d d z . {\displaystyle \Theta =z{d \over dz}.}

This is sometimes also called the homogeneity operator, because its eigenfunctions are the monomials in z: Θ ( z k ) = k z k , k = 0 , 1 , 2 , {\displaystyle \Theta (z^{k})=kz^{k},\quad k=0,1,2,\dots }

In n variables the homogeneity operator is given by Θ = k = 1 n x k x k . {\displaystyle \Theta =\sum _{k=1}^{n}x_{k}{\frac {\partial }{\partial x_{k}}}.}

As in one variable, the eigenspaces of Θ are the spaces of homogeneous functions. (Euler's homogeneous function theorem)
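Both the one-variable eigenfunction property and Euler's homogeneous function theorem can be checked symbolically; a SymPy sketch:

```python
# The theta operator z*d/dz has the monomials z^k as eigenfunctions
# with eigenvalue k.
import sympy as sp

z = sp.Symbol('z')

def theta(f):
    return sp.expand(z * sp.diff(f, z))

for k in range(5):
    assert theta(z**k) == k * z**k

# In several variables, a homogeneous function of degree d is an
# eigenfunction of the homogeneity operator with eigenvalue d:
x, y = sp.symbols('x y')
g = x**2*y + x*y**2          # homogeneous of degree 3
assert sp.expand(x*sp.diff(g, x) + y*sp.diff(g, y) - 3*g) == 0
```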

In writing, following common mathematical convention, the argument of a differential operator is usually placed on the right side of the operator itself. Sometimes an alternative notation is used: the result of applying the operator to the function on its left side, the result of applying it to the function on its right side, and the difference obtained when applying it to the functions on both sides are denoted by arrows as follows:

f x g = g x f {\displaystyle f{\overleftarrow {\partial _{x}}}g=g\cdot \partial _{x}f}
f x g = f x g {\displaystyle f{\overrightarrow {\partial _{x}}}g=f\cdot \partial _{x}g}
f x g = f x g g x f . {\displaystyle f{\overleftrightarrow {\partial _{x}}}g=f\cdot \partial _{x}g-g\cdot \partial _{x}f.}

Such a bidirectional-arrow notation is frequently used for describing the probability current of quantum mechanics.

Adjoint of an operator

See also: Hermitian adjoint

Given a linear differential operator T {\displaystyle T} T u = k = 0 n a k ( x ) D k u {\displaystyle Tu=\sum _{k=0}^{n}a_{k}(x)D^{k}u} the adjoint of this operator is defined as the operator T {\displaystyle T^{*}} such that T u , v = u , T v {\displaystyle \langle Tu,v\rangle =\langle u,T^{*}v\rangle } where the notation , {\displaystyle \langle \cdot ,\cdot \rangle } is used for the scalar product or inner product. This definition therefore depends on the definition of the scalar product (or inner product).

Formal adjoint in one variable

In the functional space of square-integrable functions on a real interval (a, b), the scalar product is defined by f , g = a b f ( x ) ¯ g ( x ) d x , {\displaystyle \langle f,g\rangle =\int _{a}^{b}{\overline {f(x)}}\,g(x)\,dx,}

where the line over f(x) denotes the complex conjugate of f(x). If one moreover adds the condition that f or g vanishes as x a {\displaystyle x\to a} and x b {\displaystyle x\to b} , one can also define the adjoint of T by T u = k = 0 n ( 1 ) k D k [ a k ( x ) ¯ u ] . {\displaystyle T^{*}u=\sum _{k=0}^{n}(-1)^{k}D^{k}\left[{\overline {a_{k}(x)}}\,u\right].}

This formula does not explicitly depend on the definition of the scalar product. It is therefore sometimes chosen as a definition of the adjoint operator. When T {\displaystyle T^{*}} is defined according to this formula, it is called the formal adjoint of T.

A (formally) self-adjoint operator is an operator equal to its own (formal) adjoint.

Several variables

If Ω is a domain in R n {\displaystyle \mathbb {R} ^{n}} , and P a differential operator on Ω, then the adjoint of P is defined in L 2 ( Ω ) {\displaystyle L^{2}(\Omega )} by duality in the analogous manner:

f , P g L 2 ( Ω ) = P f , g L 2 ( Ω ) {\displaystyle \langle f,P^{*}g\rangle _{L^{2}(\Omega )}=\langle Pf,g\rangle _{L^{2}(\Omega )}}

for all smooth L 2 {\displaystyle L^{2}} functions f, g. Since smooth functions are dense in L 2 {\displaystyle L^{2}} , this defines the adjoint on a dense subset of L 2 ( Ω ) {\displaystyle L^{2}(\Omega )} : P ∗ {\displaystyle P^{*}} is a densely defined operator.

Example

The Sturm–Liouville operator is a well-known example of a formally self-adjoint operator. This second-order linear differential operator L can be written in the form

L u = ( p u ) + q u = ( p u + p u ) + q u = p u p u + q u = ( p ) D 2 u + ( p ) D u + ( q ) u . {\displaystyle Lu=-(pu')'+qu=-(pu''+p'u')+qu=-pu''-p'u'+qu=(-p)D^{2}u+(-p')Du+(q)u.}

That L equals its formal adjoint can be proven using the formal adjoint definition above.
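A SymPy sketch of this verification, assuming real coefficients so that the complex conjugation in the formal adjoint can be dropped:

```python
# Verifying that the Sturm-Liouville operator L u = -(p u')' + q u
# equals its formal adjoint T* u = sum_k (-1)^k D^k [a_k u]
# (real coefficients assumed).
import sympy as sp

x = sp.Symbol('x')
u = sp.Function('u')(x)
p = sp.Function('p')(x)
q = sp.Function('q')(x)

# L in expanded form has coefficients a_2 = -p, a_1 = -p', a_0 = q
a = {2: -p, 1: -sp.diff(p, x), 0: q}

Lu = sum(a[k] * sp.diff(u, x, k) for k in a)
# the formal adjoint applied to u
L_adj_u = sum((-1)**k * sp.diff(a[k] * u, x, k) for k in a)

assert sp.simplify(Lu - L_adj_u) == 0
```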

This operator is central to Sturm–Liouville theory, where the eigenfunctions (analogues of eigenvectors) of this operator are considered.

Properties

Differentiation is linear, i.e.

D ( f + g ) = ( D f ) + ( D g ) , {\displaystyle D(f+g)=(Df)+(Dg),}
D ( a f ) = a ( D f ) , {\displaystyle D(af)=a(Df),}

where f and g are functions, and a is a constant.

Any polynomial in D with function coefficients is also a differential operator. We may also compose differential operators by the rule

( D 1 D 2 ) ( f ) = D 1 ( D 2 ( f ) ) . {\displaystyle (D_{1}\circ D_{2})(f)=D_{1}(D_{2}(f)).}

Some care is then required: firstly, any function coefficients in the operator D 2 {\displaystyle D_{2}} must be differentiable as many times as the application of D 1 {\displaystyle D_{1}} requires. To get a ring of such operators we must assume derivatives of all orders of the coefficients used. Secondly, this ring will not be commutative: an operator gD is not in general the same as Dg. For example, we have the relation, basic in quantum mechanics:

D x x D = 1. {\displaystyle Dx-xD=1.}
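This relation can be verified by applying both sides to an arbitrary function; a SymPy sketch:

```python
# The commutator relation D x - x D = 1, checked on an arbitrary
# function: (Dx - xD) f = d/dx (x f) - x d/dx f = f.
import sympy as sp

x = sp.Symbol('x')
f = sp.Function('f')(x)

lhs = sp.diff(x * f, x) - x * sp.diff(f, x)
assert sp.simplify(lhs - f) == 0   # so Dx - xD acts as the identity
```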

The subring of operators that are polynomials in D with constant coefficients is, by contrast, commutative. It can be characterised another way: it consists of the translation-invariant operators.

The differential operators also obey the shift theorem.

Ring of polynomial differential operators

Ring of univariate polynomial differential operators

Main article: Weyl algebra

If R is a ring, let R D , X {\displaystyle R\langle D,X\rangle } be the non-commutative polynomial ring over R in the variables D and X, and I the two-sided ideal generated by DX − XD − 1. Then the ring of univariate polynomial differential operators over R is the quotient ring R D , X / I {\displaystyle R\langle D,X\rangle /I} . This is a non-commutative simple ring. Every element can be written in a unique way as an R-linear combination of monomials of the form X a D b  mod  I {\displaystyle X^{a}D^{b}{\text{ mod }}I} . It supports an analogue of Euclidean division of polynomials.
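The unique normal form X^a D^b can be computed by repeatedly applying the relation DX = XD + 1 (equivalently, D^b X = X D^b + b D^(b−1)); a sketch in plain Python (the normal_order helper is illustrative, not a standard API):

```python
# Normal ordering in the Weyl algebra: rewrite any word in D and X
# as an integer combination of monomials X^a D^b, represented as a
# dictionary mapping (a, b) -> coefficient.
def normal_order(word):
    """word: string of 'D'/'X' characters; returns {(a, b): coeff}."""
    result = {(0, 0): 1}                       # start from the identity
    for letter in word:                        # multiply on the right
        new = {}
        for (a, b), c in result.items():
            if letter == 'D':
                new[(a, b + 1)] = new.get((a, b + 1), 0) + c
            else:  # multiply X^a D^b by X, using D^b X = X D^b + b D^(b-1)
                new[(a + 1, b)] = new.get((a + 1, b), 0) + c
                if b > 0:
                    new[(a, b - 1)] = new.get((a, b - 1), 0) + c * b
        result = {k: v for k, v in new.items() if v != 0}
    return result

# DX = XD + 1, while XD is already in normal form
assert normal_order('DX') == {(1, 1): 1, (0, 0): 1}
assert normal_order('XD') == {(1, 1): 1}
```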

Differential modules over R [ X ] {\displaystyle R[X]} (for the standard derivation) can be identified with modules over R D , X / I {\displaystyle R\langle D,X\rangle /I} .

Ring of multivariate polynomial differential operators

If R is a ring, let R D 1 , , D n , X 1 , , X n {\displaystyle R\langle D_{1},\ldots ,D_{n},X_{1},\ldots ,X_{n}\rangle } be the non-commutative polynomial ring over R in the variables D 1 , , D n , X 1 , , X n {\displaystyle D_{1},\ldots ,D_{n},X_{1},\ldots ,X_{n}} , and I the two-sided ideal generated by the elements

( D i X j X j D i ) δ i , j ,       D i D j D j D i ,       X i X j X j X i {\displaystyle (D_{i}X_{j}-X_{j}D_{i})-\delta _{i,j},\ \ \ D_{i}D_{j}-D_{j}D_{i},\ \ \ X_{i}X_{j}-X_{j}X_{i}}

for all 1 i , j n , {\displaystyle 1\leq i,j\leq n,} where δ {\displaystyle \delta } is Kronecker delta. Then the ring of multivariate polynomial differential operators over R is the quotient ring R D 1 , , D n , X 1 , , X n / I {\displaystyle R\langle D_{1},\ldots ,D_{n},X_{1},\ldots ,X_{n}\rangle /I} .

This is a non-commutative simple ring. Every element can be written in a unique way as an R-linear combination of monomials of the form X 1 a 1 X n a n D 1 b 1 D n b n {\displaystyle X_{1}^{a_{1}}\ldots X_{n}^{a_{n}}D_{1}^{b_{1}}\ldots D_{n}^{b_{n}}} .

Coordinate-independent description

In differential geometry and algebraic geometry it is often convenient to have a coordinate-independent description of differential operators between two vector bundles. Let E and F be two vector bundles over a differentiable manifold M. An R-linear mapping of sections P : Γ(E) → Γ(F) is said to be a kth-order linear differential operator if it factors through the jet bundle J k ( E ) {\displaystyle J^{k}(E)} . In other words, there exists a linear mapping of vector bundles

i P : J k ( E ) F {\displaystyle i_{P}:J^{k}(E)\to F}

such that

P = i P j k {\displaystyle P=i_{P}\circ j^{k}}

where j k : Γ ( E ) → Γ ( J k ( E ) ) {\displaystyle j^{k}:\Gamma (E)\to \Gamma (J^{k}(E))} is the prolongation that associates to any section of E its k-jet.

This just means that for a given section s of E, the value of P(s) at a point x ∈ M is fully determined by the kth-order infinitesimal behavior of s at x. In particular this implies that P(s)(x) is determined by the germ of s at x, which is expressed by saying that differential operators are local. A foundational result is the Peetre theorem showing that the converse is also true: any (linear) local operator is differential.

Relation to commutative algebra

An equivalent, but purely algebraic description of linear differential operators is as follows: an R-linear map P is a kth-order linear differential operator, if for any k + 1 smooth functions f 0 , , f k C ( M ) {\displaystyle f_{0},\ldots ,f_{k}\in C^{\infty }(M)} we have

[ f k , [ f k − 1 , [ ⋯ [ f 0 , P ] ⋯ ] ] ] = 0. {\displaystyle [f_{k},[f_{k-1},[\cdots [f_{0},P]\cdots ]]]=0.}

Here the bracket [ f , P ] : Γ ( E ) Γ ( F ) {\displaystyle [f,P]:\Gamma (E)\to \Gamma (F)} is defined as the commutator

[ f , P ] ( s ) = P ( f s ) f P ( s ) . {\displaystyle [f,P](s)=P(f\cdot s)-f\cdot P(s).}

This characterization of linear differential operators shows that they are particular mappings between modules over a commutative algebra, allowing the concept to be seen as a part of commutative algebra.
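This characterization can be illustrated for the first-order operator P = d/dx, where the double bracket [f1, [f0, P]] already vanishes; a SymPy sketch (function names are illustrative):

```python
# Checking the algebraic characterization of differential operators:
# for P = d/dx, the single bracket [f0, P] is multiplication by a
# function (order 0), and the double bracket [f1, [f0, P]] is zero,
# confirming that P has order <= 1.
import sympy as sp

x = sp.Symbol('x')
s = sp.Function('s')(x)
f0 = sp.Function('f0')(x)
f1 = sp.Function('f1')(x)

P = lambda g: sp.diff(g, x)

def bracket(f, Op):
    """[f, Op](s) = Op(f*s) - f*Op(s), returned as a new operator."""
    return lambda g: Op(f * g) - f * Op(g)

inner = bracket(f0, P)          # [f0, P]: multiplication by f0'
double = bracket(f1, inner)     # [f1, [f0, P]]: the zero operator

assert sp.simplify(double(s)) == 0
```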

Variants

A differential operator of infinite order

A differential operator of infinite order is (roughly) a differential operator whose total symbol is a power series instead of a polynomial.

Bidifferential operator

A differential operator acting on two functions D ( g , f ) {\displaystyle D(g,f)} is called a bidifferential operator. The notion appears, for instance, in an associative algebra structure on a deformation quantization of a Poisson algebra.

Microdifferential operator

A microdifferential operator is a type of operator on an open subset of a cotangent bundle, as opposed to an open subset of a manifold. It is obtained by extending the notion of a differential operator to the cotangent bundle.

Notes

  1. Hörmander 1983, p. 151.
  2. Schapira 1985, 1.1.7
  3. James Gasser (editor), A Boole Anthology: Recent and classical studies in the logic of George Boole (2000), p. 169; Google Books.
  4. E. W. Weisstein. "Theta Operator". Retrieved 2009-06-12.
  5. L ∗ u = ( − 1 ) 2 D 2 [ ( − p ) u ] + ( − 1 ) 1 D [ ( − p ′ ) u ] + ( − 1 ) 0 ( q u ) = − ( p u ) ″ + ( p ′ u ) ′ + q u = − p u ″ − p ′ u ′ + q u = − ( p u ′ ) ′ + q u = L u {\displaystyle {\begin{aligned}L^{*}u&{}=(-1)^{2}D^{2}[(-p)u]+(-1)^{1}D[(-p')u]+(-1)^{0}(qu)\\&{}=-D^{2}(pu)+D(p'u)+qu\\&{}=-(pu)''+(p'u)'+qu\\&{}=-p''u-2p'u'-pu''+p''u+p'u'+qu\\&{}=-p'u'-pu''+qu\\&{}=-(pu')'+qu\\&{}=Lu\end{aligned}}}
  6. Omori, Hideki; Maeda, Y.; Yoshioka, A. (1992). "Deformation quantization of Poisson algebras". Proceedings of the Japan Academy, Series A, Mathematical Sciences. 68 (5). doi:10.3792/PJAA.68.97. S2CID 119540529.
  7. Schapira 1985, § 1.2. § 1.3.
