
Divergence theorem

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license.

In vector calculus, the divergence theorem, also known as Gauss's theorem or Ostrogradsky's theorem, is a theorem relating the flux of a vector field through a closed surface to the divergence of the field in the volume enclosed.

More precisely, the divergence theorem states that the surface integral of a vector field over a closed surface, which is called the "flux" through the surface, is equal to the volume integral of the divergence over the region enclosed by the surface. Intuitively, it states that "the sum of all sources of the field in a region (with sinks regarded as negative sources) gives the net flux out of the region".

The divergence theorem is an important result for the mathematics of physics and engineering, particularly in electrostatics and fluid dynamics. In these fields, it is usually applied in three dimensions. However, it generalizes to any number of dimensions. In one dimension, it is equivalent to the fundamental theorem of calculus. In two dimensions, it is equivalent to Green's theorem.

Explanation using liquid flow

See also: Sources and sinks

Vector fields are often illustrated using the example of the velocity field of a fluid, such as a gas or liquid. A moving liquid has a velocity—a speed and a direction—at each point, which can be represented by a vector, so that the velocity of the liquid at any moment forms a vector field. Consider an imaginary closed surface S inside a body of liquid, enclosing a volume of liquid. The flux of liquid out of the volume at any time is equal to the volume rate of fluid crossing this surface, i.e., the surface integral of the velocity over the surface.

Since liquids are incompressible, the amount of liquid inside a closed volume is constant; if there are no sources or sinks inside the volume then the flux of liquid out of S is zero. If the liquid is moving, it may flow into the volume at some points on the surface S and out of the volume at other points, but the amounts flowing in and out at any moment are equal, so the net flux of liquid out of the volume is zero.

However if a source of liquid is inside the closed surface, such as a pipe through which liquid is introduced, the additional liquid will exert pressure on the surrounding liquid, causing an outward flow in all directions. This will cause a net outward flow through the surface S. The flux outward through S equals the volume rate of flow of fluid into S from the pipe. Similarly if there is a sink or drain inside S, such as a pipe which drains the liquid off, the external pressure of the liquid will cause a velocity throughout the liquid directed inward toward the location of the drain. The volume rate of flow of liquid inward through the surface S equals the rate of liquid removed by the sink.

If there are multiple sources and sinks of liquid inside S, the flux through the surface can be calculated by adding up the volume rate of liquid added by the sources and subtracting the rate of liquid drained off by the sinks. The volume rate of flow of liquid through a source or sink (with the flow through a sink given a negative sign) is equal to the divergence of the velocity field at the pipe mouth, so adding up (integrating) the divergence of the liquid throughout the volume enclosed by S equals the volume rate of flux through S. This is the divergence theorem.

The divergence theorem is employed in any conservation law which states that the total volume of all sinks and sources, that is the volume integral of the divergence, is equal to the net flow across the volume's boundary.

Mathematical statement

A region V bounded by the surface S = ∂V with the surface normal n

Suppose V is a subset of ℝ^n (in the case of n = 3, V represents a volume in three-dimensional space) which is compact and has a piecewise smooth boundary S (also indicated with ∂V = S). If F is a continuously differentiable vector field defined on a neighborhood of V, then:

\iiint_{V} (\nabla \cdot \mathbf{F})\,\mathrm{d}V = \oiint_{S} (\mathbf{F} \cdot \mathbf{\hat{n}})\,\mathrm{d}S .

The left side is a volume integral over the volume V, and the right side is the surface integral over the boundary of the volume V. The closed, measurable set ∂V is oriented by outward-pointing normals, and n̂ is the outward-pointing unit normal at almost every point on the boundary ∂V. (dS may be used as a shorthand for n dS.) In terms of the intuitive description above, the left-hand side of the equation represents the total of the sources in the volume V, and the right-hand side represents the total flow across the boundary S.
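For a concrete check, both sides of the identity can be evaluated numerically for a simple field on an axis-aligned box. The field F = (xy, yz, zx) and the unit cube below are arbitrary choices, and the sketch relies on SciPy quadrature (assuming a SciPy version that accepts constant integration limits).

# Numerical check of the divergence theorem on the unit cube [0,1]^3
# for the arbitrarily chosen field F = (x*y, y*z, z*x), with div F = x + y + z.
import numpy as np
from scipy.integrate import tplquad, dblquad

Fx = lambda x, y, z: x * y
Fy = lambda x, y, z: y * z
Fz = lambda x, y, z: z * x
div_F = lambda x, y, z: x + y + z

# Left side: volume integral of div F over the cube.
volume_integral, _ = tplquad(lambda z, y, x: div_F(x, y, z), 0, 1, 0, 1, 0, 1)

# Right side: outward flux through the six faces (outward normals +/- e_x, e_y, e_z).
flux  = dblquad(lambda z, y: Fx(1, y, z), 0, 1, 0, 1)[0]   # face x = 1
flux -= dblquad(lambda z, y: Fx(0, y, z), 0, 1, 0, 1)[0]   # face x = 0
flux += dblquad(lambda z, x: Fy(x, 1, z), 0, 1, 0, 1)[0]   # face y = 1
flux -= dblquad(lambda z, x: Fy(x, 0, z), 0, 1, 0, 1)[0]   # face y = 0
flux += dblquad(lambda y, x: Fz(x, y, 1), 0, 1, 0, 1)[0]   # face z = 1
flux -= dblquad(lambda y, x: Fz(x, y, 0), 0, 1, 0, 1)[0]   # face z = 0

print(volume_integral, flux)   # both approach 1.5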

Informal derivation

The divergence theorem follows from the fact that if a volume V is partitioned into separate parts, the flux out of the original volume is equal to the algebraic sum of the flux out of each component volume. This is true despite the fact that the new subvolumes have surfaces that were not part of the original volume's surface, because these surfaces are just partitions between two of the subvolumes and the flux through them just passes from one volume to the other and so cancels out when the flux out of the subvolumes is summed.

A volume divided into two subvolumes. At right the two subvolumes are separated to show the flux out of the different surfaces.

See the diagram. A closed, bounded volume V is divided into two volumes V1 and V2 by a surface S3 (green). The flux Φ(Vi) out of each component region Vi is equal to the sum of the flux through its two faces, so the sum of the flux out of the two parts is

\Phi(V_{1}) + \Phi(V_{2}) = \Phi_{1} + \Phi_{31} + \Phi_{2} + \Phi_{32}

where Φ1 and Φ2 are the flux out of surfaces S1 and S2, Φ31 is the flux through S3 out of volume 1, and Φ32 is the flux through S3 out of volume 2. The point is that surface S3 is part of the surface of both volumes. The "outward" direction of the normal vector n ^ {\displaystyle \mathbf {\hat {n}} } is opposite for each volume, so the flux out of one through S3 is equal to the negative of the flux out of the other so these two fluxes cancel in the sum.

\Phi_{31} = \iint_{S_{3}} \mathbf{F} \cdot \mathbf{\hat{n}}\;\mathrm{d}S = -\iint_{S_{3}} \mathbf{F} \cdot (-\mathbf{\hat{n}})\;\mathrm{d}S = -\Phi_{32}

Therefore:

\Phi(V_{1}) + \Phi(V_{2}) = \Phi_{1} + \Phi_{2}

Since the union of surfaces S1 and S2 is S,

\Phi(V_{1}) + \Phi(V_{2}) = \Phi(V)
The volume can be divided into any number of subvolumes and the flux out of V is equal to the sum of the flux out of each subvolume, because the flux through the green surfaces cancels out in the sum. In (b) the volumes are shown separated slightly, illustrating that each green partition is part of the boundary of two adjacent volumes

This principle applies to a volume divided into any number of parts, as shown in the diagram. Since the integral over each internal partition (green surfaces) appears with opposite signs in the flux of the two adjacent volumes, they cancel out, and the only contribution to the flux is the integral over the external surfaces (grey). Since the external surfaces of all the component volumes together make up the original surface,

\Phi(V) = \sum_{V_{i} \subset V} \Phi(V_{i})
As the volume is subdivided into smaller parts, the ratio of the flux Φ(V_i) out of each volume to the volume |V_i| approaches div F.

The flux Φ out of each volume is the surface integral of the vector field F(x) over the surface

\iint_{S(V)} \mathbf{F} \cdot \mathbf{\hat{n}}\;\mathrm{d}S = \sum_{V_{i} \subset V} \iint_{S(V_{i})} \mathbf{F} \cdot \mathbf{\hat{n}}\;\mathrm{d}S

The goal is to divide the original volume into infinitely many infinitesimal volumes. As the volume is divided into smaller and smaller parts, the surface integral on the right, the flux out of each subvolume, approaches zero because the surface area S(V_i) approaches zero. However, from the definition of divergence, the ratio of flux to volume, \frac{\Phi(V_{i})}{|V_{i}|} = \frac{1}{|V_{i}|}\iint_{S(V_{i})} \mathbf{F} \cdot \mathbf{\hat{n}}\;\mathrm{d}S, the part in parentheses below, does not in general vanish but approaches the divergence div F as the volume approaches zero.

\iint_{S(V)} \mathbf{F} \cdot \mathbf{\hat{n}}\;\mathrm{d}S = \sum_{V_{i} \subset V} \left( \frac{1}{|V_{i}|} \iint_{S(V_{i})} \mathbf{F} \cdot \mathbf{\hat{n}}\;\mathrm{d}S \right) |V_{i}|

As long as the vector field F(x) has continuous derivatives, the sum above holds even in the limit when the volume is divided into infinitely small increments

\iint_{S(V)} \mathbf{F} \cdot \mathbf{\hat{n}}\;\mathrm{d}S = \lim_{|V_{i}| \to 0} \sum_{V_{i} \subset V} \left( \frac{1}{|V_{i}|} \iint_{S(V_{i})} \mathbf{F} \cdot \mathbf{\hat{n}}\;\mathrm{d}S \right) |V_{i}|

As |V_i| approaches zero volume, it becomes the infinitesimal dV, the part in parentheses becomes the divergence, and the sum becomes a volume integral over V:

\iint_{S(V)} \mathbf{F} \cdot \mathbf{\hat{n}}\;\mathrm{d}S = \iiint_{V} \operatorname{div} \mathbf{F}\;\mathrm{d}V

Since this derivation is coordinate free, it shows that the divergence does not depend on the coordinates used.
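The limiting ratio of flux to volume that appears in this derivation can also be observed numerically: shrink a small cube around a point and compare the flux out of the cube, divided by its volume, with the divergence at that point. The field, the point, and the midpoint-rule resolution below are arbitrary choices in this sketch.

# Illustration: flux out of a small cube around a point, divided by the cube's
# volume, approaches div F at that point as the cube shrinks.
import numpy as np

def F(p):
    x, y, z = p
    return np.array([x**2, x * y, np.sin(z)])      # arbitrary smooth field

def div_F(p):
    x, y, z = p
    return 3 * x + np.cos(z)                       # its analytic divergence

def flux_over_volume(p, h, m=20):
    """Flux of F out of the cube of side h centred at p, divided by h**3.
    Each face integral is approximated with an m-by-m midpoint rule."""
    p = np.asarray(p, dtype=float)
    t = (np.arange(m) + 0.5) / m * h - h / 2       # face midpoints in [-h/2, h/2]
    flux = 0.0
    for axis in range(3):
        for sign in (+1.0, -1.0):
            u, v = np.meshgrid(t, t, indexing="ij")
            pts = np.tile(p, (m * m, 1))
            other = [a for a in range(3) if a != axis]
            pts[:, other[0]] += u.ravel()
            pts[:, other[1]] += v.ravel()
            pts[:, axis] += sign * h / 2           # move onto the face
            normal_component = np.array([F(q)[axis] for q in pts]) * sign
            flux += normal_component.mean() * h * h
    return flux / h**3

p0 = (0.3, -0.7, 1.2)
for h in (0.5, 0.1, 0.02):
    print(h, flux_over_volume(p0, h), div_F(p0))   # ratio approaches div F(p0)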

Proofs

For bounded open subsets of Euclidean space

We are going to prove the following:

Theorem — Let Ω ⊂ ℝ^n be open and bounded with C¹ boundary. If u is C¹ on an open neighborhood O of Ω̄, that is, u ∈ C¹(O), then for each i ∈ {1, …, n},

\int_{\Omega} u_{x_{i}}\,dV = \int_{\partial\Omega} u\,\nu_{i}\,dS,

where ν : ∂Ω → ℝ^n is the outward pointing unit normal vector to ∂Ω. Equivalently,

\int_{\Omega} \nabla u\,dV = \int_{\partial\Omega} u\,\nu\,dS.

Proof of Theorem.

  1. The first step is to reduce to the case where u ∈ C¹_c(ℝ^n). Pick φ ∈ C^∞_c(O) such that φ = 1 on Ω̄. Note that φu ∈ C¹_c(O) ⊂ C¹_c(ℝ^n) and φu = u on Ω̄. Hence it suffices to prove the theorem for φu, so we may assume that u ∈ C¹_c(ℝ^n).
  2. Let x₀ ∈ ∂Ω be arbitrary. The assumption that Ω̄ has C¹ boundary means that there is an open neighborhood U of x₀ in ℝ^n such that ∂Ω ∩ U is the graph of a C¹ function with Ω ∩ U lying on one side of this graph. More precisely, this means that after a translation and rotation of Ω, there are r > 0 and h > 0 and a C¹ function g : ℝ^{n−1} → ℝ, such that with the notation x' = (x₁, …, x_{n−1}), it holds that

     U = \{x \in \mathbb{R}^{n} : |x'| < r \text{ and } |x_{n} - g(x')| < h\}

     and, for x ∈ U,

     x_{n} = g(x') \implies x \in \partial\Omega,
     -h < x_{n} - g(x') < 0 \implies x \in \Omega,
     0 < x_{n} - g(x') < h \implies x \notin \Omega.

     Since ∂Ω is compact, we can cover ∂Ω with finitely many neighborhoods U₁, …, U_N of the above form. Note that {Ω, U₁, …, U_N} is an open cover of Ω̄ = Ω ∪ ∂Ω. By using a C^∞ partition of unity subordinate to this cover, it suffices to prove the theorem in the case where either u has compact support in Ω or u has compact support in some U_j. If u has compact support in Ω, then for all i ∈ {1, …, n},

     \int_{\Omega} u_{x_{i}}\,dV = \int_{\mathbb{R}^{n}} u_{x_{i}}\,dV = \int_{\mathbb{R}^{n-1}} \int_{-\infty}^{\infty} u_{x_{i}}(x)\,dx_{i}\,dx' = 0

     by the fundamental theorem of calculus, and \int_{\partial\Omega} u\,\nu_{i}\,dS = 0 since u vanishes on a neighborhood of ∂Ω. Thus the theorem holds for u with compact support in Ω, and we have reduced to the case where u has compact support in some U_j.
  3. So assume u has compact support in some U_j. The last step now is to show that the theorem is true by direct computation. Change notation to U = U_j, and bring in the notation from (2) used to describe U. Note that this means that we have rotated and translated Ω. This is a valid reduction since the theorem is invariant under rotations and translations of coordinates. Since u(x) = 0 for |x'| ≥ r and for |x_n − g(x')| ≥ h, we have for each i ∈ {1, …, n} that

     \int_{\Omega} u_{x_{i}}\,dV = \int_{|x'|<r} \int_{g(x')-h}^{g(x')} u_{x_{i}}(x', x_{n})\,dx_{n}\,dx' = \int_{\mathbb{R}^{n-1}} \int_{-\infty}^{g(x')} u_{x_{i}}(x', x_{n})\,dx_{n}\,dx'.

     For i = n we have by the fundamental theorem of calculus that

     \int_{\mathbb{R}^{n-1}} \int_{-\infty}^{g(x')} u_{x_{n}}(x', x_{n})\,dx_{n}\,dx' = \int_{\mathbb{R}^{n-1}} u(x', g(x'))\,dx'.

     Now fix i ∈ {1, …, n−1}. Note that

     \int_{\mathbb{R}^{n-1}} \int_{-\infty}^{g(x')} u_{x_{i}}(x', x_{n})\,dx_{n}\,dx' = \int_{\mathbb{R}^{n-1}} \int_{-\infty}^{0} u_{x_{i}}(x', g(x')+s)\,ds\,dx'.

     Define v : ℝ^n → ℝ by v(x', s) = u(x', g(x') + s). By the chain rule,

     v_{x_{i}}(x', s) = u_{x_{i}}(x', g(x')+s) + u_{x_{n}}(x', g(x')+s)\,g_{x_{i}}(x').

     But since v has compact support, we can integrate out dx_i first to deduce that

     \int_{\mathbb{R}^{n-1}} \int_{-\infty}^{0} v_{x_{i}}(x', s)\,ds\,dx' = 0.

     Thus

     \int_{\mathbb{R}^{n-1}} \int_{-\infty}^{0} u_{x_{i}}(x', g(x')+s)\,ds\,dx' = \int_{\mathbb{R}^{n-1}} \int_{-\infty}^{0} -u_{x_{n}}(x', g(x')+s)\,g_{x_{i}}(x')\,ds\,dx' = \int_{\mathbb{R}^{n-1}} -u(x', g(x'))\,g_{x_{i}}(x')\,dx'.

     In summary, with ∇u = (u_{x_1}, …, u_{x_n}) we have

     \int_{\Omega} \nabla u\,dV = \int_{\mathbb{R}^{n-1}} \int_{-\infty}^{g(x')} \nabla u\,dx_{n}\,dx' = \int_{\mathbb{R}^{n-1}} u(x', g(x'))\,(-\nabla g(x'), 1)\,dx'.

     Recall that the outward unit normal to the graph Γ of g at a point (x', g(x')) ∈ Γ is

     \nu(x', g(x')) = \frac{1}{\sqrt{1+|\nabla g(x')|^{2}}}\,(-\nabla g(x'), 1)

     and that the surface element dS is given by dS = \sqrt{1+|\nabla g(x')|^{2}}\,dx'. Thus

     \int_{\Omega} \nabla u\,dV = \int_{\partial\Omega} u\,\nu\,dS.

     This completes the proof.
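As a sanity check of the statement just proved, the case i = 1 can be verified symbolically for the unit disk Ω and the arbitrarily chosen function u = x³, where ν₁ = cos s on the boundary circle; the sketch below uses SymPy.

# Check: for Omega the unit disk and u = x^3, the integral of u_x over Omega
# equals the boundary integral of u * nu_1, both being 3*pi/4.
import sympy as sp

x, y, r, s = sp.symbols('x y r s')
u = x**3

# Left side in polar coordinates (area element r dr ds).
lhs = sp.integrate(sp.diff(u, x).subs({x: r * sp.cos(s), y: r * sp.sin(s)}) * r,
                   (r, 0, 1), (s, 0, 2 * sp.pi))

# Right side: on the unit circle nu = (cos s, sin s) and ds is arc length.
rhs = sp.integrate(u.subs(x, sp.cos(s)) * sp.cos(s), (s, 0, 2 * sp.pi))

print(lhs, rhs)   # both print 3*pi/4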

For compact Riemannian manifolds with boundary

We are going to prove the following:

Theorem — Let Ω̄ be a C² compact manifold with boundary with C¹ metric tensor g. Let Ω denote the manifold interior of Ω̄ and let ∂Ω denote the manifold boundary of Ω̄. Let (·,·) denote L²(Ω̄) inner products of functions and ⟨·,·⟩ denote inner products of vectors. Suppose u ∈ C¹(Ω̄, ℝ) and X is a C¹ vector field on Ω̄. Then

(\operatorname{grad} u, X) = -(u, \operatorname{div} X) + \int_{\partial\Omega} u\,\langle X, N\rangle\,dS,

where N is the outward-pointing unit normal vector to ∂Ω.

Proof of Theorem. We use the Einstein summation convention. By using a partition of unity, we may assume that u and X have compact support in a coordinate patch O ⊂ Ω̄. First consider the case where the patch is disjoint from ∂Ω. Then O is identified with an open subset of ℝ^n and integration by parts produces no boundary terms:

(\operatorname{grad} u, X) = \int_{O} \langle \operatorname{grad} u, X \rangle \sqrt{g}\,dx = \int_{O} \partial_{j}u\, X^{j} \sqrt{g}\,dx = -\int_{O} u\, \partial_{j}(\sqrt{g}\, X^{j})\,dx = -\int_{O} u\, \frac{1}{\sqrt{g}} \partial_{j}(\sqrt{g}\, X^{j}) \sqrt{g}\,dx = \left(u, -\frac{1}{\sqrt{g}} \partial_{j}(\sqrt{g}\, X^{j})\right) = (u, -\operatorname{div} X).

In the last equality we used the Voss–Weyl coordinate formula for the divergence, although the preceding identity could be used to define −div as the formal adjoint of grad. Now suppose O intersects ∂Ω. Then O is identified with an open set in ℝ^n_+ = {x ∈ ℝ^n : x_n ≥ 0}. We zero-extend u and X to ℝ^n_+ and perform integration by parts to obtain

(\operatorname{grad} u, X) = \int_{O} \langle \operatorname{grad} u, X \rangle \sqrt{g}\,dx = \int_{\mathbb{R}^{n}_{+}} \partial_{j}u\, X^{j} \sqrt{g}\,dx = (u, -\operatorname{div} X) - \int_{\mathbb{R}^{n-1}} u(x', 0)\, X^{n}(x', 0)\, \sqrt{g(x', 0)}\,dx',

where dx' = dx₁ … dx_{n−1}. By a variant of the straightening theorem for vector fields, we may choose O so that ∂/∂x_n is the inward unit normal −N at ∂Ω. In this case

\sqrt{g(x', 0)}\,dx' = \sqrt{g_{\partial\Omega}(x')}\,dx' = dS

is the volume element on ∂Ω, and the above formula reads

(\operatorname{grad} u, X) = (u, -\operatorname{div} X) + \int_{\partial\Omega} u\,\langle X, N\rangle\,dS.

This completes the proof.
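The Voss–Weyl formula div X = (1/√g) ∂_j(√g X^j) used in the last step can be checked in a concrete coordinate system. The sketch below does this symbolically for plane polar coordinates, where √g = r, with an arbitrarily chosen polynomial field; it is an illustration only.

# Symbolic check of the Voss-Weyl formula in plane polar coordinates:
# (1/sqrt(g)) d_j(sqrt(g) X^j) agrees with the Cartesian divergence.
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
x, y = r * sp.cos(theta), r * sp.sin(theta)

# Arbitrary polynomial field, given by its Cartesian components.
Fx, Fy = x**2 * y, x + y**2

# Cartesian divergence dFx/dx + dFy/dy, rewritten in polar coordinates.
X, Y = sp.symbols('X Y')
div_cartesian = (sp.diff(X**2 * Y, X) + sp.diff(X + Y**2, Y)).subs({X: x, Y: y})

# Contravariant components with respect to the coordinate basis (d/dr, d/dtheta).
Xr = Fx * sp.cos(theta) + Fy * sp.sin(theta)
Xt = (-Fx * sp.sin(theta) + Fy * sp.cos(theta)) / r

sqrt_g = r                                      # sqrt(det g) for polar coordinates
div_voss_weyl = (sp.diff(sqrt_g * Xr, r) + sp.diff(sqrt_g * Xt, theta)) / sqrt_g

print(sp.simplify(div_cartesian - div_voss_weyl))   # prints 0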

Corollaries

By replacing F in the divergence theorem with specific forms, other useful identities can be derived (cf. vector identities).

  • With F → Fg for a scalar function g and a vector field F,
\iiint_{V} \left[\mathbf{F} \cdot \nabla g + g(\nabla \cdot \mathbf{F})\right]\mathrm{d}V = \oiint_{S} g\,\mathbf{F} \cdot \mathbf{n}\,\mathrm{d}S.
A special case of this is F = ∇f, in which case the theorem is the basis for Green's identities.
  • With F → F × G for two vector fields F and G, where × denotes a cross product,
\iiint_{V} \nabla \cdot (\mathbf{F} \times \mathbf{G})\,\mathrm{d}V = \iiint_{V} \left[\mathbf{G} \cdot (\nabla \times \mathbf{F}) - \mathbf{F} \cdot (\nabla \times \mathbf{G})\right]\mathrm{d}V = \oiint_{S} (\mathbf{F} \times \mathbf{G}) \cdot \mathbf{n}\,\mathrm{d}S.
  • With F → F · G for two vector fields F and G, where · denotes a dot product,
\iiint_{V} \nabla (\mathbf{F} \cdot \mathbf{G})\,\mathrm{d}V = \iiint_{V} \left[(\mathbf{G} \cdot \nabla)\mathbf{F} + (\mathbf{F} \cdot \nabla)\mathbf{G} + \mathbf{G} \times (\nabla \times \mathbf{F}) + \mathbf{F} \times (\nabla \times \mathbf{G})\right]\mathrm{d}V = \oiint_{S} (\mathbf{F} \cdot \mathbf{G})\,\mathbf{n}\,\mathrm{d}S.
  • With F → f c for a scalar function f and vector field c:
\iiint_{V} \mathbf{c} \cdot \nabla f\,\mathrm{d}V = \oiint_{S} (\mathbf{c} f) \cdot \mathbf{n}\,\mathrm{d}S - \iiint_{V} f(\nabla \cdot \mathbf{c})\,\mathrm{d}V.
The last term on the right vanishes for constant c or any divergence-free (solenoidal) vector field, e.g. incompressible flows without sources or sinks such as phase change or chemical reactions. In particular, taking c to be constant:
\iiint_{V} \nabla f\,\mathrm{d}V = \oiint_{S} f\,\mathbf{n}\,\mathrm{d}S.
  • With F → c × F for vector field F and constant vector c:
\iiint_{V} \mathbf{c} \cdot (\nabla \times \mathbf{F})\,\mathrm{d}V = \oiint_{S} (\mathbf{F} \times \mathbf{c}) \cdot \mathbf{n}\,\mathrm{d}S.
By reordering the triple product on the right-hand side and taking the constant vector out of the integral,
\iiint_{V} (\nabla \times \mathbf{F})\,\mathrm{d}V \cdot \mathbf{c} = \oiint_{S} (\mathrm{d}\mathbf{S} \times \mathbf{F}) \cdot \mathbf{c}.
Hence,
\iiint_{V} (\nabla \times \mathbf{F})\,\mathrm{d}V = \oiint_{S} \mathbf{n} \times \mathbf{F}\,\mathrm{d}S.
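The special case ∭_V ∇f dV = ∯_S f n dS above lends itself to a quick numerical check on the unit cube; the function f = x² + yz and the midpoint-rule grid below are arbitrary choices in this sketch.

# Numerical check of  integral_V grad f dV = closed-surface integral of f n dS
# on the unit cube, for the arbitrarily chosen f = x^2 + y*z.
import numpy as np

n = 100
t = (np.arange(n) + 0.5) / n                      # midpoints of a uniform grid on [0,1]
X, Y, Z = np.meshgrid(t, t, t, indexing="ij")

f = lambda x, y, z: x**2 + y * z

# Left side: volume integral of grad f = (2x, z, y); the mean over the unit cube
# equals the integral because the cube has volume 1.
lhs = np.array([(2 * X).mean(), Z.mean(), Y.mean()])

# Right side: integral of f times the outward normal over the six faces.
U, V = np.meshgrid(t, t, indexing="ij")
rhs = np.array([
    f(1.0, U, V).mean() - f(0.0, U, V).mean(),    # x-component: faces x = 1 and x = 0
    f(U, 1.0, V).mean() - f(U, 0.0, V).mean(),    # y-component: faces y = 1 and y = 0
    f(U, V, 1.0).mean() - f(U, V, 0.0).mean(),    # z-component: faces z = 1 and z = 0
])

print(lhs, rhs)   # both approach (1, 0.5, 0.5)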

Example

The vector field corresponding to the example shown. Vectors may point into or out of the sphere.
The divergence theorem can be used to calculate a flux through a closed surface that fully encloses a volume, like any of the surfaces on the left. It can not directly be used to calculate the flux through surfaces with boundaries, like those on the right. (Surfaces are blue, boundaries are red.)

Suppose we wish to evaluate

\oiint_{S} \mathbf{F} \cdot \mathbf{n}\,\mathrm{d}S,

where S is the unit sphere defined by

S = \left\{(x, y, z) \in \mathbb{R}^{3} : x^{2} + y^{2} + z^{2} = 1\right\},

and F is the vector field

\mathbf{F} = 2x\,\mathbf{i} + y^{2}\,\mathbf{j} + z^{2}\,\mathbf{k}.

The direct computation of this integral is quite difficult, but we can simplify it using the divergence theorem, which says that the integral is equal to:

\iiint_{W} (\nabla \cdot \mathbf{F})\,\mathrm{d}V = 2\iiint_{W} (1 + y + z)\,\mathrm{d}V = 2\iiint_{W} \mathrm{d}V + 2\iiint_{W} y\,\mathrm{d}V + 2\iiint_{W} z\,\mathrm{d}V,

where W is the unit ball:

W = \left\{(x, y, z) \in \mathbb{R}^{3} : x^{2} + y^{2} + z^{2} \leq 1\right\}.

Since the function y is positive in one hemisphere of W and negative in the other, in an equal and opposite way, its total integral over W is zero. The same is true for z:

\iiint_{W} y\,\mathrm{d}V = \iiint_{W} z\,\mathrm{d}V = 0.

Therefore,

\oiint_{S} \mathbf{F} \cdot \mathbf{n}\,\mathrm{d}S = 2\iiint_{W} \mathrm{d}V = \frac{8\pi}{3},

because the unit ball W has volume 4π/3.
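The same flux can be checked by integrating F·n directly over the sphere in spherical coordinates, where the outward normal is simply (x, y, z); the following sketch uses SciPy quadrature and is only a numerical cross-check.

# Numerical check: flux of F = (2x, y^2, z^2) through the unit sphere is 8*pi/3.
import numpy as np
from scipy.integrate import dblquad

def integrand(phi, theta):
    # Unit sphere parametrised by polar angle theta and azimuth phi;
    # outward normal n = (x, y, z), surface element dS = sin(theta) dtheta dphi.
    x = np.sin(theta) * np.cos(phi)
    y = np.sin(theta) * np.sin(phi)
    z = np.cos(theta)
    F_dot_n = 2 * x * x + y**2 * y + z**2 * z
    return F_dot_n * np.sin(theta)

flux, _ = dblquad(integrand, 0, np.pi, 0, 2 * np.pi)
print(flux, 8 * np.pi / 3)   # both approximately 8.3776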

Applications

Differential and integral forms of physical laws

As a result of the divergence theorem, a host of physical laws can be written in both a differential form (where one quantity is the divergence of another) and an integral form (where the flux of one quantity through a closed surface is equal to another quantity). Three examples are Gauss's law (in electrostatics), Gauss's law for magnetism, and Gauss's law for gravity.

Continuity equations

Main article: continuity equation

Continuity equations offer more examples of laws with both differential and integral forms, related to each other by the divergence theorem. In fluid dynamics, electromagnetism, quantum mechanics, relativity theory, and a number of other fields, there are continuity equations that describe the conservation of mass, momentum, energy, probability, or other quantities. Generically, these equations state that the divergence of the flow of the conserved quantity is equal to the distribution of sources or sinks of that quantity. By the divergence theorem, any such continuity equation can be written equivalently in a differential form (in terms of a divergence) and an integral form (in terms of a flux).
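For example, local conservation of electric charge, with charge density ρ and current density J, reads ∂ρ/∂t + ∇·J = 0 in differential form; integrating over a fixed volume V and applying the divergence theorem gives the integral form

\frac{\mathrm{d}}{\mathrm{d}t}\iiint_{V} \rho\,\mathrm{d}V = -\oiint_{S} \mathbf{J} \cdot \mathbf{n}\,\mathrm{d}S,

which states that the charge inside V changes only by the net current flowing out through its boundary S.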

Inverse-square laws

Any inverse-square law can instead be written in a Gauss's law-type form (with a differential and integral form, as described above). Two examples are Gauss's law (in electrostatics), which follows from the inverse-square Coulomb's law, and Gauss's law for gravity, which follows from the inverse-square Newton's law of universal gravitation. The derivation of the Gauss's law-type equation from the inverse-square formulation or vice versa is exactly the same in both cases; see either of those articles for details.

History

Joseph-Louis Lagrange introduced the notion of surface integrals in 1760 and again in more general terms in 1811, in the second edition of his Mécanique Analytique. Lagrange employed surface integrals in his work on fluid mechanics. He discovered the divergence theorem in 1762.

Carl Friedrich Gauss was also using surface integrals while working on the gravitational attraction of an elliptical spheroid in 1813, when he proved special cases of the divergence theorem. He proved additional special cases in 1833 and 1839. But it was Mikhail Ostrogradsky who gave the first proof of the general theorem, in 1826, as part of his investigation of heat flow. Special cases were proven by George Green in 1828 in An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism, Siméon Denis Poisson in 1824 in a paper on elasticity, and Frédéric Sarrus in 1828 in his work on floating bodies.

Worked examples

Example 1

To verify the planar variant of the divergence theorem for a region R {\displaystyle R} :

R = \left\{(x, y) \in \mathbb{R}^{2} : x^{2} + y^{2} \leq 1\right\},

and the vector field:

\mathbf{F}(x, y) = 2y\,\mathbf{i} + 5x\,\mathbf{j}.

The boundary of R {\displaystyle R} is the unit circle, C {\displaystyle C} , that can be represented parametrically by:

x = \cos(s), \quad y = \sin(s)

such that 0 ≤ s ≤ 2π, where s is the arc length from the point s = 0 to the point P on C. Then a vector equation of C is

C(s) = \cos(s)\,\mathbf{i} + \sin(s)\,\mathbf{j}.

At a point P {\displaystyle P} on C {\displaystyle C} :

P = (\cos(s), \sin(s)) \;\Rightarrow\; \mathbf{F} = 2\sin(s)\,\mathbf{i} + 5\cos(s)\,\mathbf{j}.

Therefore,

\oint_{C} \mathbf{F} \cdot \mathbf{n}\,\mathrm{d}s = \int_{0}^{2\pi} (2\sin(s)\,\mathbf{i} + 5\cos(s)\,\mathbf{j}) \cdot (\cos(s)\,\mathbf{i} + \sin(s)\,\mathbf{j})\,\mathrm{d}s = \int_{0}^{2\pi} (2\sin(s)\cos(s) + 5\sin(s)\cos(s))\,\mathrm{d}s = 7\int_{0}^{2\pi} \sin(s)\cos(s)\,\mathrm{d}s = 0.

Writing F = M i + N j with M = 2y and N = 5x, we have ∂M/∂x = 0 and ∂N/∂y = 0. Thus

\iint_{R} \nabla \cdot \mathbf{F}\,\mathrm{d}A = \iint_{R} \left(\frac{\partial M}{\partial x} + \frac{\partial N}{\partial y}\right)\mathrm{d}A = 0.
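A direct numerical evaluation of the boundary integral confirms the result; since ∇·F is identically zero here, only the line integral needs checking. The sketch below uses SciPy quadrature.

# Numerical check of Example 1: the boundary integral of F . n for F = (2y, 5x)
# around the unit circle vanishes, matching the identically zero divergence.
import numpy as np
from scipy.integrate import quad

# On the unit circle, n = (cos s, sin s) and ds is arc length.
F_dot_n = lambda s: 2 * np.sin(s) * np.cos(s) + 5 * np.cos(s) * np.sin(s)
line_integral, _ = quad(F_dot_n, 0, 2 * np.pi)
print(line_integral)   # approximately 0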

Example 2

Suppose we want to evaluate the flux of the vector field \mathbf{F} = 2x^{2}\,\mathbf{i} + 2y^{2}\,\mathbf{j} + 2z^{2}\,\mathbf{k} through the boundary of the region defined by the following inequalities:

\left\{0 \leq x \leq 3\right\}, \quad \left\{-2 \leq y \leq 2\right\}, \quad \left\{0 \leq z \leq 2\pi\right\}

By the divergence theorem,

\iiint_{V} (\nabla \cdot \mathbf{F})\,\mathrm{d}V = \oiint_{S} (\mathbf{F} \cdot \mathbf{n})\,\mathrm{d}S.

We now need to determine the divergence of F. If F is a three-dimensional vector field, then the divergence of F is given by \nabla \cdot \mathbf{F} = \left(\frac{\partial}{\partial x}\mathbf{i} + \frac{\partial}{\partial y}\mathbf{j} + \frac{\partial}{\partial z}\mathbf{k}\right) \cdot \mathbf{F}.

Thus, we can set up the flux integral I = \oiint_{S} \mathbf{F} \cdot \mathbf{n}\,\mathrm{d}S as follows:

I = \iiint_{V} \nabla \cdot \mathbf{F}\,\mathrm{d}V = \iiint_{V} \left(\frac{\partial F_{x}}{\partial x} + \frac{\partial F_{y}}{\partial y} + \frac{\partial F_{z}}{\partial z}\right)\mathrm{d}V = \iiint_{V} (4x + 4y + 4z)\,\mathrm{d}V = \int_{0}^{2\pi}\int_{-2}^{2}\int_{0}^{3} (4x + 4y + 4z)\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z

Now that we have set up the integral, we can evaluate it.

\int_{0}^{2\pi}\int_{-2}^{2}\int_{0}^{3} (4x + 4y + 4z)\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z = \int_{0}^{2\pi}\int_{-2}^{2} (12y + 12z + 18)\,\mathrm{d}y\,\mathrm{d}z = \int_{0}^{2\pi} 24(2z + 3)\,\mathrm{d}z = 48\pi(2\pi + 3)
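The iterated integral can be cross-checked symbolically; the short SymPy sketch below reproduces 48π(2π + 3).

# Symbolic cross-check of Example 2.
import sympy as sp

x, y, z = sp.symbols('x y z')
I = sp.integrate(4 * x + 4 * y + 4 * z, (x, 0, 3), (y, -2, 2), (z, 0, 2 * sp.pi))
print(sp.simplify(I - 48 * sp.pi * (2 * sp.pi + 3)))   # prints 0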

Generalizations

Multiple dimensions

One can use the generalised Stokes' theorem to equate the n-dimensional volume integral of the divergence of a vector field F over a region U to the (n − 1)-dimensional surface integral of F over the boundary of U:

\underbrace{\int \cdots \int_{U}}_{n} \nabla \cdot \mathbf{F}\,\mathrm{d}V = \underbrace{\oint \cdots \oint_{\partial U}}_{n-1} \mathbf{F} \cdot \mathbf{n}\,\mathrm{d}S

This equation is also known as the divergence theorem.

When n = 2, this is equivalent to Green's theorem.

When n = 1, it reduces to the fundamental theorem of calculus, part 2.

Tensor fields

Main article: Tensor field

Writing the theorem in Einstein notation:

\iiint_{V} \frac{\partial F_{i}}{\partial x_{i}}\,\mathrm{d}V = \oiint_{S} F_{i} n_{i}\,\mathrm{d}S

Suggestively, replacing the vector field F with a rank-n tensor field T, this can be generalized to:

\iiint_{V} \frac{\partial T_{i_{1} i_{2} \cdots i_{q} \cdots i_{n}}}{\partial x_{i_{q}}}\,\mathrm{d}V = \oiint_{S} T_{i_{1} i_{2} \cdots i_{q} \cdots i_{n}} n_{i_{q}}\,\mathrm{d}S.

where on each side tensor contraction occurs for at least one index. This form of the theorem is still in 3d; each index takes the values 1, 2, and 3. It can be generalized further still to higher (or lower) dimensions (for example to 4d spacetime in general relativity).


References

  1. Katz, Victor J. (1979). "The history of Stokes's theorem". Mathematics Magazine. 52 (3): 146–156. doi:10.2307/2690275. JSTOR 2690275. reprinted in Anderson, Marlow (2009). Who Gave You the Epsilon?: And Other Tales of Mathematical History. Mathematical Association of America. pp. 78–79. ISBN 978-0-88385-569-0.
  2. R. G. Lerner; G. L. Trigg (1994). Encyclopaedia of Physics (2nd ed.). VHC. ISBN 978-3-527-26954-9.
  3. Byron, Frederick; Fuller, Robert (1992), Mathematics of Classical and Quantum Physics, Dover Publications, p. 22, ISBN 978-0-486-67164-2
  4. Wylie, C. Ray Jr. Advanced Engineering Mathematics, 3rd ed. McGraw-Hill. pp. 372–373.
  5. Kreyszig, Erwin; Kreyszig, Herbert; Norminton, Edward J. (2011). Advanced Engineering Mathematics (10 ed.). John Wiley and Sons. pp. 453–456. ISBN 978-0-470-45836-5.
  6. Benford, Frank A. (May 2007). "Notes on Vector Calculus" (PDF). Course materials for Math 105: Multivariable Calculus. Prof. Steven Miller's webpage, Williams College. Retrieved 14 March 2022.
  7. Purcell, Edward M.; David J. Morin (2013). Electricity and Magnetism. Cambridge Univ. Press. pp. 56–58. ISBN 978-1-107-01402-2.
  8. Alt, Hans Wilhelm (2016). "Linear Functional Analysis". Universitext. London: Springer London. pp. 259–261, 270–272. doi:10.1007/978-1-4471-7280-2. ISBN 978-1-4471-7279-6. ISSN 0172-5939.
  9. Taylor, Michael E. (2011). "Partial Differential Equations I". Applied Mathematical Sciences. Vol. 115. New York, NY: Springer New York. pp. 178–179. doi:10.1007/978-1-4419-7055-8. ISBN 978-1-4419-7054-1. ISSN 0066-5452.
  10. M. R. Spiegel; S. Lipschutz; D. Spellman (2009). Vector Analysis. Schaum's Outlines (2nd ed.). USA: McGraw Hill. ISBN 978-0-07-161545-7.
  11. MathWorld
  12. C.B. Parker (1994). McGraw Hill Encyclopaedia of Physics (2nd ed.). McGraw Hill. ISBN 978-0-07-051400-3.
  13. Katz, Victor (2009). "Chapter 22: Vector Analysis". A History of Mathematics: An Introduction. Addison-Wesley. pp. 808–9. ISBN 978-0-321-38700-4.
  14. In his 1762 paper on sound, Lagrange treats a special case of the divergence theorem: Lagrange (1762) "Nouvelles recherches sur la nature et la propagation du son" (New researches on the nature and propagation of sound), Miscellanea Taurinensia (also known as: Mélanges de Turin ), 2: 11 – 172. This article is reprinted as: "Nouvelles recherches sur la nature et la propagation du son" in: J.A. Serret, ed., Oeuvres de Lagrange, (Paris, France: Gauthier-Villars, 1867), vol. 1, pages 151–316; on pages 263–265, Lagrange transforms triple integrals into double integrals using integration by parts.
  15. C. F. Gauss (1813) "Theoria attractionis corporum sphaeroidicorum ellipticorum homogeneorum methodo nova tractata," Commentationes societatis regiae scientiarium Gottingensis recentiores, 2: 355–378; Gauss considered a special case of the theorem; see the 4th, 5th, and 6th pages of his article.
  16. Katz, Victor (May 1979). "A History of Stokes' Theorem". Mathematics Magazine. 52 (3): 146–156. doi:10.1080/0025570X.1979.11976770. JSTOR 2690275.
  17. Mikhail Ostrogradsky presented his proof of the divergence theorem to the Paris Academy in 1826; however, his work was not published by the Academy. He returned to St. Petersburg, Russia, where in 1828–1829 he read the work that he had done in France to the St. Petersburg Academy, which published his work in abbreviated form in 1831.
    • His proof of the divergence theorem – "Démonstration d'un théorème du calcul intégral" (Proof of a theorem in integral calculus) – which he had read to the Paris Academy on February 13, 1826, was translated, in 1965, into Russian together with another article by him. See: Юшкевич А.П. (Yushkevich A.P.) and Антропова В.И. (Antropov V.I.) (1965) "Неопубликованные работы М.В. Остроградского" (Unpublished works of MV Ostrogradskii), Историко-математические исследования (Istoriko-Matematicheskie Issledovaniya / Historical-Mathematical Studies), 16: 49–96; see the section titled: "Остроградский М.В. Доказательство одной теоремы интегрального исчисления" (Ostrogradskii M. V. Dokazatelstvo odnoy teoremy integralnogo ischislenia / Ostragradsky M.V. Proof of a theorem in integral calculus).
    • M. Ostrogradsky (presented: November 5, 1828; published: 1831) "Première note sur la théorie de la chaleur" (First note on the theory of heat) Mémoires de l'Académie impériale des sciences de St. Pétersbourg, series 6, 1: 129–133; for an abbreviated version of his proof of the divergence theorem, see pages 130–131.
    • Victor J. Katz (May 1979) "The history of Stokes' theorem," Archived April 2, 2015, at the Wayback Machine Mathematics Magazine, 52(3): 146–156; for Ostrogradsky's proof of the divergence theorem, see pages 147–148.
  18. George Green, An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism (Nottingham, England: T. Wheelhouse, 1838). A form of the "divergence theorem" appears on pages 10–12.
  19. Other early investigators who used some form of the divergence theorem include:
    • Poisson (presented: February 2, 1824; published: 1826) "Mémoire sur la théorie du magnétisme" (Memoir on the theory of magnetism), Mémoires de l'Académie des sciences de l'Institut de France, 5: 247–338; on pages 294–296, Poisson transforms a volume integral (which is used to evaluate a quantity Q) into a surface integral. To make this transformation, Poisson follows the same procedure that is used to prove the divergence theorem.
    • Frédéric Sarrus (1828) "Mémoire sur les oscillations des corps flottans" (Memoir on the oscillations of floating bodies), Annales de mathématiques pures et appliquées (Nismes), 19: 185–211.
  20. K.F. Riley; M.P. Hobson; S.J. Bence (2010). Mathematical methods for physics and engineering. Cambridge University Press. ISBN 978-0-521-86153-3.
  21. see for example:
    J.A. Wheeler; C. Misner; K.S. Thorne (1973). Gravitation. W.H. Freeman & Co. pp. 85–86, §3.5. ISBN 978-0-7167-0344-0., and
    R. Penrose (2007). The Road to Reality. Vintage books. ISBN 978-0-679-77631-4.

