Variation of parameters

Article snapshot taken from Wikipedia, available under the Creative Commons Attribution-ShareAlike license.

In mathematics, variation of parameters, also known as variation of constants, is a general method to solve inhomogeneous linear ordinary differential equations.

For first-order inhomogeneous linear differential equations it is usually possible to find solutions via integrating factors or undetermined coefficients with considerably less effort, although those methods leverage heuristics that involve guessing and do not work for all inhomogeneous linear differential equations.

Variation of parameters extends to linear partial differential equations as well, specifically to inhomogeneous problems for linear evolution equations like the heat equation, wave equation, and vibrating plate equation. In this setting, the method is more often known as Duhamel's principle, named after Jean-Marie Duhamel (1797–1872) who first applied the method to solve the inhomogeneous heat equation. Sometimes variation of parameters itself is called Duhamel's principle and vice versa.

History

The method of variation of parameters was first sketched by the Swiss mathematician Leonhard Euler (1707–1783), and later completed by the Italian-French mathematician Joseph-Louis Lagrange (1736–1813).

A forerunner of the method of variation of a celestial body's orbital elements appeared in Euler's work in 1748, while he was studying the mutual perturbations of Jupiter and Saturn. In his 1749 study of the motions of the earth, Euler obtained differential equations for the orbital elements. In 1753, he applied the method to his study of the motions of the moon.

Lagrange first used the method in 1766. Between 1778 and 1783, he further developed the method in two series of memoirs: one on variations in the motions of the planets and another on determining the orbit of a comet from three observations. During 1808–1810, Lagrange gave the method of variation of parameters its final form in a third series of papers.

Description of method

Given an ordinary non-homogeneous linear differential equation of order n

y^{(n)}(x) + \sum_{i=0}^{n-1} a_i(x)\, y^{(i)}(x) = b(x). \quad (i)

Let y_1(x), \ldots, y_n(x) be a basis of the vector space of solutions of the corresponding homogeneous equation

y^{(n)}(x) + \sum_{i=0}^{n-1} a_i(x)\, y^{(i)}(x) = 0. \quad (ii)

Then a particular solution to the non-homogeneous equation is given by

y_p(x) = \sum_{i=1}^{n} c_i(x)\, y_i(x) \quad (iii)

where the c_i(x) are differentiable functions which are assumed to satisfy the conditions

\sum_{i=1}^{n} c_i'(x)\, y_i^{(j)}(x) = 0, \quad j = 0, \ldots, n-2. \quad (iv)

Starting with (iii), repeated differentiation combined with repeated use of (iv) gives

y_p^{(j)}(x) = \sum_{i=1}^{n} c_i(x)\, y_i^{(j)}(x), \quad j = 0, \ldots, n-1. \quad (v)

One last differentiation gives

y_p^{(n)}(x) = \sum_{i=1}^{n} c_i'(x)\, y_i^{(n-1)}(x) + \sum_{i=1}^{n} c_i(x)\, y_i^{(n)}(x). \quad (vi)

By substituting (iii) into (i) and applying (v) and (vi) it follows that

\sum_{i=1}^{n} c_i'(x)\, y_i^{(n-1)}(x) = b(x). \quad (vii)

The linear system of n equations formed by (iv) and (vii) can then be solved using Cramer's rule, yielding

c_i'(x) = \frac{W_i(x)}{W(x)}, \quad i = 1, \ldots, n

where W(x) is the Wronskian determinant of the basis y_1(x), \ldots, y_n(x) and W_i(x) is the Wronskian determinant of the basis with the i-th column replaced by (0, 0, \ldots, b(x)).

The particular solution to the non-homogeneous equation can then be written as

\sum_{i=1}^{n} y_i(x) \int \frac{W_i(x)}{W(x)}\, dx.
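
As a check on the formula above, here is a minimal numerical sketch; the example y'' + y = 1 and all names in it are assumptions, not from the article. For n = 2 with homogeneous basis y_1 = cos x, y_2 = sin x, the Wronskian is W = 1, and replacing columns by (0, b) gives W_1 = -y_2 b and W_2 = y_1 b.

```python
import math

# Sketch (assumed example): variation of parameters for y'' + y = 1
# with homogeneous basis y1 = cos x, y2 = sin x, so W(x) = 1.
def trapezoid(f, a, b, n=2000):
    """Composite trapezoid rule, used for the indefinite integrals c_i(x)."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n)))

b_rhs = lambda x: 1.0                 # right-hand side b(x)
y1, y2 = math.cos, math.sin           # homogeneous basis
# Cramer's rule with the i-th column replaced by (0, b):
#   W1 = -y2 * b,  W2 = y1 * b,  and c_i' = W_i / W with W = 1.
c1p = lambda x: -y2(x) * b_rhs(x)
c2p = lambda x:  y1(x) * b_rhs(x)

def y_particular(x):
    c1 = trapezoid(c1p, 0.0, x)       # c1(x) = cos x - 1
    c2 = trapezoid(c2p, 0.0, x)       # c2(x) = sin x
    return c1 * y1(x) + c2 * y2(x)    # simplifies to 1 - cos x

x0 = 1.3
print(abs(y_particular(x0) - (1 - math.cos(x0))))  # ~0
```

The result 1 - cos x is indeed a particular solution: its second derivative is cos x, and cos x + (1 - cos x) = 1.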

Intuitive explanation

Consider the equation of the forced dispersionless spring, in suitable units:

x''(t) + x(t) = F(t).

Here x is the displacement of the spring from the equilibrium x = 0, and F(t) is an external applied force that depends on time. When the external force is zero, this is the homogeneous equation (whose solutions are linear combinations of sines and cosines, corresponding to the spring oscillating with constant total energy).

We can construct the solution physically, as follows. Between times t = s and t = s + ds, the momentum corresponding to the solution has a net change F(s) ds (see: Impulse (physics)). A solution to the inhomogeneous equation, at the present time t > 0, is obtained by linearly superposing the solutions obtained in this manner, for s going between 0 and t.

The homogeneous initial-value problem, representing a small impulse F(s) ds being added to the solution at time t = s, is

x''(t) + x(t) = 0, \quad x(s) = 0, \quad x'(s) = F(s)\, ds.

The unique solution to this problem is easily seen to be x(t) = F(s) \sin(t - s)\, ds. The linear superposition of all of these solutions is given by the integral:

x(t) = \int_0^t F(s) \sin(t - s)\, ds.

To verify that this satisfies the required equation:

x'(t) = \int_0^t F(s) \cos(t - s)\, ds
x''(t) = F(t) - \int_0^t F(s) \sin(t - s)\, ds = F(t) - x(t),

as required (see: Leibniz integral rule).
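
This superposition integral can also be checked numerically; the following is a hedged sketch assuming the concrete forcing F(t) = t, which is not from the article. For that forcing, the integral evaluates in closed form to t - sin t, and (t - sin t)'' + (t - sin t) = sin t + t - sin t = t.

```python
import math

# Sketch (assumed forcing F(t) = t): evaluate the Duhamel integral
#   x(t) = integral_0^t F(s) sin(t - s) ds
# and compare to the closed form t - sin t.
def trapezoid(f, a, b, n=4000):
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n)))

F = lambda s: s

def x_sol(t):
    return trapezoid(lambda s: F(s) * math.sin(t - s), 0.0, t)

t0 = 2.0
print(abs(x_sol(t0) - (t0 - math.sin(t0))))  # ~0
```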

The general method of variation of parameters allows for solving an inhomogeneous linear equation

Lx(t) = F(t)

by regarding the second-order linear differential operator L as the net force, so that the total impulse imparted to a solution between times s and s + ds is F(s) ds. Denote by x_s the solution of the homogeneous initial value problem

Lx(t) = 0, \quad x(s) = 0, \quad x'(s) = F(s)\, ds.

Then a particular solution of the inhomogeneous equation is

x(t) = \int_0^t x_s(t)\, ds,

the result of linearly superposing the infinitesimal homogeneous solutions. There are generalizations to higher order linear differential operators.

In practice, variation of parameters usually involves the fundamental solution of the homogeneous problem, the infinitesimal solutions x_s then being given in terms of explicit linear combinations of linearly independent fundamental solutions. In the case of the forced dispersionless spring, the kernel \sin(t - s) = \sin t \cos s - \sin s \cos t is the associated decomposition into fundamental solutions.
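
A small sketch of this decomposition, assuming the forcing F(t) = 1 (an illustrative choice, not from the article): splitting the kernel turns the Duhamel integral into time-dependent coefficients of cos t and sin t, which are exactly the varied "constants" of variation of parameters.

```python
import math

# Sketch (assumed F(t) = 1): the identity sin(t-s) = sin t cos s - cos t sin s
# rewrites the Duhamel integral as c1(t) cos t + c2(t) sin t.
def trapezoid(f, a, b, n=2000):
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n)))

F = lambda s: 1.0

def x_duhamel(t):
    return trapezoid(lambda s: F(s) * math.sin(t - s), 0.0, t)

def x_vop(t):
    c1 = -trapezoid(lambda s: F(s) * math.sin(s), 0.0, t)  # coefficient of cos t
    c2 =  trapezoid(lambda s: F(s) * math.cos(s), 0.0, t)  # coefficient of sin t
    return c1 * math.cos(t) + c2 * math.sin(t)

t0 = 1.5
print(abs(x_duhamel(t0) - x_vop(t0)))  # ~0: the two forms agree
```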

Examples

First-order equation

y' + p(x)\, y = q(x)

The complementary solution to our original (inhomogeneous) equation is the general solution of the corresponding homogeneous equation (written below):

y' + p(x)\, y = 0

This homogeneous differential equation can be solved by different methods, for example separation of variables:

\frac{dy}{dx} + p(x)\, y = 0
\frac{dy}{dx} = -p(x)\, y
\frac{dy}{y} = -p(x)\, dx
\int \frac{1}{y}\, dy = -\int p(x)\, dx
\ln |y| = -\int p(x)\, dx + C
y = \pm e^{-\int p(x)\, dx + C} = C_0 e^{-\int p(x)\, dx}

The complementary solution to our original equation is therefore:

y_c = C_0 e^{-\int p(x)\, dx}

Now we return to solving the non-homogeneous equation:

y' + p(x)\, y = q(x)

Using the method of variation of parameters, the particular solution is formed by multiplying the complementary solution by an unknown function C(x):

y_p = C(x)\, e^{-\int p(x)\, dx}

By substituting the particular solution into the non-homogeneous equation, we can find C(x):

C'(x)\, e^{-\int p(x)\, dx} - C(x)\, p(x)\, e^{-\int p(x)\, dx} + p(x)\, C(x)\, e^{-\int p(x)\, dx} = q(x)
C'(x)\, e^{-\int p(x)\, dx} = q(x)
C'(x) = q(x)\, e^{\int p(x)\, dx}
C(x) = \int q(x)\, e^{\int p(x)\, dx}\, dx + C_1

We only need a single particular solution, so we arbitrarily select C_1 = 0 for simplicity. Therefore the particular solution is:

y_p = e^{-\int p(x)\, dx} \int q(x)\, e^{\int p(x)\, dx}\, dx

The final solution of the differential equation is:

y = y_c + y_p = C_0 e^{-\int p(x)\, dx} + e^{-\int p(x)\, dx} \int q(x)\, e^{\int p(x)\, dx}\, dx

This recreates the method of integrating factors.
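
A brief numerical sketch of the first-order formula, with the assumed choices p(x) = 1 and q(x) = x (illustrative only, not from the article). With lower limit 0, C(x) = x e^x - e^x + 1, so y_p = x - 1 + e^{-x}, and indeed y_p' + y_p = x.

```python
import math

# Sketch (assumed p(x) = 1, q(x) = x): evaluate
#   y_p = e^{-int p dx} * int q e^{int p dx} dx
# numerically and compare with the closed form x - 1 + e^{-x}.
def trapezoid(f, a, b, n=2000):
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n)))

q = lambda s: s

def y_p(x):
    mu = math.exp(x)  # integrating factor e^{int p dx} = e^x for p = 1
    C = trapezoid(lambda s: q(s) * math.exp(s), 0.0, x)  # C(x), with C1 = 0
    return C / mu

x0 = 0.9
print(abs(y_p(x0) - (x0 - 1 + math.exp(-x0))))  # ~0
```

(The extra e^{-x} term relative to x - 1 is a homogeneous solution, so both are valid particular solutions.)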

Specific second-order equation

Let us solve

y'' + 4y' + 4y = \cosh x

We want to find the general solution of this differential equation, so we first need the solutions of the corresponding homogeneous differential equation

y'' + 4y' + 4y = 0.

The characteristic equation is:

\lambda^2 + 4\lambda + 4 = (\lambda + 2)^2 = 0

Since \lambda = -2 is a repeated root, we have to introduce a factor of x for one solution to ensure linear independence: u_1 = e^{-2x} and u_2 = x e^{-2x}. The Wronskian of these two functions is

W = \begin{vmatrix} e^{-2x} & xe^{-2x} \\ -2e^{-2x} & -e^{-2x}(2x-1) \end{vmatrix} = -e^{-2x} e^{-2x} (2x-1) + 2x e^{-2x} e^{-2x} = e^{-4x}.

Because the Wronskian is non-zero, the two functions are linearly independent, so this is in fact the general solution for the homogeneous differential equation (and not a mere subset of it).

We seek functions A(x) and B(x) so that A(x)u1 + B(x)u2 is a particular solution of the non-homogeneous equation. We need only calculate the integrals

A(x) = -\int \frac{1}{W} u_2(x)\, b(x)\, dx, \quad B(x) = \int \frac{1}{W} u_1(x)\, b(x)\, dx

Recall that for this example

b(x) = \cosh x

That is,

A(x) = -\int \frac{1}{e^{-4x}}\, x e^{-2x} \cosh x\, dx = -\int x e^{2x} \cosh x\, dx = -\frac{1}{18} e^x \left(9(x-1) + e^{2x}(3x-1)\right) + C_1
B(x) = \int \frac{1}{e^{-4x}}\, e^{-2x} \cosh x\, dx = \int e^{2x} \cosh x\, dx = \frac{1}{6} e^x \left(3 + e^{2x}\right) + C_2

where C_1 and C_2 are constants of integration.
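
As a sanity check on this worked example, the following sketch evaluates y_p = A u1 + B u2 with C_1 = C_2 = 0 and verifies the ODE residual by central finite differences; the test point and step size are arbitrary choices, not from the article.

```python
import math

# Check: with C1 = C2 = 0, the A(x), B(x) above should make
# y_p = A u1 + B u2 satisfy y'' + 4y' + 4y = cosh x.
u1 = lambda x: math.exp(-2 * x)
u2 = lambda x: x * math.exp(-2 * x)
A = lambda x: -(1 / 18) * math.exp(x) * (9 * (x - 1) + math.exp(2 * x) * (3 * x - 1))
B = lambda x: (1 / 6) * math.exp(x) * (3 + math.exp(2 * x))
y_p = lambda x: A(x) * u1(x) + B(x) * u2(x)

# Approximate y_p' and y_p'' by central differences and form the residual.
x0, h = 0.7, 1e-4
d1 = (y_p(x0 + h) - y_p(x0 - h)) / (2 * h)
d2 = (y_p(x0 + h) - 2 * y_p(x0) + y_p(x0 - h)) / h ** 2
residual = d2 + 4 * d1 + 4 * y_p(x0) - math.cosh(x0)
print(abs(residual))  # ~0 (y_p simplifies to e^{-x}/2 + e^x/18)
```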

General second-order equation

We have a differential equation of the form

u'' + p(x)\, u' + q(x)\, u = f(x)

and we define the linear operator

L = D^2 + p(x) D + q(x)

where D represents the differential operator. We therefore have to solve the equation Lu(x) = f(x) for u(x), where L and f(x) are known.

We must solve first the corresponding homogeneous equation:

u'' + p(x)\, u' + q(x)\, u = 0

by the technique of our choice. Once we've obtained two linearly independent solutions to this homogeneous differential equation (because this ODE is second-order) — call them u1 and u2 — we can proceed with variation of parameters.

Now, we seek the general solution u_G(x) to the differential equation, which we assume to be of the form

u_G(x) = A(x)\, u_1(x) + B(x)\, u_2(x).

Here, A(x) and B(x) are unknown and u_1(x) and u_2(x) are the solutions to the homogeneous equation. (Observe that if A(x) and B(x) are constants, then Lu_G(x) = 0.) Since the above is only one equation and we have two unknown functions, it is reasonable to impose a second condition. We choose the following:

A'(x)\, u_1(x) + B'(x)\, u_2(x) = 0.

Now,

\begin{aligned}
u_G'(x) &= \left(A(x) u_1(x) + B(x) u_2(x)\right)' \\
&= A'(x) u_1(x) + A(x) u_1'(x) + B'(x) u_2(x) + B(x) u_2'(x) \\
&= A'(x) u_1(x) + B'(x) u_2(x) + A(x) u_1'(x) + B(x) u_2'(x) \\
&= A(x) u_1'(x) + B(x) u_2'(x)
\end{aligned}

Differentiating again (omitting intermediary steps)

u_G''(x) = A(x) u_1''(x) + B(x) u_2''(x) + A'(x) u_1'(x) + B'(x) u_2'(x).

Now we can write the action of L upon uG as

Lu_G = A(x)\, Lu_1(x) + B(x)\, Lu_2(x) + A'(x) u_1'(x) + B'(x) u_2'(x).

Since u_1 and u_2 are solutions of the homogeneous equation, Lu_1 = Lu_2 = 0, so

Lu_G = A'(x) u_1'(x) + B'(x) u_2'(x).

We have the system of equations

\begin{bmatrix} u_1(x) & u_2(x) \\ u_1'(x) & u_2'(x) \end{bmatrix} \begin{bmatrix} A'(x) \\ B'(x) \end{bmatrix} = \begin{bmatrix} 0 \\ f \end{bmatrix}.

Expanding,

\begin{bmatrix} A'(x) u_1(x) + B'(x) u_2(x) \\ A'(x) u_1'(x) + B'(x) u_2'(x) \end{bmatrix} = \begin{bmatrix} 0 \\ f \end{bmatrix}.

So the above system determines precisely the conditions

A'(x)\, u_1(x) + B'(x)\, u_2(x) = 0,
A'(x)\, u_1'(x) + B'(x)\, u_2'(x) = Lu_G = f.

We seek A(x) and B(x) from these conditions, so, given

\begin{bmatrix} u_1(x) & u_2(x) \\ u_1'(x) & u_2'(x) \end{bmatrix} \begin{bmatrix} A'(x) \\ B'(x) \end{bmatrix} = \begin{bmatrix} 0 \\ f \end{bmatrix}

we can solve for (A′(x), B′(x)), so

\begin{bmatrix} A'(x) \\ B'(x) \end{bmatrix} = \begin{bmatrix} u_1(x) & u_2(x) \\ u_1'(x) & u_2'(x) \end{bmatrix}^{-1} \begin{bmatrix} 0 \\ f \end{bmatrix} = \frac{1}{W} \begin{bmatrix} u_2'(x) & -u_2(x) \\ -u_1'(x) & u_1(x) \end{bmatrix} \begin{bmatrix} 0 \\ f \end{bmatrix},

where W denotes the Wronskian of u1 and u2. (We know that W is nonzero, from the assumption that u1 and u2 are linearly independent.) So,

\begin{aligned}
A'(x) &= -\frac{1}{W} u_2(x) f(x), & B'(x) &= \frac{1}{W} u_1(x) f(x) \\
A(x) &= -\int \frac{1}{W} u_2(x) f(x)\, dx, & B(x) &= \int \frac{1}{W} u_1(x) f(x)\, dx
\end{aligned}

While homogeneous equations are relatively easy to solve, this method allows the calculation of the coefficients of the general solution of the inhomogeneous equation, and thus the complete general solution of the inhomogeneous equation can be determined.

Note that A(x) and B(x) are each determined only up to an arbitrary additive constant (the constant of integration). Adding a constant to A(x) or B(x) does not change the value of Lu_G(x), because the extra term is just a linear combination of u_1 and u_2, which solves the homogeneous equation by definition.
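
The integrals above translate directly into a short numerical routine. A sketch follows, with the example equation u'' - u = e^x and all function names assumed (not from the article); for that equation the basis is u_1 = e^x, u_2 = e^{-x} with W = -2.

```python
import math

# Sketch: generic variation of parameters for a second-order equation,
# demonstrated on the assumed example u'' - u = e^x.
def trapezoid(f, a, b, n=2000):
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n)))

def particular_solution(u1, u2, u1p, u2p, f, x, x0=0.0):
    """u_G(x) = A(x) u1(x) + B(x) u2(x) with A' = -u2 f / W, B' = u1 f / W."""
    W = lambda s: u1(s) * u2p(s) - u2(s) * u1p(s)   # Wronskian
    A = -trapezoid(lambda s: u2(s) * f(s) / W(s), x0, x)
    B = trapezoid(lambda s: u1(s) * f(s) / W(s), x0, x)
    return A * u1(x) + B * u2(x)

# u'' - u = e^x: basis u1 = e^x, u2 = e^{-x}, so W = -2.
u1, u2 = math.exp, lambda s: math.exp(-s)
u1p, u2p = math.exp, lambda s: -math.exp(-s)
f = math.exp

x = 1.1
got = particular_solution(u1, u2, u1p, u2p, f, x)
# With lower limit 0: A = x/2 and B = (1 - e^{2x})/4, giving
# u = (x/2) e^x - (1/4) e^x + (1/4) e^{-x}, which satisfies u'' - u = e^x.
exact = 0.5 * x * math.exp(x) - 0.25 * math.exp(x) + 0.25 * math.exp(-x)
print(abs(got - exact))  # ~0
```

The two trailing terms of the exact expression are homogeneous solutions, reflecting the arbitrary constants of integration noted above.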

Notes

  1. Euler, L. (1748) "Recherches sur la question des inégalités du mouvement de Saturne et de Jupiter, sujet proposé pour le prix de l'année 1748, par l'Académie Royale des Sciences de Paris" (Paris, France: G. Martin, J.B. Coignard, & H.L. Guerin, 1749).
  2. Euler, L. (1749) "Recherches sur la précession des équinoxes, et sur la nutation de l'axe de la terre," Histoire de l'Académie Royale des Sciences et Belles-lettres (Berlin), pages 289–325.
  3. Euler, L. (1753) Theoria motus lunae: exhibens omnes ejus inaequalitates ... (Saint Petersburg, Russia: Academia Imperialis Scientiarum Petropolitanae, 1753).
  4. Lagrange, J.-L. (1766) "Solution de différens problèmes du calcul integral," Mélanges de philosophie et de mathématique de la Société royale de Turin, vol. 3, pages 179–380.
  5. See:
    • Lagrange, J.-L. (1808) “Sur la théorie des variations des éléments des planètes et en particulier des variations des grands axes de leurs orbites,” Mémoires de la première Classe de l’Institut de France. Reprinted in: Joseph-Louis Lagrange with Joseph-Alfred Serret, ed., Oeuvres de Lagrange (Paris, France: Gauthier-Villars, 1873), vol. 6, pages 713–768.
    • Lagrange, J.-L. (1809) “Sur la théorie générale de la variation des constantes arbitraires dans tous les problèmes de la méchanique,” Mémoires de la première Classe de l’Institut de France. Reprinted in: Joseph-Louis Lagrange with Joseph-Alfred Serret, ed., Oeuvres de Lagrange (Paris, France: Gauthier-Villars, 1873), vol. 6, pages 771–805.
    • Lagrange, J.-L. (1810) “Second mémoire sur la théorie générale de la variation des constantes arbitraires dans tous les problèmes de la méchanique, ... ,” Mémoires de la première Classe de l’Institut de France. Reprinted in: Joseph-Louis Lagrange with Joseph-Alfred Serret, ed., Oeuvres de Lagrange (Paris, France: Gauthier-Villars, 1873), vol. 6, pages 809–816.
