Darwin–Fowler method

Method for deriving the distribution functions with mean probability

In statistical mechanics, the Darwin–Fowler method is used for deriving the distribution functions with mean probability. It was developed by Charles Galton Darwin and Ralph H. Fowler in 1922–1923.

Distribution functions are used in statistical physics to estimate the mean number of particles occupying an energy level (hence also called occupation numbers). These distributions are mostly derived as those numbers for which the system under consideration is in its state of maximum probability. But one really requires average numbers. These average numbers can be obtained by the Darwin–Fowler method. Of course, for systems in the thermodynamic limit (large number of particles), as in statistical mechanics, the results are the same as with maximization.

Darwin–Fowler method

In most texts on statistical mechanics the statistical distribution functions $f$ (in Maxwell–Boltzmann statistics, Bose–Einstein statistics, Fermi–Dirac statistics) are derived by determining those for which the system is in its state of maximum probability. But one really requires those with average or mean probability, although, of course, the results are usually the same for systems with a huge number of elements, as is the case in statistical mechanics. The method for deriving the distribution functions with mean probability was developed by C. G. Darwin and Fowler and is therefore known as the Darwin–Fowler method. This method is the most reliable general procedure for deriving statistical distribution functions. Since the method employs a selector variable (a factor introduced for each element to permit a counting procedure), it is also known as the Darwin–Fowler method of selector variables. Note that a distribution function is not the same as the probability; cf. the Maxwell–Boltzmann distribution, Bose–Einstein distribution, and Fermi–Dirac distribution. Also note that the distribution function $f_i$, which is a measure of the fraction of states actually occupied by elements, is given by $f_i = n_i/g_i$ or $n_i = f_i g_i$, where $g_i$ is the degeneracy of energy level $i$ of energy $\varepsilon_i$ and $n_i$ is the number of elements occupying this level (e.g. in Fermi–Dirac statistics 0 or 1 per state). Total energy $E$ and total number of elements $N$ are then given by $E = \sum_i n_i \varepsilon_i$ and $N = \sum_i n_i$.

The Darwin–Fowler method has been treated in the texts of E. Schrödinger, of Fowler, of Fowler and E. A. Guggenheim, of K. Huang, and of H. J. W. Müller-Kirsten. The method is also discussed and used for the derivation of Bose–Einstein condensation in the book of R. B. Dingle.

Classical statistics

For $N = \sum_i n_i$ independent elements, with $n_i$ elements on the level with energy $\varepsilon_i$ and $E = \sum_i n_i \varepsilon_i$, we set for a canonical system in a heat bath at temperature $T$

$$Z = \sum_{\text{arrangements}} e^{-E/kT} = \sum_{\text{arrangements}} \prod_i z_i^{n_i}, \qquad z_i = e^{-\varepsilon_i/kT}.$$

The average over all arrangements is the mean occupation number

$$(n_j)_{\text{av}} = \frac{\sum_{\text{arrangements}} n_j \prod_i z_i^{n_i}}{Z} = z_j \frac{\partial}{\partial z_j} \ln Z.$$

Insert a selector variable $\omega$ by setting

$$Z_\omega = \sum \prod_i (\omega z_i)^{n_i}.$$

In classical statistics the $N$ elements are (a) distinguishable and can be arranged in packets of $n_i$ elements on level $\varepsilon_i$, the number of such arrangements being

$$\frac{N!}{\prod_i n_i!},$$

so that in this case

$$Z_\omega = N! \sum_{n_i} \prod_i \frac{(\omega z_i)^{n_i}}{n_i!}.$$

Allowing for (b) the degeneracy $g_i$ of level $\varepsilon_i$, this expression becomes

$$Z_\omega = N! \prod_{i=1}^{\infty} \left( \sum_{n_i=0,1,2,\ldots} \frac{(\omega z_i)^{n_i}}{n_i!} \right)^{g_i} = N!\, e^{\omega \sum_i g_i z_i}.$$

The selector variable $\omega$ allows one to pick out the coefficient of $\omega^N$, which is $Z$. Thus

$$Z = \left( \sum_i g_i z_i \right)^N,$$

and hence

$$(n_j)_{\text{av}} = z_j \frac{\partial}{\partial z_j} \ln Z = N\, \frac{g_j e^{-\varepsilon_j/kT}}{\sum_i g_i e^{-\varepsilon_i/kT}}.$$

This result, which agrees with the most probable value obtained by maximization, involves not a single approximation and is therefore exact; this demonstrates the power of the Darwin–Fowler method.
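The exactness of the classical result can be checked by brute force on a small toy system. The following sketch (the energies, degeneracies, and temperature are arbitrary illustrative choices, with $kT = 1$) enumerates all arrangements of distinguishable elements over the individual states and compares the directly computed mean occupation numbers with the closed-form expression:

```python
import itertools
import math

N = 3                          # number of distinguishable elements
energies = [0.0, 1.0, 2.5]     # epsilon_i (arbitrary choices)
g = [1, 2, 1]                  # degeneracies g_i
kT = 1.0

# Each element independently occupies one of the sum(g) single-particle
# states; record only the level index of each state.
levels = [i for i, gi in enumerate(g) for _ in range(gi)]

Z = 0.0
mean_n = [0.0] * len(energies)
for arrangement in itertools.product(levels, repeat=N):
    weight = math.exp(-sum(energies[i] for i in arrangement) / kT)
    Z += weight
    for i in arrangement:
        mean_n[i] += weight
mean_n = [m / Z for m in mean_n]   # (n_i)_av by direct enumeration

# Closed-form result: N * g_j * exp(-eps_j/kT) / sum_i g_i * exp(-eps_i/kT)
denom = sum(gi * math.exp(-e / kT) for gi, e in zip(g, energies))
predicted = [N * gi * math.exp(-e / kT) / denom for gi, e in zip(g, energies)]
```

The agreement is to machine precision, not merely asymptotic in $N$, consistent with the observation that no approximation enters the classical derivation.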

Quantum statistics

We have as above

$$Z_\omega = \sum \prod_i (\omega z_i)^{n_i}, \qquad z_i = e^{-\varepsilon_i/kT},$$

where $n_i$ is the number of elements in energy level $\varepsilon_i$. Since in quantum statistics the elements are indistinguishable, no preliminary calculation of the number of ways of dividing the elements into packets $n_1, n_2, n_3, \ldots$ is required. Therefore the sum $\sum$ here refers only to the sum over the possible values of $n_i$.

In the case of Fermi–Dirac statistics we have

$n_i = 0$ or $n_i = 1$

per state. There are $g_i$ states for energy level $\varepsilon_i$. Hence we have

$$Z_\omega = (1 + \omega z_1)^{g_1} (1 + \omega z_2)^{g_2} \cdots = \prod_i (1 + \omega z_i)^{g_i}.$$

In the case of Bose–Einstein statistics we have

$$n_i = 0, 1, 2, 3, \ldots$$

By the same procedure as before we obtain in the present case

$$Z_\omega = \left(1 + \omega z_1 + (\omega z_1)^2 + (\omega z_1)^3 + \cdots\right)^{g_1} \left(1 + \omega z_2 + (\omega z_2)^2 + \cdots\right)^{g_2} \cdots.$$

But

$$1 + \omega z_1 + (\omega z_1)^2 + \cdots = \frac{1}{1 - \omega z_1}.$$

Therefore

$$Z_\omega = \prod_i (1 - \omega z_i)^{-g_i}.$$

Summarizing both cases and recalling the definition of $Z$, we have that $Z$ is the coefficient of $\omega^N$ in

$$Z_\omega = \prod_i (1 \pm \omega z_i)^{\pm g_i},$$

where the upper signs apply to Fermi–Dirac statistics, and the lower signs to Bose–Einstein statistics.
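For a small system, the coefficient of $\omega^N$ in either case can be extracted exactly by multiplying out $Z_\omega$ as a truncated polynomial in $\omega$. The sketch below (the helper names are illustrative) builds one factor per state and keeps only powers up to $\omega^N$:

```python
import math

def poly_mul(p, q, nmax):
    """Multiply two polynomials in omega (coefficient lists, lowest power
    first), discarding every power beyond omega**nmax."""
    r = [0.0] * (nmax + 1)
    for i, a in enumerate(p[:nmax + 1]):
        for j, b in enumerate(q[:nmax + 1 - i]):
            r[i + j] += a * b
    return r

def Z_coefficient(energies, g, N, kT, fermi):
    """Coefficient of omega**N in Z_omega, i.e. the partition function Z,
    for Fermi-Dirac (fermi=True) or Bose-Einstein (fermi=False) statistics."""
    poly = [1.0]
    for e, gi in zip(energies, g):
        zi = math.exp(-e / kT)
        if fermi:
            factor = [1.0, zi]                       # (1 + omega z_i) per state
        else:
            factor = [zi**n for n in range(N + 1)]   # geometric series, truncated
        for _ in range(gi):                          # one factor per state
            poly = poly_mul(poly, factor, N)
    return poly[N]
```

For example, two non-degenerate Fermi–Dirac levels with one element give $Z = z_1 + z_2$, and with two elements $Z = z_1 z_2$, as hand expansion of $(1+\omega z_1)(1+\omega z_2)$ confirms.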

Next we have to evaluate the coefficient of $\omega^N$ in $Z_\omega$. For a function $\phi(\omega)$ which can be expanded as

$$\phi(\omega) = a_0 + a_1 \omega + a_2 \omega^2 + \cdots,$$

the coefficient of $\omega^N$ is, with the help of Cauchy's residue theorem,

$$a_N = \frac{1}{2\pi i} \oint \frac{\phi(\omega)\, d\omega}{\omega^{N+1}}.$$
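This coefficient-extraction formula is easy to verify numerically: parametrizing the contour as a circle turns the integral into a periodic one, for which the trapezoidal rule converges very fast. A minimal sketch (function and parameter names are illustrative):

```python
import cmath
import math

def coefficient(phi, N, radius=1.0, samples=4096):
    """Approximate a_N = (1/2*pi*i) * contour integral of phi(w)/w**(N+1) dw
    over the circle |w| = radius, using the trapezoidal rule."""
    total = 0j
    for k in range(samples):
        w = radius * cmath.exp(2j * math.pi * k / samples)
        # dw = i*w*dtheta cancels one power of w, leaving phi(w)/w**N
        total += phi(w) / w**N
    return (total / samples).real

# Example: phi(w) = exp(w), whose Taylor coefficient a_N is 1/N!.
a5 = coefficient(cmath.exp, 5)
```

Any radius inside the domain of analyticity of $\phi$ gives the same answer, in line with the residue theorem.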

We note that, similarly, the coefficient $Z$ in the above can be obtained as

$$Z = \frac{1}{2\pi i} \oint \frac{Z_\omega}{\omega^{N+1}}\, d\omega \equiv \frac{1}{2\pi i} \oint e^{f(\omega)}\, d\omega,$$

where

$$f(\omega) = \pm \sum_i g_i \ln(1 \pm \omega z_i) - (N+1) \ln \omega.$$

Differentiating one obtains

$$f'(\omega) = \frac{1}{\omega} \left[ \sum_i \frac{g_i}{(\omega z_i)^{-1} \pm 1} - (N+1) \right],$$

and

$$f''(\omega) = \frac{N+1}{\omega^2} \mp \frac{1}{\omega^2} \sum_i \frac{g_i}{\left[(\omega z_i)^{-1} \pm 1\right]^2}.$$

One now evaluates the first and second derivatives of $f(\omega)$ at the stationary point $\omega_0$, at which $f'(\omega_0) = 0$. This method of evaluating $Z$ around the saddle point $\omega_0$ is known as the method of steepest descent. One then obtains

$$Z = \frac{e^{f(\omega_0)}}{\sqrt{2\pi f''(\omega_0)}}.$$
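The quality of the saddle-point formula can be illustrated on a case where the coefficient is known exactly, e.g. $\phi(\omega) = e^{c\omega}$ with coefficient $c^N/N!$; in this case the steepest-descent result reproduces Stirling's approximation for $(N+1)!$. A sketch, with $c$ and $N$ as arbitrary choices:

```python
import math

def saddle_point_coefficient(c, N):
    """Steepest-descent estimate of the coefficient of omega**N in
    phi(omega) = exp(c*omega), using f(w) = c*w - (N+1)*log(w)."""
    w0 = (N + 1) / c                        # stationary point: f'(w0) = 0
    f0 = c * w0 - (N + 1) * math.log(w0)    # f(w0)
    f2 = (N + 1) / w0**2                    # f''(w0)
    return math.exp(f0) / math.sqrt(2 * math.pi * f2)

N, c = 50, 1.0
exact = c**N / math.factorial(N)
approx = saddle_point_coefficient(c, N)
# The relative error is roughly 1/(12*(N+1)), the leading Stirling correction.
```

For $N = 50$ the relative error is already below one percent, and it shrinks further as $N$ grows, which is the regime relevant to statistical mechanics.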

We have $f'(\omega_0) = 0$ and hence

$$N + 1 = \sum_i \frac{g_i}{(\omega_0 z_i)^{-1} \pm 1}$$

(the $+1$ being negligible since $N$ is large). We shall see in a moment that this last relation is simply the formula

$$N = \sum_i n_i.$$

We obtain the mean occupation number $(n_j)_{\text{av}}$ by evaluating

$$(n_j)_{\text{av}} = z_j \frac{d}{dz_j} \ln Z = \frac{g_j}{(\omega_0 z_j)^{-1} \pm 1} = \frac{g_j}{e^{(\varepsilon_j - \mu)/kT} \pm 1}, \qquad e^{\mu/kT} = \omega_0.$$

This expression gives the mean number of elements, out of the total of $N$ in the volume $V$, which occupy at temperature $T$ the one-particle level $\varepsilon_j$ with degeneracy $g_j$ (see e.g. a priori probability). For this relation to be reliable one should check that the higher-order contributions are initially decreasing in magnitude, so that the expansion around the saddle point does indeed yield an asymptotic expansion.
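In practice $\omega_0$ is fixed by the particle-number constraint $\sum_i g_i/[(\omega_0 z_i)^{-1} \pm 1] = N$. A minimal numerical sketch for the Fermi–Dirac case (function names are illustrative; $N$ must be less than $\sum_i g_i$), solving the constraint by bisection and then reading off the occupation numbers:

```python
import math

def fermi_occupations(energies, g, N, kT):
    """Solve sum_i g_i / ((w * z_i)**-1 + 1) = N for the saddle point
    w = omega_0 by bisection on a log scale, then return the Fermi-Dirac
    mean occupation numbers.  Requires N < sum(g)."""
    z = [math.exp(-e / kT) for e in energies]

    def total(w):  # monotonically increasing in w, from 0 towards sum(g)
        return sum(gi / (1.0 / (w * zi) + 1.0) for gi, zi in zip(g, z))

    lo, hi = 1e-12, 1e12
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if total(mid) < N:
            lo = mid
        else:
            hi = mid
    w0 = math.sqrt(lo * hi)   # omega_0, i.e. exp(mu/kT)
    return [gi / (1.0 / (w0 * zi) + 1.0) for gi, zi in zip(g, z)]

# Three doubly degenerate levels, half filled (arbitrary illustrative values):
occ = fermi_occupations([0.0, 1.0, 2.0], [2, 2, 2], 3, 1.0)
```

The returned occupations sum to $N$ (recovering the constraint $N = \sum_i n_i$ noted above), decrease with energy, and never exceed the degeneracy $g_j$, as required for fermions.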

References

  1. "Darwin–Fowler method". Encyclopedia of Mathematics. Retrieved 2018-09-27.
  2. Darwin, C. G.; Fowler, R. H. (1922). "On the partition of energy". Phil. Mag. 44: 450–479, 823–842. doi:10.1080/14786440908565189.
  3. Schrödinger, E. (1952). Statistical Thermodynamics. Cambridge University Press.
  4. Fowler, R. H. (1952). Statistical Mechanics. Cambridge University Press.
  5. Fowler, R. H.; Guggenheim, E. (1960). Statistical Thermodynamics. Cambridge University Press.
  6. Huang, K. (1963). Statistical Mechanics. Wiley.
  7. Müller–Kirsten, H. J. W. (2013). Basics of Statistical Physics (2nd ed.). World Scientific. ISBN 978-981-4449-53-3.
  8. Dingle, R. B. (1973). Asymptotic Expansions: Their Derivation and Interpretation. Academic Press. pp. 267–271. ISBN 0-12-216550-0.
