Renormalization group
{{Short description|Method for using scale changes to understand physical theories such as quantum field theories}}
{{Renormalization and regularization}}

In ], the '''renormalization group''' ('''RG''') is a formal apparatus that allows systematic investigation of the changes of a physical system as viewed at different ]. In ], it reflects the changes in the underlying force laws (codified in a ]) as the energy scale at which physical processes occur varies, energy/momentum and resolution distance scales being effectively conjugate under the ].

A change in scale is called a ]. The renormalization group is intimately related to ''scale invariance'' and ''conformal invariance'', symmetries in which a system appears the same at all scales (]).{{efn|Note that ] are a strict subset of ], in general, the latter including additional symmetry generators associated with ]s.}}

As the scale varies, it is as if one is changing the magnifying power of a notional microscope viewing the system. In so-called renormalizable theories, the system at one scale will generally consist of self-similar copies of itself when viewed at a smaller scale, with different parameters describing the components of the system. The components, or fundamental variables, may relate to atoms, elementary particles, atomic spins, etc. The parameters of the theory typically describe the interactions of the components. These may be variable ] which measure the strength of various forces, or mass parameters themselves. The components themselves may appear to be composed of more of the self-same components as one goes to shorter distances.

For example, in ] (QED), an electron appears to be composed of electron and positron pairs and photons, as one views it at higher resolution, at very short distances. The electron at such short distances has a slightly different electric charge than does the ] seen at large distances, and this change, or ''running'', in the value of the electric charge is determined by the renormalization group equation.

==History==<!--'History of renormalization group theory' redirects here-->
The idea of scale transformations and scale invariance is old in physics: Scaling arguments were commonplace for the ], ], and up to ].<ref>{{cite web |url=http://www.av8n.com/physics/scaling.htm |title=Introduction to Scaling Laws |website=av8n.com}}</ref> They became popular again at the end of the 19th&nbsp;century, perhaps the first example being the idea of enhanced ] of ], as a way to explain turbulence.

The renormalization group was initially devised in particle physics, but nowadays its applications extend to ], ], ], and even ]. An early article<ref>{{cite journal |author1-link=Ernst Stueckelberg |last1=Stueckelberg |first1=E.C.G. |author2-link=André Petermann |first2=A. |last2=Petermann |year=1953 |url=https://www.e-periodica.ch/cntmng?pid=hpa-001:1953:26::894 |title=La renormalisation des constants dans la théorie de quanta |journal=Helv. Phys. Acta |volume=26 |pages=499–520 |language=FR}}</ref> by ] and ] in 1953 anticipates the idea in ]. Stueckelberg and Petermann opened the field conceptually. They noted that ] exhibits a ] of transformations which transfer quantities from the bare terms to the counter terms. They introduced a function ''h''(''e'') in ], which is now called the ] (see below).

===Beginnings===
] and ] restricted the idea to scale transformations in QED in 1954,<ref>{{cite journal |last=Gell-Mann |first=M. |author-link=Murray Gell-Mann |author2=Low, F. E. |author-link2=Francis E. Low |year=1954 |title=Quantum Electrodynamics at Small Distances |journal=Physical Review |volume=95 |issue=5 |pages=1300–1312 |doi=10.1103/PhysRev.95.1300 |bibcode=1954PhRv...95.1300G |url=https://authors.library.caltech.edu/60469/1/PhysRev.95.1300.pdf}}</ref> which are the most physically significant, and focused on asymptotic forms of the photon propagator at high energies. They determined the variation of the electromagnetic coupling in QED, by appreciating the simplicity of the scaling structure of that theory. They thus discovered that the coupling parameter ''g''(''μ'') at the energy scale ''μ'' is effectively given by the (one-dimensional translation) group equation
:<math>g(\mu)=G^{-1}\left(\left(\frac{\mu}{M}\right)^d G(g(M))\right)</math>
or equivalently, <math>G\left(g(\mu)\right)= G(g(M))\left({\mu}/{M}\right)^d</math>, for some function ''G'' (unspecified—nowadays called ]'s scaling function) and a constant ''d'', in terms of the coupling ''g(M)'' at a reference scale ''M''.

Gell-Mann and Low realized in these results that the effective scale can be arbitrarily taken as ''μ'', and can vary to define the theory at any other scale:
:<math>g(\kappa)=G^{-1}\left(\left(\frac{\kappa}{\mu}\right)^d G(g(\mu))\right) = G^{-1}\left(\left(\frac{\kappa}{M}\right)^d G(g(M))\right)</math>
The gist of the RG is this group property: as the scale ''μ'' varies, the theory presents a self-similar replica of itself, and any scale can be accessed similarly from any other scale, by group action, a formal transitive conjugacy of couplings<ref>{{cite journal |last1=Curtright |first1=T.L. |author-link1=Thomas Curtright |last2=Zachos |first2=C.K. |date=March 2011 |title=Renormalization Group Functional Equations |journal=Physical Review D |volume=83 |issue=6 |pages=065019 |doi=10.1103/PhysRevD.83.065019 |bibcode=2011PhRvD..83f5019C |arxiv=1010.5174|s2cid=119302913 }}</ref> in the mathematical sense (]).

On the basis of this (finite) group equation and its scaling property, Gell-Mann and Low could then focus on infinitesimal transformations, and invented a computational method based on a mathematical flow function {{math|''ψ''(''g'') {{=}} ''G'' ''d''/(∂''G''/∂''g'')}} of the coupling parameter ''g'', which they introduced. Like the function ''h''(''e'') of Stueckelberg and Petermann, their function determines the differential change of the coupling ''g''(''μ'') with respect to a small change in energy scale ''μ'' through a differential equation, the ''renormalization group equation'':
:<math> \displaystyle\frac{\partial g}{\partial \ln\mu} = \psi(g) = \beta(g) </math>
The modern name is also indicated, the ], introduced by ] and ] in 1970.<ref name=CS/> Since it is a mere function of ''g'', integration in ''g'' of a perturbative estimate of it permits specification of the renormalization trajectory of the coupling, that is, its variation with energy, effectively the function ''G'' in this perturbative approximation. The renormalization group prediction (cf. Stueckelberg–Petermann and Gell-Mann–Low works) was confirmed 40&nbsp;years later at the ] accelerator experiments: the ] of QED was measured <ref>{{Cite journal|last=Fritzsch|first=Harald|date=2002|title=Fundamental Constants at High Energy|journal=Fortschritte der Physik|volume=50|issue=5–7|pages=518–524|doi=10.1002/1521-3978(200205)50:5/7<518::AID-PROP518>3.0.CO;2-F|arxiv=hep-ph/0201198|bibcode=2002ForPh..50..518F |s2cid=18481179 }}</ref> to be about {{frac|1|127}} at energies close to 200&nbsp;GeV, as opposed to the standard low-energy physics value of {{frac|1|137}}&nbsp;.{{efn|Early applications to ] are discussed in the influential 1959 book ''The Theory of Quantized Fields'' by ] and ].<ref>{{cite book |author1-link=Nikolay Bogolyubov |first1=N.N. |last1=Bogoliubov |author2-link=Dmitry Shirkov |first2=D.V. |last2=Shirkov |year=1959 |title=The Theory of Quantized Fields |place=New York, NY |publisher=Interscience}}</ref>}}
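
This trend can be illustrated with a rough leading-log estimate. The sketch below sums the one-loop vacuum-polarization logarithms of the charged fermions lighter than the scale {{mvar|μ}}; the fermion masses used, and the treatment of quarks as free particles (a crude stand-in for the hadronic contribution), are illustrative assumptions rather than precision input.
<syntaxhighlight lang="python">
# Rough leading-log, one-loop estimate of the running QED coupling alpha(mu).
# Each charged fermion lighter than mu contributes (2/3pi)*Nc*Q^2*ln(mu/m) to
# the decrease of 1/alpha.  Masses (GeV) are approximate, and treating quarks
# as free particles is only a crude stand-in for the hadronic contribution.
import math

ALPHA_0 = 1 / 137.036   # fine-structure constant at low energies

FERMIONS = [  # (name, charge Q, colour factor Nc, mass in GeV)
    ("e", -1, 1, 0.000511), ("mu", -1, 1, 0.1057), ("tau", -1, 1, 1.777),
    ("u", 2/3, 3, 0.0022), ("d", -1/3, 3, 0.0047), ("s", -1/3, 3, 0.095),
    ("c", 2/3, 3, 1.27),   ("b", -1/3, 3, 4.18),   ("t", 2/3, 3, 173.0),
]

def inverse_alpha(mu):
    delta = sum(2 / (3 * math.pi) * nc * q**2 * math.log(mu / m)
                for _, q, nc, m in FERMIONS if m < mu)
    return 1 / ALPHA_0 - delta

for mu in (1.0, 91.2, 200.0):   # GeV
    print(f"mu = {mu:6.1f} GeV   1/alpha ~ {inverse_alpha(mu):6.1f}")
# Output trend: 1/alpha falls from ~137 at low energies into the mid-120s near
# 200 GeV -- the same qualitative running as the measured value of about 1/127.
</syntaxhighlight>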

=== Deeper understanding ===
The renormalization group emerges from the ] of the quantum field variables, which normally has to address the problem of infinities in a quantum field theory.{{efn|Although note that the RG exists independently of the infinities.}} This problem of systematically handling the infinities of quantum field theory to obtain finite physical quantities was solved for QED by ], ] and ], who received the 1965 Nobel prize for these contributions. They effectively devised the theory of mass and charge renormalization, in which the infinity in the momentum scale is ] by an ultra-large ], Λ.{{efn|The regulator parameter Λ could ultimately be taken to be infinite – infinities reflect the pileup of contributions from an infinity of degrees of freedom at infinitely high energy scales.}}

The dependence of physical quantities, such as the electric charge or electron mass, on the scale Λ is hidden, effectively swapped for the longer-distance scales at which the physical quantities are measured, and, as a result, all observable quantities end up being finite instead, even for an infinite Λ. Gell-Mann and Low thus realized in these results that, infinitesimally, while a tiny change in '' g'' is provided by the above RG equation given ψ(''g''), the self-similarity is expressed by the fact that ψ(''g'') depends explicitly only upon the parameter(s) of the theory, and not upon the scale ''μ''. Consequently, the above renormalization group equation may be solved for (''G'' and thus) ''g''(''μ'').

A deeper understanding of the physical meaning and generalization of the renormalization process, which goes beyond the dilation group of conventional ''renormalizable'' theories, considers methods where widely different scales of lengths appear simultaneously. It came from ]: ]'s paper in 1966 proposed the "block-spin" renormalization group.<ref name="Kadanoff">{{cite journal |author-link=Leo P. Kadanoff |first=Leo P. |last=Kadanoff |year=1966 |title=Scaling laws for Ising models near <math>T_c</math> |journal=Physics Physique Fizika |volume=2 |issue=6 |page=263|doi=10.1103/PhysicsPhysiqueFizika.2.263 |doi-access=free }}</ref> The "blocking idea" is a way to define the components of the theory at large distances as aggregates of components at shorter distances.

This approach covered the conceptual point and was given full computational substance in the extensive important contributions of ]. The power of Wilson's ideas was demonstrated by a constructive iterative renormalization solution of a long-standing problem, the ], in 1975,<ref>{{cite journal |author-link=Kenneth G. Wilson |first=K.G. |last=Wilson |year=1975 |title=The renormalization group: Critical phenomena and the Kondo problem |journal=Rev. Mod. Phys. |volume=47 |issue=4 |page=773|doi=10.1103/RevModPhys.47.773 |bibcode=1975RvMP...47..773W }}</ref> as well as the preceding seminal developments of his new method in the theory of second-order phase transitions and ] in 1971.<ref>{{Cite journal |last=Wilson |first=K.G. |author-link=Kenneth G. Wilson |title=Renormalization group and critical phenomena. I. Renormalization group and the Kadanoff scaling picture |doi=10.1103/PhysRevB.4.3174 |journal=Physical Review B |volume=4 |issue=9 |pages=3174–3183 |year=1971 |bibcode=1971PhRvB...4.3174W|doi-access=free }}</ref><ref>{{Cite journal |last=Wilson |first=K. |author-link=Kenneth G. Wilson |title=Renormalization group and critical phenomena. II. Phase-space cell analysis of critical behavior |doi=10.1103/PhysRevB.4.3184 |journal=Physical Review B |volume=4 |issue=9 |pages=3184–3205 |year=1971 |bibcode=1971PhRvB...4.3184W|doi-access=free }}</ref><ref>{{cite journal |last1=Wilson |first1=K.G. |author1-link=Kenneth G. Wilson |last2=Fisher |first2=M. |year=1972 |title=Critical exponents in 3.99 dimensions |journal=Physical Review Letters |volume=28 |issue=4 |page=240 |doi=10.1103/physrevlett.28.240 |bibcode=1972PhRvL..28..240W }}</ref> He was awarded the Nobel prize for these decisive contributions in 1982.<ref>{{cite web |url=https://www.nobelprize.org/uploads/2018/06/wilson-lecture-2.pdf |title=Wilson's Nobel Prize address |website=NobelPrize.org |first=Kenneth G. |last=Wilson |author-link=Kenneth G. Wilson}}</ref>

===Reformulation===
Meanwhile, the RG in particle physics had been reformulated in more practical terms by Callan and Symanzik in 1970.<ref name=CS>{{cite journal |last=Callan |first=C.G. |year=1970 |title=Broken scale invariance in scalar field theory |doi=10.1103/PhysRevD.2.1541 |journal=Physical Review D |volume=2 |issue=8 |pages=1541–1547 |bibcode=1970PhRvD...2.1541C}}</ref><ref>{{cite journal |last=Symanzik |first=K. |year=1970 |title=Small distance behaviour in field theory and power counting |doi=10.1007/BF01649434 |journal=Communications in Mathematical Physics |volume=18 |issue=3 |pages=227–246 |bibcode=1970CMaPh..18..227S|s2cid=76654566 |url=http://projecteuclid.org/euclid.cmp/1103842537 }}</ref> The above beta function, which describes the "running of the coupling" parameter with scale, was also found to amount to the "canonical trace anomaly", which represents the quantum-mechanical breaking of scale (dilation) symmetry in a field theory.{{efn|Remarkably, the trace anomaly and the running coupling quantum mechanical procedures can themselves induce mass.}} Applications of the RG to particle physics exploded in number in the 1970s with the establishment of the ].

In 1973,<ref>{{cite journal |first1=D.J. |last1=Gross |first2=F. |last2=Wilczek |year=1973 |title=Ultraviolet behavior of non-Abelian gauge theories |journal=] |volume=30 |issue=26 |pages=1343–1346 |doi=10.1103/PhysRevLett.30.1343 |doi-access=free |bibcode=1973PhRvL..30.1343G}}</ref><ref>{{cite journal |first=H.D. |last=Politzer |year=1973 |title=Reliable perturbative results for strong interactions |journal=] |volume=30 |issue=26 |pages=1346–1349 |bibcode=1973PhRvL..30.1346P |doi=10.1103/PhysRevLett.30.1346 |doi-access=free}}</ref> it was discovered that a theory of interacting colored quarks, called ], had a negative beta function. This means that an initial high-energy value of the coupling will eventuate a special value of {{mvar|μ}} at which the coupling blows up (diverges). This special value is the ], ] and occurs at about 200&nbsp;MeV. Conversely, the coupling becomes weak at very high energies (]), and the quarks become observable as point-like particles, in ], as anticipated by Feynman–Bjorken scaling. QCD was thereby established as the quantum field theory controlling the strong interactions of particles.
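
At one loop the statement can be made quantitative with the familiar closed form for the running strong coupling; the sketch below assumes a fixed number of light flavours and a QCD scale of roughly 200&nbsp;MeV, both simplifications chosen only for illustration.
<syntaxhighlight lang="python">
# One-loop running of the strong coupling,
#   alpha_s(mu) = 12*pi / ((33 - 2*n_f) * ln(mu^2 / Lambda^2)).
# A fixed n_f = 3 and Lambda = 0.2 GeV are simplifying assumptions; realistic
# fits match flavour thresholds and include higher loop orders.
import math

LAMBDA_QCD = 0.2   # GeV: the scale where the one-loop coupling diverges
N_F = 3            # active quark flavours, held fixed for simplicity

def alpha_s(mu):
    return 12 * math.pi / ((33 - 2 * N_F) * math.log(mu**2 / LAMBDA_QCD**2))

for mu in (0.25, 0.5, 1.0, 10.0, 100.0, 1000.0):   # GeV
    print(f"mu = {mu:7.2f} GeV   alpha_s ~ {alpha_s(mu):6.3f}")
# alpha_s grows without bound as mu approaches Lambda_QCD (the Landau pole /
# confinement scale) and shrinks logarithmically at high energies
# (asymptotic freedom).
</syntaxhighlight>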

Momentum space RG also became a highly developed tool in solid state physics, but was hindered by the extensive use of perturbation theory, which prevented the theory from succeeding in strongly correlated systems.{{efn|For strongly correlated systems, ] techniques are a better alternative.}}

=== Conformal symmetry ===
Conformal symmetry is associated with the vanishing of the beta function. This can occur naturally if a coupling constant is attracted, by running, toward a ''fixed point'' at which ''β''(''g'') = 0. In QCD, the fixed point occurs at short distances where ''g'' → 0 and is called a (]) ]. For heavy quarks, such as the ], the coupling to the mass-giving ] runs toward a fixed non-zero (non-trivial) ], first predicted by Pendleton and Ross (1981),<ref>{{cite journal |first1=Brian |last1=Pendleton |first2=Graham |last2=Ross |title=Mass and mixing angle predictions from infrared fixed points |journal=Physics Letters B |volume=98 |issue=4 |year=1981 |pages=291–294 |doi=10.1016/0370-2693(81)90017-4
|bibcode=1981PhLB...98..291P }}</ref> and ].<ref>{{cite journal |first=Christopher T. |last=Hill |author-link=C. T. Hill |title=Quark and lepton masses from renormalization group fixed points |journal=Physical Review D |volume=24 |issue=3 |year=1981 |pages=691–703 |doi=10.1103/PhysRevD.24.691|bibcode=1981PhRvD..24..691H }}</ref>
The top quark Yukawa coupling lies slightly below the infrared fixed point of the Standard Model, suggesting the possibility of additional new physics, such as sequential heavy Higgs bosons.{{citation needed|date=December 2022}}

In ], conformal invariance of the string world-sheet is a fundamental symmetry: ''β'' = 0 is a requirement. Here, ''β'' is a function of the geometry of the space-time in which the string moves. This determines the space-time dimensionality of the string theory and enforces Einstein's equations of ] on the geometry. The RG is of fundamental importance to string theory and theories of ].

It is also the modern key idea underlying ] in condensed matter physics.<ref>{{Cite journal |last=Shankar |first=R. |doi=10.1103/RevModPhys.66.129 |title=Renormalization-group approach to interacting fermions |journal=Reviews of Modern Physics |volume=66 |issue=1 |pages=129–192 |year=1994 |arxiv=cond-mat/9307009 |bibcode=1994RvMP...66..129S}} (For nonsubscribers see {{cite journal |title= Renormalization-group approach to interacting fermions|arxiv = cond-mat/9307009|doi = 10.1103/RevModPhys.66.129|last = Shankar|first = R. |journal = Reviews of Modern Physics|year = 1993|volume = 66|issue = 1|pages = 129–192|bibcode = 1994RvMP...66..129S}}.)</ref> Indeed, the RG has become one of the most important tools of modern physics.<ref>{{cite journal |first1=L.Ts. |last1=Adzhemyan |first2=T.L. |last2=Kim |first3=M.V. |last3=Kompaniets |first4=V.K. |last4=Sazonov |title=Renormalization group in the infinite-dimensional turbulence: determination of the RG-functions without renormalization constants |journal=Nanosystems: Physics, Chemistry, Mathematics |date=August 2015 |volume=6 |issue=4 |page=461|doi=10.17586/2220-8054-2015-6-4-461-469 |doi-access=free }}</ref> It is often used in combination with the ].<ref name="CallawayPetronzio1984">{{cite journal |last1=Callaway |first1=David J.E. |last2=Petronzio |first2=Roberto |title=Determination of critical points and flow diagrams by Monte Carlo renormalization group methods |journal=Physics Letters B |volume=139 |issue=3 |year=1984 |pages=189–194 |issn=0370-2693 |doi=10.1016/0370-2693(84)91242-5 |bibcode=1984PhLB..139..189C |url=https://cds.cern.ch/record/149868}}</ref>

==Block spin==
This section introduces pedagogically a picture of RG which may be easiest to grasp: the block spin RG, devised by ] in 1966.<ref name=Kadanoff/>


Consider a 2D solid, a set of atoms in a perfect square array, as depicted in the figure.

]

Assume that atoms interact among themselves only with their nearest neighbours, and that the system is at a given temperature {{mvar|T}}. The strength of their interaction is quantified by a certain ] {{mvar|J}}. The physics of the system will be described by a certain formula, say the Hamiltonian {{math|''H''(''T'', ''J'')}}.

Now proceed to divide the solid into '''blocks''' of 2×2&nbsp;squares; we attempt to describe the system in terms of '''block variables''', i.e., variables which describe the average behavior of the block. Further assume that, by some lucky coincidence, the physics of block variables is described by a ''formula of the same kind'', but with '''different''' values for {{mvar|T}} and {{mvar|J}} : {{math|''H''(''{{prime|T}}'', ''{{prime|J}}'')}}. (This isn't exactly true, in general, but it is often a good first approximation.)
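
A minimal sketch of the blocking step itself is given below, using the common ''majority rule'' to define the block variable (one of several possible choices); it performs only the coarse-graining, not the much harder extraction of the renormalized couplings {{math|''{{prime|T}}'', ''{{prime|J}}''}}.
<syntaxhighlight lang="python">
# One block-spin step on an Ising-like configuration: each 2x2 block of +/-1
# spins is replaced by a single block variable using the majority rule (ties
# broken at random).  Only the coarse-graining step is shown; extracting the
# renormalized couplings T', J' is the hard part of a real RG calculation.
import numpy as np

rng = np.random.default_rng(0)

def block_spin(spins):
    """Map an (L, L) array of +/-1 spins to an (L/2, L/2) array of block spins."""
    L = spins.shape[0]
    blocks = spins.reshape(L // 2, 2, L // 2, 2).sum(axis=(1, 3))  # 2x2 block sums
    ties = blocks == 0
    blocks[ties] = rng.choice([-2, 2], size=int(ties.sum()))       # break ties randomly
    return np.sign(blocks).astype(int)

spins = rng.choice([-1, 1], size=(16, 16))   # a random (high-temperature) configuration
for step in range(3):                        # 16x16 -> 8x8 -> 4x4 -> 2x2
    spins = block_spin(spins)
    print(f"after step {step + 1}: lattice {spins.shape}, magnetization {spins.mean():+.2f}")
</syntaxhighlight>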

Perhaps, the initial problem was too hard to solve, since there were too many atoms. Now, in the '''renormalized''' problem we have only one fourth of them. But why stop now? Another iteration of the same kind leads to {{math|''H''(''T"'',''J"'')}}, and only one sixteenth of the atoms. We are increasing the '''observation scale''' with each RG step.

Of course, the best idea is to iterate until there is only one very big block. Since the number of atoms in any real sample of material is very large, this is more or less equivalent to finding the ''long range'' behaviour of the RG transformation which took {{math|(''T'',''J'') → (''{{prime|T}}'',''{{prime|J}}'')}} and {{math|(''{{prime|T}}'', ''{{prime|J}}'') → (''T"'', ''J"'')}}. Often, when iterated many times, this RG transformation leads to a certain number of '''fixed points'''.

To be more concrete, consider a ] system (e.g., the ]), in which the {{mvar|J}} coupling denotes the trend of neighbour ]s to be aligned. The configuration of the system is the result of the tradeoff between the ordering {{mvar|J}} term and the disordering effect of temperature.

For many models of this kind there are three fixed points:
# {{math|1=''T'' = 0}} and {{math|''J'' → ∞}}. This means that, at the largest size, temperature becomes unimportant, i.e., the disordering factor vanishes. Thus, in large scales, the system appears to be ordered. We are in a ] phase.
# {{math|''T'' → ∞}} and {{math|''J'' → 0}}. Exactly the opposite; here, temperature dominates, and the system is disordered at large scales.
# A nontrivial point between them, {{math|1=''T'' = ''T''<sub>''c''</sub>}} and {{math|1=''J'' = ''J''<sub>''c''</sub>}}. In this point, changing the scale does not change the physics, because the system is in a ] state. It corresponds to the ] ], and is also called a ].

So, if we are given a certain material with given values of {{mvar|T}} and {{mvar|J}}, all we have to do in order to find out the large-scale behaviour of the system is to iterate the pair until we find the corresponding fixed point.
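
As a concrete, if approximate, illustration of this iteration, the sketch below uses a Migdal–Kadanoff-style recursion for the 2D Ising coupling {{math|1=''K'' = ''J''/''k''<sub>B</sub>''T''}}, namely {{math|1=''{{prime|K}}'' = {{frac|1|2}} ln cosh 4''K''}}; the particular recursion and its limited accuracy are assumptions of the example. It locates the nontrivial fixed point and shows couplings on either side of it flowing to the trivial ordered and disordered fixed points.
<syntaxhighlight lang="python">
# Iterating an approximate RG recursion for the 2D Ising coupling K = J/(kB*T).
# The map K' = (1/2)*ln(cosh(4K)) is one variant of the Migdal-Kadanoff
# bond-moving approximation (an assumption of this example); its nontrivial
# fixed point only roughly approximates the exact critical coupling ~0.4407.
import math

def rg_step(K):
    return 0.5 * math.log(math.cosh(4.0 * K))

# Locate the nontrivial fixed point K* = rg_step(K*) by bisection.
lo, hi = 0.1, 1.0            # K - rg_step(K) changes sign on this interval
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if mid - rg_step(mid) > 0:
        lo = mid             # mid is below the fixed point
    else:
        hi = mid
K_star = 0.5 * (lo + hi)
print(f"nontrivial fixed point: K* ~ {K_star:.4f}")

# Couplings below K* flow to 0 (disordered phase), above K* to infinity (ordered).
for K0 in (0.9 * K_star, 1.1 * K_star):
    K = K0
    for _ in range(20):
        K = rg_step(K)
        if K > 20:           # heading to the strong-coupling fixed point
            break
    print(f"K0 = {K0:.3f}  ->  K ~ {K:.3g} after iterating")
</syntaxhighlight>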

==Elementary theory<!--'RG flow' redirects here-->==
In more technical terms, let us assume that we have a theory described by a certain function <math>Z</math> of the ] <math>\{s_i\}</math> and a certain set of coupling constants <math>\{J_k\}</math>. This function may be a ], an ], a ], etc. It must contain the whole description of the physics of the system.


Now we consider a certain blocking transformation of the state variables <math>\{s_i\}\to \{\tilde s_i\}</math>, the number of <math>\tilde s_i</math> must be lower than the number of <math>s_i</math>. Now let us try to rewrite the <math>Z</math> function ''only'' in terms of the <math>\tilde s_i</math>. If this is achievable by a certain change in the parameters, <math>\{J_k\}\to \{\tilde J_k\}</math>, then the theory is said to be '''renormalizable'''.

Most fundamental theories of physics such as ], ] and ] interaction, but not gravity, are exactly renormalizable. Also, most theories in condensed matter physics are approximately renormalizable, from ] to fluid turbulence.

The change in the parameters is implemented by a certain beta function: <math>\{\tilde J_k\}=\beta(\{ J_k \})</math>, which is said to induce a '''renormalization group flow''' (or '''RG flow''') on the <math>J</math>-space. The values of <math>J</math> under the flow are called '''running couplings'''.

As was stated in the previous section, the most important information in the RG flow are its '''fixed points'''. The possible macroscopic states of the system, at a large scale, are given by this set of fixed points. If these fixed points correspond to a free field theory, the theory is said to exhibit ], possessing what is called a ], as in quantum electrodynamics. For a {{mvar|φ}}<sup>4</sup> interaction, ] proved that this theory is indeed trivial, for space-time dimension {{mvar|D}} ≥ 5.<ref name="Aiz81">{{cite journal |last=Aizenman |first=M. |author-link=Michael Aizenman |year=1981 |title=Proof of the triviality of ''Φ{{su|b=d|p=4}}'' field theory and some mean-field features of Ising models for ''d'' > 4 |journal=] |volume=47 |issue=1 |pages=1–4 |doi=10.1103/PhysRevLett.47.1 |bibcode=1981PhRvL..47....1A }}</ref> For {{mvar|D}} = 4, the triviality has yet to be proven rigorously, but ] have provided strong evidence for this. This fact is important as ] can be used to bound or even ''predict'' parameters such as the ] mass in ] scenarios. Numerous fixed points appear in the study of ], but the nature of the quantum field theories associated with these remains an open question.<ref name="TrivPurs">{{cite journal |first=David J.E. |last=Callaway |year=1988 |title=Triviality Pursuit: Can elementary scalar particles exist? |journal=] |volume=167 |issue=5 |pages=241–320 |doi=10.1016/0370-1573(88)90008-7 |bibcode=1988PhR...167..241C |author-link=David J E Callaway}}</ref>

Since the RG transformations in such systems are '''lossy''' (i.e.: the number of variables decreases - see as an example in a different context, ]), there need not be an inverse for a given RG transformation. Thus, in such lossy systems, the renormalization group is, in fact, a ], as lossiness implies that there is no unique inverse for each element.

==Relevant and irrelevant operators and universality classes==
<!-- This section is linked from ] -->
{{See also|Universality class| Dangerously irrelevant operator|Phase transition#Critical exponents and universality classes|Scale invariance#CFT description}}


Consider a certain observable {{mvar|A}} of a physical system undergoing an RG transformation. The magnitude of the observable as the length scale of the system goes from small to large determines the importance of the observable(s) for the scaling law:
{| style="margin-left:3em;"
| <small>'''''If its magnitude''''' ... </small>
| <small>'''''then the observable is''''' ...</small>
|-
| always increases
| '''relevant'''
|-
| always decreases
| '''irrelevant'''
|-
| other
| '''marginal'''
|}
A ''relevant'' observable is needed to describe the macroscopic behaviour of the system; ''irrelevant'' observables are not needed. ''Marginal'' observables may or may not need to be taken into account. A remarkable broad fact is that ''most observables are irrelevant'', i.e., ''the macroscopic physics is dominated by only a few observables in most systems''.
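
Operationally, the classification is read off from the RG flow linearized about a fixed point: eigenvalues of the linearized map larger than one in magnitude mark relevant directions, smaller than one irrelevant directions, and unit magnitude marginal ones. The sketch below applies this recipe to a deliberately made-up two-coupling toy map, not derived from any physical model.
<syntaxhighlight lang="python">
# Classifying directions at an RG fixed point by linearizing the flow.  The
# two-coupling map below is a made-up toy (not derived from any physical
# model); the recipe is what matters: eigenvalues of the Jacobian at the fixed
# point with |lambda| > 1 are relevant, |lambda| < 1 irrelevant, = 1 marginal.
import numpy as np

def rg_map(g):
    g1, g2 = g
    return np.array([2.0 * g1 + g1 * g2,    # fixed point at (0, 0) by construction
                     0.5 * g2 + g1**2])

g_star = np.array([0.0, 0.0])

# Numerical Jacobian of the map at the fixed point (central differences).
eps = 1e-6
jac = np.column_stack([(rg_map(g_star + eps * e) - rg_map(g_star - eps * e)) / (2 * eps)
                       for e in np.eye(2)])

eigvals, eigvecs = np.linalg.eig(jac)
for lam, vec in zip(eigvals, eigvecs.T):     # eigenvectors are the columns
    kind = "relevant" if abs(lam) > 1 else ("irrelevant" if abs(lam) < 1 else "marginal")
    print(f"eigenvalue {lam:+.3f}: {kind}, direction {np.round(vec, 3)}")
</syntaxhighlight>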

As an example, in microscopic physics, to describe a system consisting of a ] of carbon-12 atoms we need of the order of 10{{sup|23}} (the ]) variables, while to describe it as a macroscopic system (12&nbsp;grams of carbon-12) we only need a few.

Before Wilson's RG approach, there was an astonishing empirical fact to explain: The coincidence of the ] (i.e., the exponents of the reduced-temperature dependence of several quantities near a ]) in very disparate phenomena, such as magnetic systems, superfluid transition (]), alloy physics, etc. So in general, thermodynamic features of a system near a phase transition ''depend only on a small number of variables'', such as the dimensionality and symmetry, but are insensitive to details of the underlying microscopic properties of the system.

This coincidence of critical exponents for ostensibly quite different physical systems, called ], is easily explained using the renormalization group, by demonstrating that the differences in phenomena among the individual fine-scale components are determined by ''irrelevant observables'', while the ''relevant observables'' are shared in common. Hence many macroscopic phenomena may be grouped into a small set of ''']es''', specified by the shared sets of relevant observables.{{efn|A superb technical exposition by ] (2010) is the classic article {{cite journal |title=Critical Phenomena: Field theoretical approach |journal=Scholarpedia |year=2010 |doi=10.4249/scholarpedia.8346 |last1=Zinn-Justin |first1=Jean |volume=5 |issue=5 |pages=8346 |bibcode=2010SchpJ...5.8346Z |doi-access=free}}. For example, for Ising-like systems with a <math>\mathbb{Z}_2</math> symmetry or, more generally, for models with an O(N) symmetry, the Gaussian (free) fixed point is long-distance stable above space dimension four, marginally stable in dimension four, and unstable below dimension four. See ].}}

==Momentum space==
Renormalization groups, in practice, come in two main "flavors". The Kadanoff picture explained above refers mainly to the so-called '''real-space RG'''.

'''Momentum-space RG''' on the other hand, has a longer history despite its relative subtlety. It can be used for systems where the degrees of freedom can be cast in terms of the ] of a given field. The RG transformation proceeds by ''integrating out'' a certain set of high-momentum (large-wavenumber) modes. Since large wavenumbers are related to short-length scales, the momentum-space RG results in an essentially analogous coarse-graining effect as with real-space RG.
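
The coarse-graining content of such a step can be seen directly by low-pass filtering a field: discard its Fourier modes above the reduced cutoff and transform back. The sketch below does exactly this for a random one-dimensional field; it shows only the mode elimination, not the accompanying shift of couplings that an interacting theory would generate.
<syntaxhighlight lang="python">
# The coarse-graining content of a momentum-space RG step: discard the Fourier
# modes of a 1D field above the reduced cutoff and transform back.  For an
# interacting theory this mode elimination would also shift the couplings;
# only the smoothing effect is shown here.
import numpy as np

rng = np.random.default_rng(1)
field = rng.normal(size=256)              # a noisy field sampled on 256 sites

def eliminate_high_momenta(phi, b=2):
    """Zero the Fourier modes with wavenumber above k_max/b and transform back."""
    phi_k = np.fft.rfft(phi)
    phi_k[len(phi_k) // b:] = 0.0         # integrate out the high-momentum shell
    return np.fft.irfft(phi_k, n=len(phi))

coarse = eliminate_high_momenta(field)
print(f"variance before: {field.var():.3f}   after: {coarse.var():.3f}")
# Short-wavelength fluctuations are removed; long-wavelength structure survives,
# the momentum-space analogue of averaging over Kadanoff blocks.
</syntaxhighlight>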

Momentum-space RG is usually performed on a ] expansion. The validity of such an expansion is predicated upon the actual physics of a system being close to that of a ] system. In this case, one may calculate observables by summing the leading terms in the expansion.
This approach has proved successful for many theories, including most of particle physics, but fails for systems whose physics is very far from any free system, i.e., systems with strong correlations.

As an example of the physical meaning of RG in particle physics, consider an overview of ''charge renormalization'' in ] (QED). Suppose we have a point positive charge of a certain true (or '''bare''') magnitude. The electromagnetic field around it has a certain energy, and thus may produce some virtual electron-positron pairs (for example). Although virtual particles annihilate very quickly, during their short lives the electron will be attracted by the charge, and the positron will be repelled. Since this happens uniformly everywhere near the point charge, where its electric field is sufficiently strong, these pairs effectively create a screen around the charge when viewed from far away. The measured strength of the charge will depend on how close our measuring probe can approach the point charge, bypassing more of the screen of virtual particles the closer it gets. Hence a ''dependence of a certain coupling constant (here, the electric charge) with distance scale''.

Momentum and length scales are related inversely, according to the ]: The higher the energy or momentum scale we may reach, the lower the length scale we may probe and resolve. Therefore, the momentum-space RG practitioners sometimes claim to ''integrate out'' high momenta or high energy from their theories.

==Exact renormalization group equations==
An '''exact renormalization group equation''' ('''ERGE''') is one that takes ] couplings into account. There are several formulations.

The '''Wilson ERGE''' is the simplest conceptually, but is practically impossible to implement. ] into ] after ] into ]. Insist upon a hard momentum ], {{math|''p''<sup>2</sup> ≤ Λ<sup>2</sup>}} so that the only degrees of freedom are those with momenta less than {{mvar|Λ}}. The ] is
:<math>Z=\int_{p^2\leq \Lambda^2} \mathcal{D}\varphi \exp\left[-S_\Lambda[\varphi]\right].</math>

For any positive Λ&prime; less than Λ, define ''S''<sub>Λ&prime;</sub> (a functional over field configurations {{mvar|φ}} whose Fourier transform has momentum support within {{math|''p''<sup>2</sup> ≤ Λ&prime;<sup>2</sup>}}) as
:<math>\exp\left(-S_{\Lambda'}[\varphi]\right)\ \stackrel{\mathrm{def}}{=}\ \int_{\Lambda' \leq p \leq \Lambda} \mathcal{D}\varphi \exp\left[-S_\Lambda[\varphi]\right].</math>

If {{math|''S''<sub>Λ</sub>}} depends only on {{Mvar|&varphi;}} and not on derivatives of {{Mvar|&varphi;}}, this may be rewritten as

<math display=block>\exp\left(-S_{\Lambda'}[\varphi]\right)\ \stackrel{\mathrm{def}}{=}\ \prod_{\Lambda' \leq p \leq \Lambda}\int d\varphi(p)\; \exp\left[-S_\Lambda[\varphi]\right],</math>

in which it becomes clear that, since only functions ''&phi;'' with support between {{mvar|Λ'}} and {{mvar|Λ}} are integrated over, the left hand side may still depend on {{Math|''&varphi;''}} with support outside that range. Obviously,
:<math>Z=\int_{p^2\leq {\Lambda'}^2}\mathcal{D}\varphi \exp\left[-S_{\Lambda'}[\varphi]\right].</math>

In fact, this transformation is ]. If you compute {{math|''S''<sub>{{prime|Λ}}</sub>}} from {{math|''S''<sub>Λ</sub>}} and then compute {{math|''S''<sub>{{prime|Λ}}{{prime}}</sub>}} from {{math|''S''<sub>{{prime|Λ}}</sub>}}, this gives the same Wilsonian action as computing {{math|''S''<sub>{{prime|Λ}}{{prime}}</sub>}} directly from {{math|''S''<sub>Λ</sub>}}.

The '''Polchinski ERGE''' involves a ] UV ] ]. Basically, the idea is an improvement over the Wilson ERGE. Instead of a sharp momentum cutoff, it uses a smooth cutoff. Essentially, we suppress contributions from momenta greater than {{mvar|Λ}} heavily. The smoothness of the cutoff, however, allows us to derive a functional ] in the cutoff scale {{mvar|Λ}}. As in Wilson's approach, we have a different action functional for each cutoff energy scale {{mvar|Λ}}. Each of these actions are supposed to describe exactly the same model which means that their ]s have to match exactly.

In other words, (for a real scalar field; generalizations to other fields are obvious),
:<math>Z_\Lambda=\int \mathcal{D}\varphi \exp\left(-S_\Lambda+J\cdot \varphi\right)=\int \mathcal{D}\varphi \exp\left(-\tfrac{1}{2}\varphi\cdot R_\Lambda \cdot \varphi-S_{\operatorname{int}\Lambda}+J\cdot\varphi\right)</math>

and ''Z''<sub>Λ</sub> is really independent of {{mvar|Λ}}! We have used the condensed ] here. We have also split the bare action ''S''<sub>Λ</sub> into a quadratic kinetic part and an interacting part ''S''<sub>int Λ</sub>. This split most certainly isn't clean. The "interacting" part can very well also contain quadratic kinetic terms. In fact, if there is any ], it most certainly will. This can be somewhat reduced by introducing field rescalings. R<sub>Λ</sub> is a function of the momentum p and the second term in the exponent is
:<math>\frac{1}{2}\int \frac{d^dp}{(2\pi)^d}\tilde{\varphi}^*(p)R_\Lambda(p)\tilde{\varphi}(p)</math>
when expanded.

When <math>p \ll \Lambda</math>, {{math|''R''<sub>Λ</sub>(''p'')/''p''<sup>2</sup>}} is essentially 1. When <math>p \gg \Lambda</math>, {{math|''R''<sub>Λ</sub>(''p'')/''p''<sup>2</sup>}} becomes very large and approaches infinity. {{math|''R''<sub>Λ</sub>(''p'')/''p''<sup>2</sup>}} is always greater than or equal to 1 and is smooth. Essentially, this leaves the fluctuations with momenta less than the cutoff {{mvar|Λ}} unaffected but heavily suppresses contributions from fluctuations with momenta greater than the cutoff. This is a considerable improvement over the sharp cutoff of the Wilson ERGE.

The condition that
:<math>\frac{d}{d\Lambda}Z_\Lambda=0</math>
can be satisfied by (but not only by)
:<math>\frac{d}{d\Lambda}S_{\operatorname{int}\Lambda}=\frac{1}{2}\frac{\delta S_{\operatorname{int}\Lambda}}{\delta \varphi}\cdot \left(\frac{d}{d\Lambda}R_\Lambda^{-1}\right)\cdot \frac{\delta S_{\operatorname{int}\Lambda}}{\delta \varphi}-\frac{1}{2}\operatorname{Tr}\left[\frac{\delta^2 S_{\operatorname{int}\Lambda}}{\delta \varphi\,\delta \varphi}\cdot \frac{d}{d\Lambda}R_\Lambda^{-1}\right].</math>

] claimed without proof that this ERGE is not correct ]ly.<ref>{{cite web |author-link=Jacques Distler |first=Jacques |last=Distler |url=http://golem.ph.utexas.edu/~distler/blog/archives/000648.html |title=000648.html |website=golem.ph.utexas.edu}}</ref>

The '''effective average action ERGE''' involves a smooth IR regulator cutoff.
The idea is to take all fluctuations right up to an IR scale {{mvar|k}} into account. The '''effective average action''' will be accurate for fluctuations with momenta larger than {{mvar|k}}. As the parameter {{mvar|k}} is lowered, the effective average action approaches the ] which includes all quantum and classical fluctuations. In contrast, for large {{mvar|k}} the effective average action is close to the "bare action". So, the effective average action interpolates between the "bare action" and the ].

For a real ], one adds an IR cutoff
:<math>\frac{1}{2}\int \frac{d^dp}{(2\pi)^d} \tilde{\varphi}^*(p)R_k(p)\tilde{\varphi}(p)</math>
to the ] {{mvar|S}}, where ''R''<sub>''k''</sub> is a function of both {{mvar|k}} and {{mvar|p}} such that for
<math>p \gg k</math>, R<sub>k</sub>(p) is very tiny and approaches 0 and for <math>p \ll k</math>, <math>R_k(p)\gtrsim k^2</math>. ''R''<sub>''k''</sub> is both smooth and nonnegative. Its large value for small momenta leads to a suppression of their contribution to the partition function which is effectively the same thing as neglecting large-scale fluctuations.
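
Two standard regulator choices with these properties are the exponential regulator <math>R_k(p)=p^2/\left(e^{p^2/k^2}-1\right)</math> and the "optimized" (Litim) regulator <math>R_k(p)=(k^2-p^2)\,\theta(k^2-p^2)</math>; the particular functional forms are standard in the functional-RG literature but are quoted here only as examples. The sketch below checks the stated limits numerically.
<syntaxhighlight lang="python">
# Two standard regulator choices for the effective average action: the
# "optimized" (Litim) regulator and an exponential one.  Both behave as
# required: R_k(p) ~ k^2 for p << k and R_k(p) -> 0 rapidly for p >> k.
import math

def litim(p, k):
    return max(k**2 - p**2, 0.0)           # (k^2 - p^2) * theta(k^2 - p^2)

def exponential(p, k):
    return p**2 / math.expm1(p**2 / k**2)  # p^2 / (exp(p^2/k^2) - 1)

k = 1.0
for p in (0.01, 0.1, 1.0, 3.0, 10.0):
    print(f"p = {p:5.2f}:  Litim R_k = {litim(p, k):9.3g},  exponential R_k = {exponential(p, k):9.3g}")
# For p << k both act like a mass of order k^2, suppressing the long-wavelength
# modes in the path integral; for p >> k they switch off, leaving the
# high-momentum fluctuations untouched.
</syntaxhighlight>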

One can use the condensed ]
:<math>\frac{1}{2} \varphi\cdot R_k \cdot \varphi</math>
for this IR regulator.

So,
:<math>\exp\left(W_k\right)=Z_k=\int \mathcal{D}\varphi \exp\left(-S-\frac{1}{2}\varphi \cdot R_k \cdot \varphi +J\cdot\varphi\right)</math>
where {{mvar|J}} is the ]. The ] of ''W''<sub>''k''</sub> ordinarily gives the ]. However, the action that we started off with is really ''S''&nbsp;+&nbsp;1/2 ''φ⋅R''<sub>''k''</sub>⋅''φ'' and so, to get the effective average action, we subtract off 1/2&nbsp;''φ''⋅''R''<sub>''k''</sub>⋅''φ''. In other words,
:<math>\varphi=\frac{\delta W_k}{\delta J}</math>
can be inverted to give ''J''<sub>''k''</sub> and we define the effective average action Γ<sub>''k''</sub> as
:<math>\Gamma_k\ \stackrel{\mathrm{def}}{=}\ \left(-W_k\left[J_k\right] + J_k\cdot\varphi\right)-\tfrac{1}{2}\varphi\cdot R_k\cdot \varphi.</math>

Hence,
:<math>\begin{align}
\frac{d}{dk}\Gamma_k &=-\frac{d}{dk}W_k[J_k]-\frac{\delta W_k}{\delta J}\cdot\frac{d}{dk}J_k+\frac{d}{dk}J_k\cdot \varphi-\tfrac{1}{2}\varphi\cdot \frac{d}{dk}R_k \cdot \varphi \\
&=-\frac{d}{dk}W_k[J_k]-\tfrac{1}{2}\varphi\cdot \frac{d}{dk}R_k \cdot \varphi \\
&=\tfrac{1}{2}\left\langle\varphi \cdot \frac{d}{dk}R_k \cdot \varphi\right\rangle_{J_k;k}-\tfrac{1}{2} \varphi\cdot \frac{d}{dk}R_k \cdot \varphi \\
&=\tfrac{1}{2}\operatorname{Tr}\left[\frac{\delta^2 W_k}{\delta J\,\delta J}\cdot \frac{d}{dk}R_k\right] \\
&=\tfrac{1}{2}\operatorname{Tr}\left[\left(\frac{\delta^2 \Gamma_k}{\delta \varphi\,\delta \varphi}+R_k\right)^{-1}\cdot \frac{d}{dk}R_k\right]
\end{align}</math>

thus
:<math>\frac{d}{dk}\Gamma_k =\tfrac{1}{2}\operatorname{Tr}\left[\left(\frac{\delta^2 \Gamma_k}{\delta \varphi\,\delta \varphi}+R_k\right)^{-1}\cdot \frac{d}{dk}R_k\right]</math>
is the ERGE which is also known as the ] equation. As shown by Morris the effective action Γ<sub>k</sub> is in fact simply related to Polchinski's effective action S<sub>int</sub> via a Legendre transform relation.<ref>{{cite journal |last1=Morris |first1=Tim R. |title=The Exact renormalization group and approximate solutions|journal=Int. J. Mod. Phys. A |date=1994 |volume=9 |issue=14 |page=2411 |doi=10.1142/S0217751X94000972 |arxiv=hep-ph/9308265|bibcode=1994IJMPA...9.2411M |s2cid=15749927 }}</ref>
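
A common way to make this equation tractable is the ''local potential approximation'' (LPA), which keeps only a running potential {{math|''V<sub>k</sub>''(''φ'')}}. With the optimized regulator above, in three dimensions, the flow reduces to <math>\partial_t V_k(\varphi)=\frac{k^5}{6\pi^2\left(k^2+V_k''(\varphi)\right)}</math> with {{math|1=''t'' = ln(''k''/Λ)}}; this truncation, the regulator choice, and the resulting prefactor are assumptions of the following numerical sketch rather than part of the general formalism.
<syntaxhighlight lang="python">
# Numerical sketch of the Wetterich equation in the local potential
# approximation (LPA), d = 3, with the Litim regulator, for which the flow is
#     dV_k(phi)/dt = k^5 / (6*pi^2*(k^2 + V_k''(phi))),   t = ln(k/Lambda).
# The truncation, the regulator choice and the prefactor are assumptions of
# this sketch.  V_k is kept on a grid in phi and stepped down in k by Euler.
import numpy as np

Lambda = 1.0                          # UV scale where the flow starts
phi = np.linspace(-1.5, 1.5, 121)     # field grid
dphi = phi[1] - phi[0]
V = 0.05 * phi**2 + 0.5 * phi**4      # bare potential V_Lambda(phi)

def second_derivative(V):
    Vpp = np.empty_like(V)
    Vpp[1:-1] = (V[2:] - 2 * V[1:-1] + V[:-2]) / dphi**2
    Vpp[0], Vpp[-1] = Vpp[1], Vpp[-2]             # crude boundary treatment
    return Vpp

dt, t = -1e-3, 0.0                    # flow toward the infrared: t decreases
for _ in range(4000):                 # k runs from Lambda down to Lambda*e^-4
    k = Lambda * np.exp(t)
    V = V + dt * k**5 / (6 * np.pi**2 * (k**2 + second_derivative(V)))
    t += dt

print(f"curvature V_k''(0) after the flow: {second_derivative(V)[len(phi) // 2]:.4f}")
# Starting in the symmetric phase, fluctuations drive the curvature at the
# origin up as k is lowered; in the broken regime the same flow would instead
# flatten the inner region of the potential.
</syntaxhighlight>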

As there are infinitely many choices of {{mvar|R}}<sub>''k''</sub>, there are also infinitely many different interpolating ERGEs.
Generalization to other fields like spinorial fields is straightforward.

Although the Polchinski ERGE and the effective average action ERGE look similar, they are based upon very different philosophies. In the effective average action ERGE, the bare action is left unchanged (and the UV cutoff scale—if there is one—is also left unchanged) but the IR contributions to the effective action are suppressed whereas in the Polchinski ERGE, the QFT is fixed once and for all but the "bare action" is varied at different energy scales to reproduce the prespecified model. Polchinski's version is certainly much closer to Wilson's idea in spirit. Note that one uses "bare actions" whereas the other uses effective (average) actions.

== Renormalization group improvement of the effective potential ==
The renormalization group can also be used to compute ] at orders higher than one loop. This kind of approach is particularly useful for computing corrections to the Coleman–Weinberg<ref>{{Cite journal |last1=Coleman |first1=Sidney |last2=Weinberg |first2=Erick |date=1973-03-15 |title=Radiative Corrections as the Origin of Spontaneous Symmetry Breaking |url=https://link.aps.org/doi/10.1103/PhysRevD.7.1888 |journal=Physical Review D |language=en |volume=7 |issue=6 |pages=1888–1910 |doi=10.1103/PhysRevD.7.1888 |issn=0556-2821 |arxiv=hep-th/0507214 |bibcode=1973PhRvD...7.1888C |s2cid=6898114}}</ref> mechanism. To do so, one must write the renormalization group equation in terms of the effective potential. For the case of the <math>\varphi^4</math> model:

: <math>\left(\mu\frac{\partial}{\partial\mu} + \beta_\lambda\frac{\partial}{\partial\lambda} + \varphi\gamma_\varphi\frac{\partial}{\partial\varphi}\right) V_\text{eff} = 0.</math>

In order to determine the effective potential, it is useful to write <math>V_\text{eff}</math> as

: <math>V_\text{eff} = \frac{1}{4} \varphi^4 S_\text{eff}\big(\lambda, L(\varphi)\big),</math>

where <math>S_\text{eff}</math> is a ] in <math>L(\varphi) = \log \frac{\varphi^2}{\mu^2}</math>:

: <math>S_\text{eff} = A + BL + CL^2 + DL^3 + \cdots.</math>

Using the above ], it is possible to solve the renormalization group equation perturbatively and find the effective potential up to desired order. A pedagogical explanation of this technique is shown in reference.<ref>{{Cite journal |last1=Souza |first1=Huan |last2=Bevilaqua |first2=L. Ibiapina |last3=Lehum |first3=A. C. |date=2020-08-05 |title=Renormalization group improvement of the effective potential in six dimensions |journal=Physical Review D |volume=102 |issue=4 |pages=045004 |doi=10.1103/PhysRevD.102.045004 |arxiv=2005.03973 |bibcode=2020PhRvD.102d5004S |doi-access=free}}</ref>
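
The bookkeeping behind this order-by-order solution can be checked symbolically. Using {{math|1=''L'' = ln(''φ''<sup>2</sup>/''μ''<sup>2</sup>)}}, so that {{math|1=''μ'' ∂''L''/∂''μ'' = −2}} and {{math|1=''φ'' ∂''L''/∂''φ'' = 2}}, the renormalization group equation above, acting on {{math|1=''V''<sub>eff</sub> = ¼''φ''<sup>4</sup>''S''(''λ'', ''L'')}}, becomes {{math|1=(−2 + 2''γ'') ∂''S''/∂''L'' + ''β'' ∂''S''/∂''λ'' + 4''γS'' = 0}} (this recasting is a step supplied here); matching powers of {{mvar|L}} then fixes {{mvar|B}}, {{mvar|C}}, … in terms of {{mvar|A}}, {{mvar|β}} and {{mvar|γ}}, as the sketch below verifies.
<syntaxhighlight lang="python">
# Order-by-order bookkeeping for the leading-log effective potential.  With
# L = ln(phi^2/mu^2) the renormalization group equation for
# V_eff = (1/4) phi^4 S(lambda, L) becomes
#     (-2 + 2*gamma)*dS/dL + beta*dS/dlambda + 4*gamma*S = 0
# (a recasting supplied here).  Matching powers of L fixes B, C, ... in terms
# of A, beta and gamma.
import sympy as sp

lam, L, gamma = sp.symbols("lambda L gamma")
beta = sp.Function("beta")(lam)
A, B, C = (sp.Function(name)(lam) for name in ("A", "B", "C"))

S = A + B * L + C * L**2                        # truncated ansatz for S_eff
rge = sp.expand((-2 + 2 * gamma) * sp.diff(S, L) + beta * sp.diff(S, lam) + 4 * gamma * S)

order0 = rge.coeff(L, 0)                        # O(1) relation: determines B
order1 = rge.coeff(L, 1)                        # O(L) relation: determines C
print("O(1):", sp.Eq(order0, 0))
print("O(L):", sp.Eq(order1, 0))

# Check that B = (beta*A' + 4*gamma*A) / (2 - 2*gamma) solves the O(1) relation.
B_sol = (beta * sp.diff(A, lam) + 4 * gamma * A) / (2 - 2 * gamma)
print("residual after substituting B:", sp.simplify(order0.subs(B, B_sol)))
</syntaxhighlight>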

==See also==
{{div col begin |colwidth=15em}}
* ]
* ]
* ]
* ]
* ]
* ]
* ]
* ]
* ]
* ]
* ]
* ]
* ]
{{div col end}}


==Remarks==
{{notelist|1}}

==Citations==
{{Reflist|30em}}

==References==
===Historical references===
* {{cite journal |author-link=Michael Fisher |first=Michael |last=Fisher |year=1974 |title=The renormalization group in the theory of critical behavior |journal=Rev. Mod. Phys. |volume=46 |issue=4 |page=597|doi=10.1103/RevModPhys.46.597 |bibcode=1974RvMP...46..597F }}


===Pedagogical and historical reviews===
* {{cite journal |first=S.R. |last=White |year=1992 |title=Density matrix formulation for quantum renormalization groups |journal=Phys. Rev. Lett. |volume=69 |issue=19 |pages=2863–2866 |doi=10.1103/PhysRevLett.69.2863 |pmid=10046608 |bibcode=1992PhRvL..69.2863W }} The most successful variational RG method.
* {{cite book |first=N. |last=Goldenfeld |year=1993 |title=Lectures on phase transitions and the renormalization group |publisher=Addison-Wesley}}
* {{cite journal |author-link=Dmitry Shirkov |first=Dmitry V. |last=Shirkov |year=1999 |title=Evolution of the Bogoliubov Renormalization Group |arxiv=hep-th/9909024|bibcode=1999hep.th....9024S }} A mathematical introduction and historical overview with a stress on group theory and the application in high-energy physics.
* {{cite journal |first=B. |last=Delamotte |title=A hint of renormalization |url=http://scitation.aip.org/journals/doc/AJPIAS-ft/vol_72/iss_2/170_1.html |journal=American Journal of Physics |volume=72 |issue=2 |pages=170–184 |date=February 2004 |arxiv=hep-th/0212049|doi=10.1119/1.1624112 |bibcode=2004AmJPh..72..170D |s2cid=2506712 }} A pedestrian introduction to renormalization and the renormalization group.
* {{cite journal |first1=H.J. |last1=Maris |first2=L.P. |last2=Kadanoff |title=Teaching the renormalization group |journal=American Journal of Physics |date=June 1978 |volume=46 |issue=6 |pages=652–657 |doi=10.1119/1.11224|bibcode=1978AmJPh..46..652M |s2cid=123119591 }} A pedestrian introduction to the renormalization group as applied in condensed matter physics.
* {{cite journal |first=K. |last=Huang |year=2013 |title=A Critical History of Renormalization |journal=International Journal of Modern Physics A |volume=28 |issue=29 |pages=1330050 |arxiv=1310.5533|doi=10.1142/S0217751X13300500 |bibcode=2013IJMPA..2830050H }}
*{{cite magazine |last=Shirkov |first=D.V. |date=2001-08-31 |url=http://cerncourier.com/cws/article/cern/28487 |title=Fifty years of the renormalization group |magazine=CERN Courier |access-date=2008-11-12 |df=dmy-all}}
*{{cite journal |last1=Bagnuls |first1=C. |last2=Bervillier |first2=C. |title=Exact renormalization group equations: an introductory review |journal=Physics Reports |volume=348 |issue=1–2 |pages=91–157 |year=2001 |doi=10.1016/S0370-1573(00)00137-X |arxiv=hep-th/0002034 |bibcode=2001PhR...348...91B|s2cid=18274894 }}


===Books===
*]; ''Particle physics and introduction to field theory'', Harwood Academic Publishers, 1981, {{ISBN|3-7186-0033-1}}. Contains a concise, simple, and trenchant summary of the group structure, in whose discovery he was also involved, as acknowledged in Gell-Mann and Low's paper.
*L. Ts. Adzhemyan, N. V. Antonov and A. N. Vasiliev; ''The Field Theoretic Renormalization Group in Fully Developed Turbulence''; Gordon and Breach, 1999. {{ISBN|90-5699-145-0}}.
*Vasil'ev, A. N.; ''The field theoretic renormalization group in critical behavior theory and stochastic dynamics''; Chapman & Hall/CRC, 2004. {{ISBN|9780415310024}} (Self-contained treatment of renormalization group applications with complete computations);
*Zinn-Justin, Jean (2002). ''Quantum field theory and critical phenomena'', Oxford, Clarendon Press (2002), {{ISBN|0-19-850923-5}} (an exceptionally solid and thorough treatise on both topics);
*Zinn-Justin, Jean: ''Renormalization and renormalization group: From the discovery of UV divergences to the concept of effective field theories'', in: de Witt-Morette C., Zuber J.-B. (eds), Proceedings of the NATO ASI on ''Quantum Field Theory: Perspective and Prospective'', June 15–26, 1998, Les Houches, France, Kluwer Academic Publishers, NATO ASI Series C 530, 375-388 (1999). Full text available in PostScript.
*Kleinert, H. and Schulte Frohlinde, V; ''Critical Properties of {{mvar|φ}}<sup>4</sup>-Theories'', World Scientific (Singapore, 2001); Paperback {{ISBN|981-02-4658-7}}. Full text available in PDF.


{{Industrial and applied mathematics}}




History

The idea of scale transformations and scale invariance is old in physics: Scaling arguments were commonplace for the Pythagorean school, Euclid, and up to Galileo. They became popular again at the end of the 19th century, perhaps the first example being the idea of enhanced viscosity of Osborne Reynolds, as a way to explain turbulence.

The renormalization group was initially devised in particle physics, but nowadays its applications extend to solid-state physics, fluid mechanics, physical cosmology, and even nanotechnology. An early article by Ernst Stueckelberg and André Petermann in 1953 anticipates the idea in quantum field theory. Stueckelberg and Petermann opened the field conceptually. They noted that renormalization exhibits a group of transformations which transfer quantities from the bare terms to the counter terms. They introduced a function h(e) in quantum electrodynamics (QED), which is now called the beta function (see below).

Beginnings

Murray Gell-Mann and Francis E. Low restricted the idea to scale transformations in QED in 1954, which are the most physically significant, and focused on asymptotic forms of the photon propagator at high energies. They determined the variation of the electromagnetic coupling in QED, by appreciating the simplicity of the scaling structure of that theory. They thus discovered that the coupling parameter g(μ) at the energy scale μ is effectively given by the (one-dimensional translation) group equation

g(\mu) = G^{-1}\left( \left( \frac{\mu}{M} \right)^{d} G(g(M)) \right)

or equivalently, G(g(μ)) = G(g(M)) (μ/M)^d, for some function G (unspecified; nowadays called Wegner's scaling function) and a constant d, in terms of the coupling g(M) at a reference scale M.

Gell-Mann and Low realized in these results that the effective scale can be arbitrarily taken as μ, and can vary to define the theory at any other scale:

g(\kappa) = G^{-1}\left( \left( \frac{\kappa}{\mu} \right)^{d} G(g(\mu)) \right) = G^{-1}\left( \left( \frac{\kappa}{M} \right)^{d} G(g(M)) \right)

The gist of the RG is this group property: as the scale μ varies, the theory presents a self-similar replica of itself, and any scale can be accessed similarly from any other scale, by group action, a formal transitive conjugacy of couplings in the mathematical sense (Schröder's equation).

On the basis of this (finite) group equation and its scaling property, Gell-Mann and Low could then focus on infinitesimal transformations, and invented a computational method based on a mathematical flow function ψ(g) = d·G/(∂G/∂g) of the coupling parameter g, which they introduced. Like the function h(e) of Stueckelberg and Petermann, their function determines the differential change of the coupling g(μ) with respect to a small change in energy scale μ through a differential equation, the renormalization group equation:

\frac{\partial g}{\partial \ln \mu} = \psi(g) = \beta(g)

The modern name, the beta function, was introduced by C. Callan and K. Symanzik in 1970. Since it is a mere function of g, integration in g of a perturbative estimate of it permits specification of the renormalization trajectory of the coupling, that is, its variation with energy, effectively the function G in this perturbative approximation. The renormalization group prediction (cf. the Stueckelberg–Petermann and Gell-Mann–Low works) was confirmed 40 years later at the LEP accelerator experiments: the fine-structure "constant" of QED was measured to be about 1⁄127 at energies close to 200 GeV, as opposed to the standard low-energy physics value of 1⁄137.
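The content of this equation can be made concrete numerically. Below is a minimal sketch (an illustration added here, not part of the article) that integrates ∂g/∂ln μ = β(g) for a toy one-loop-style beta function β(g) = b₀g³; the coefficient b₀, the reference scale and the initial coupling are made-up values.

    import numpy as np

    def beta(g, b0=0.05):
        # Toy one-loop-style beta function beta(g) = b0 * g**3 (b0 is an assumed, illustrative value).
        return b0 * g**3

    def run_coupling(g_ref, mu_ref, mu, b0=0.05, n_steps=10_000):
        # Integrate dg/d(ln mu) = beta(g) from mu_ref up to mu with simple Euler steps.
        t0, t1 = np.log(mu_ref), np.log(mu)
        dt = (t1 - t0) / n_steps
        g = g_ref
        for _ in range(n_steps):
            g += dt * beta(g, b0)
        return g

    if __name__ == "__main__":
        for mu in (10.0, 100.0, 1000.0):
            print(f"mu = {mu:7.1f}   g(mu) = {run_coupling(0.30, 1.0, mu):.4f}")

Because b₀ > 0 in this toy, the coupling creeps up logarithmically with μ, qualitatively like the growth of the QED fine-structure constant from about 1/137 at low energies toward about 1/127 at LEP energies.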

Deeper understanding

The renormalization group emerges from the renormalization of the quantum field variables, which normally has to address the problem of infinities in a quantum field theory. This problem of systematically handling the infinities of quantum field theory to obtain finite physical quantities was solved for QED by Richard Feynman, Julian Schwinger and Shin'ichirō Tomonaga, who received the 1965 Nobel prize for these contributions. They effectively devised the theory of mass and charge renormalization, in which the infinity in the momentum scale is cut off by an ultra-large regulator, Λ.

The dependence of physical quantities, such as the electric charge or electron mass, on the scale Λ is hidden, effectively swapped for the longer-distance scales at which the physical quantities are measured, and, as a result, all observable quantities end up being finite instead, even for an infinite Λ. Gell-Mann and Low thus realized in these results that, infinitesimally, while a tiny change in g is provided by the above RG equation given ψ(g), the self-similarity is expressed by the fact that ψ(g) depends explicitly only upon the parameter(s) of the theory, and not upon the scale μ. Consequently, the above renormalization group equation may be solved for (G and thus) g(μ).

A deeper understanding of the physical meaning and generalization of the renormalization process, which goes beyond the dilation group of conventional renormalizable theories, considers methods where widely different scales of lengths appear simultaneously. It came from condensed matter physics: Leo P. Kadanoff's paper in 1966 proposed the "block-spin" renormalization group. The "blocking idea" is a way to define the components of the theory at large distances as aggregates of components at shorter distances.

This approach covered the conceptual point and was given full computational substance in the extensive important contributions of Kenneth Wilson. The power of Wilson's ideas was demonstrated by a constructive iterative renormalization solution of a long-standing problem, the Kondo problem, in 1975, as well as the preceding seminal developments of his new method in the theory of second-order phase transitions and critical phenomena in 1971. He was awarded the Nobel prize for these decisive contributions in 1982.

Reformulation

Meanwhile, the RG in particle physics had been reformulated in more practical terms by Callan and Symanzik in 1970. The above beta function, which describes the "running of the coupling" parameter with scale, was also found to amount to the "canonical trace anomaly", which represents the quantum-mechanical breaking of scale (dilation) symmetry in a field theory. Applications of the RG to particle physics exploded in number in the 1970s with the establishment of the Standard Model.

In 1973, it was discovered that a theory of interacting colored quarks, called quantum chromodynamics, had a negative beta function. This means that, starting from a high-energy value, the coupling grows as the energy scale is lowered, and blows up (diverges) at a special value of μ. This special value is the scale of the strong interactions, μ = Λ_QCD, and occurs at about 200 MeV. Conversely, the coupling becomes weak at very high energies (asymptotic freedom), and the quarks become observable as point-like particles, in deep inelastic scattering, as anticipated by Feynman–Bjorken scaling. QCD was thereby established as the quantum field theory controlling the strong interactions of particles.
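For orientation, the one-loop running of the strong coupling can be evaluated in closed form, α_s(Q²) = 12π / [(33 − 2n_f) ln(Q²/Λ²)]. The sketch below (an added illustration; the values Λ ≈ 0.2 GeV and n_f = 5 are assumptions chosen only to show the trend) makes both limits visible: divergence as Q approaches Λ and asymptotic freedom at large Q.

    import math

    def alpha_s(Q, Lambda_qcd=0.2, n_f=5):
        # One-loop running strong coupling; Q and Lambda_qcd in GeV.
        # Blows up as Q -> Lambda_qcd and shrinks at high Q (asymptotic freedom).
        b0 = (33 - 2 * n_f) / (12 * math.pi)
        return 1.0 / (b0 * math.log(Q**2 / Lambda_qcd**2))

    if __name__ == "__main__":
        for Q in (0.25, 0.5, 1.0, 10.0, 91.2, 200.0):
            print(f"Q = {Q:6.2f} GeV   alpha_s ~ {alpha_s(Q):.3f}")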

Momentum space RG also became a highly developed tool in solid state physics, but was hindered by the extensive use of perturbation theory, which prevented the theory from succeeding in strongly correlated systems.

Conformal symmetry

Conformal symmetry is associated with the vanishing of the beta function. This can occur naturally if a coupling constant is attracted, by running, toward a fixed point at which β(g) = 0. In QCD, the fixed point occurs at short distances where g → 0 and is called a (trivial) ultraviolet fixed point. For heavy quarks, such as the top quark, the coupling to the mass-giving Higgs boson runs toward a fixed non-zero (non-trivial) infrared fixed point, first predicted by Pendleton and Ross (1981), and C. T. Hill. The top quark Yukawa coupling lies slightly below the infrared fixed point of the Standard Model, suggesting the possibility of additional new physics, such as sequential heavy Higgs bosons.

In string theory, conformal invariance of the string world-sheet is a fundamental symmetry: β = 0 is a requirement. Here, β is a function of the geometry of the space-time in which the string moves. This determines the space-time dimensionality of the string theory and enforces Einstein's equations of general relativity on the geometry. The RG is of fundamental importance to string theory and theories of grand unification.

It is also the modern key idea underlying critical phenomena in condensed matter physics. Indeed, the RG has become one of the most important tools of modern physics. It is often used in combination with the Monte Carlo method.

Block spin

This section introduces pedagogically a picture of RG which may be easiest to grasp: the block spin RG, devised by Leo P. Kadanoff in 1966.

Consider a 2D solid, a set of atoms in a perfect square array, as depicted in the figure.

Assume that atoms interact among themselves only with their nearest neighbours, and that the system is at a given temperature T. The strength of their interaction is quantified by a certain coupling J. The physics of the system will be described by a certain formula, say the Hamiltonian H(T, J).

Now proceed to divide the solid into blocks of 2×2 squares; we attempt to describe the system in terms of block variables, i.e., variables which describe the average behavior of the block. Further assume that, by some lucky coincidence, the physics of block variables is described by a formula of the same kind, but with different values for T and J : H(T′, J′). (This isn't exactly true, in general, but it is often a good first approximation.)

Perhaps, the initial problem was too hard to solve, since there were too many atoms. Now, in the renormalized problem we have only one fourth of them. But why stop now? Another iteration of the same kind leads to H(T″, J″), and only one sixteenth of the atoms. We are increasing the observation scale with each RG step.
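A single blocking step can be made explicit in a few lines. The sketch below (an added toy example; the majority rule and the random tie-breaking are choices made here purely for illustration) replaces each 2×2 block of ±1 spins by one block spin.

    import numpy as np

    rng = np.random.default_rng(0)

    def block_spin(spins):
        # One Kadanoff-style blocking step: map each 2x2 block of +/-1 spins to a
        # single block spin by majority rule, breaking ties at random.
        L = spins.shape[0]
        assert L % 2 == 0
        block_sums = spins.reshape(L // 2, 2, L // 2, 2).sum(axis=(1, 3))  # values in {-4,...,4}
        ties = block_sums == 0
        block_sums[ties] = rng.choice([-4, 4], size=int(ties.sum()))       # random tie-break
        return np.sign(block_sums).astype(int)

    if __name__ == "__main__":
        spins = rng.choice([-1, 1], size=(8, 8))   # a random 8x8 configuration
        print(spins)
        print(block_spin(spins))                   # the coarse-grained 4x4 configuration

In a real calculation one would also recompute the effective T and J of the blocked system; that step is what defines the RG transformation itself.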

Of course, the best idea is to iterate until there is only one very big block. Since the number of atoms in any real sample of material is very large, this is more or less equivalent to finding the long range behaviour of the RG transformation which took (T, J) → (T′, J′) and (T′, J′) → (T″, J″). Often, when iterated many times, this RG transformation leads to a certain number of fixed points.

To be more concrete, consider a magnetic system (e.g., the Ising model), in which the J coupling denotes the trend of neighbour spins to be aligned. The configuration of the system is the result of the tradeoff between the ordering J term and the disordering effect of temperature.

For many models of this kind there are three fixed points:

  1. T = 0 and J → ∞. This means that, at the largest size, temperature becomes unimportant, i.e., the disordering factor vanishes. Thus, at large scales, the system appears to be ordered. We are in a ferromagnetic phase.
  2. T → ∞ and J → 0. Exactly the opposite; here, temperature dominates, and the system is disordered at large scales.
  3. A nontrivial point between them, T = Tc and J = Jc. At this point, changing the scale does not change the physics, because the system is in a fractal state. It corresponds to the Curie phase transition, and is also called a critical point.

So, if we are given a certain material with given values of T and J, all we have to do in order to find out the large-scale behaviour of the system is to iterate the pair until we find the corresponding fixed point.
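A toy one-coupling recursion makes this iteration explicit. In the sketch below the map tanh J′ = tanh²(2J) is assumed purely for illustration (a crude bond-moving-plus-decimation approximation for a square lattice, not a result quoted in the article); it has the three-fixed-point structure described above, with an unstable fixed point near J ≈ 0.305 separating flows towards J = 0 (disorder) and J → ∞ (order).

    import numpy as np

    def rg_step(J):
        # Toy real-space recursion for a dimensionless nearest-neighbour coupling J:
        # tanh(J') = tanh(2J)**2   (an assumed, approximate blocking rule).
        return np.arctanh(np.tanh(2.0 * J) ** 2)

    def flow(J, n=8):
        # Iterate the blocking transformation n times and return the trajectory.
        traj = [J]
        for _ in range(n):
            J = rg_step(J)
            traj.append(J)
        return traj

    if __name__ == "__main__":
        for J0 in (0.25, 0.305, 0.35):   # below, near, and above the unstable fixed point
            print(f"J0 = {J0:5.3f} ->", " ".join(f"{J:7.3f}" for J in flow(J0)))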

Elementary theory

In more technical terms, let us assume that we have a theory described by a certain function Z of the state variables {s_i} and a certain set of coupling constants {J_k}. This function may be a partition function, an action, a Hamiltonian, etc. It must contain the whole description of the physics of the system.

Now we consider a certain blocking transformation of the state variables {s_i} → {s̃_i}; the number of s̃_i must be lower than the number of s_i. Now let us try to rewrite the Z function only in terms of the s̃_i. If this is achievable by a certain change in the parameters, {J_k} → {J̃_k}, then the theory is said to be renormalizable.

Most fundamental theories of physics, such as quantum electrodynamics, quantum chromodynamics and the electroweak interaction, but not gravity, are exactly renormalizable. Also, most theories in condensed matter physics are approximately renormalizable, from superconductivity to fluid turbulence.

The change in the parameters is implemented by a certain beta function: {J̃_k} = β({J_k}), which is said to induce a renormalization group flow (or RG flow) on the J-space. The values of J under the flow are called running couplings.

As was stated in the previous section, the most important information in the RG flow is its fixed points. The possible macroscopic states of the system, at a large scale, are given by this set of fixed points. If these fixed points correspond to a free field theory, the theory is said to exhibit quantum triviality, possessing what is called a Landau pole, as in quantum electrodynamics. For a φ⁴ interaction, Michael Aizenman proved that this theory is indeed trivial, for space-time dimension D ≥ 5. For D = 4, the triviality has yet to be proven rigorously, but lattice computations have provided strong evidence for this. This fact is important as quantum triviality can be used to bound or even predict parameters such as the Higgs boson mass in asymptotic safety scenarios. Numerous fixed points appear in the study of lattice Higgs theories, but the nature of the quantum field theories associated with these remains an open question.

Since the RG transformations in such systems are lossy (i.e., the number of variables decreases; see, as an example in a different context, lossy data compression), there need not be an inverse for a given RG transformation. Thus, in such lossy systems, the renormalization group is, in fact, a semigroup, as lossiness implies that there is no unique inverse for each element.

Relevant and irrelevant operators and universality classes

See also: Universality class, Dangerously irrelevant operator, Phase transition § Critical exponents and universality classes, and Scale invariance § CFT description

Consider a certain observable A of a physical system undergoing an RG transformation. The magnitude of the observable as the length scale of the system goes from small to large determines the importance of the observable(s) for the scaling law:

If its magnitude ... then the observable is ...
always increases relevant
always decreases irrelevant
other marginal

A relevant observable is needed to describe the macroscopic behaviour of the system; irrelevant observables are not needed. Marginal observables may or may not need to be taken into account. A remarkable broad fact is that most observables are irrelevant, i.e., the macroscopic physics is dominated by only a few observables in most systems.
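Near a fixed point the RG transformation can be linearized, and the eigenvalues of the linearization implement this classification: eigenvalues larger than one in magnitude correspond to relevant directions, smaller than one to irrelevant directions, and equal to one to marginal ones. The sketch below uses a made-up 2×2 linearized map (the matrix entries are arbitrary illustrative numbers, not taken from any model).

    import numpy as np

    # Hypothetical linearized RG step about a fixed point J*: dJ' = M @ dJ,
    # where dJ is the vector of small deviations of two couplings from J*.
    M = np.array([[1.8, 0.1],
                  [0.1, 0.4]])   # made-up numbers for illustration

    eigvals, eigvecs = np.linalg.eig(M)
    for lam, v in zip(eigvals, eigvecs.T):
        kind = "relevant" if abs(lam) > 1 else ("irrelevant" if abs(lam) < 1 else "marginal")
        print(f"eigenvalue {lam:+.3f} ({kind}), scaling direction {np.round(v, 3)}")

    # Deviations along the relevant direction grow under repeated blocking,
    # while those along the irrelevant direction shrink:
    dJ = np.array([0.01, 0.01])
    for step in range(6):
        print(step, np.round(dJ, 5))
        dJ = M @ dJ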

As an example, in microscopic physics, to describe a system consisting of a mole of carbon-12 atoms we need of the order of 10²³ (the Avogadro number) variables, while to describe it as a macroscopic system (12 grams of carbon-12) we only need a few.

Before Wilson's RG approach, there was an astonishing empirical fact to explain: The coincidence of the critical exponents (i.e., the exponents of the reduced-temperature dependence of several quantities near a second order phase transition) in very disparate phenomena, such as magnetic systems, superfluid transition (Lambda transition), alloy physics, etc. So in general, thermodynamic features of a system near a phase transition depend only on a small number of variables, such as the dimensionality and symmetry, but are insensitive to details of the underlying microscopic properties of the system.

This coincidence of critical exponents for ostensibly quite different physical systems, called universality, is easily explained using the renormalization group, by demonstrating that the differences in phenomena among the individual fine-scale components are determined by irrelevant observables, while the relevant observables are shared in common. Hence many macroscopic phenomena may be grouped into a small set of universality classes, specified by the shared sets of relevant observables.

Momentum space

Renormalization groups, in practice, come in two main "flavors". The Kadanoff picture explained above refers mainly to the so-called real-space RG.

Momentum-space RG on the other hand, has a longer history despite its relative subtlety. It can be used for systems where the degrees of freedom can be cast in terms of the Fourier modes of a given field. The RG transformation proceeds by integrating out a certain set of high-momentum (large-wavenumber) modes. Since large wavenumbers are related to short-length scales, the momentum-space RG results in an essentially analogous coarse-graining effect as with real-space RG.
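The elementary operation, separating the Fourier modes of a field into slow (small-wavenumber) and fast (large-wavenumber) parts, can be sketched as follows (an added illustration; only the mode split is performed here, whereas a genuine RG step would integrate over the discarded fast modes rather than simply drop them).

    import numpy as np

    rng = np.random.default_rng(1)

    N = 128
    phi = rng.normal(size=N)                 # a field configuration on a 1D lattice
    phi_k = np.fft.rfft(phi)                 # Fourier modes, wavenumber index k = 0 .. N/2
    k = np.arange(phi_k.size)

    cutoff = k.max() // 2                    # keep only the "slow" modes with k <= Lambda/2
    slow_k = np.where(k <= cutoff, phi_k, 0.0)
    phi_slow = np.fft.irfft(slow_k, n=N)     # the long-wavelength (coarse-grained) field

    print("original variance  :", phi.var())
    print("slow-mode variance :", phi_slow.var())
    # In a free (Gaussian) theory the fast modes decouple; in an interacting theory,
    # integrating over them renormalizes the couplings that govern phi_slow.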

Momentum-space RG is usually performed on a perturbation expansion. The validity of such an expansion is predicated upon the actual physics of a system being close to that of a free field system. In this case, one may calculate observables by summing the leading terms in the expansion. This approach has proved successful for many theories, including most of particle physics, but fails for systems whose physics is very far from any free system, i.e., systems with strong correlations.

As an example of the physical meaning of RG in particle physics, consider an overview of charge renormalization in quantum electrodynamics (QED). Suppose we have a point positive charge of a certain true (or bare) magnitude. The electromagnetic field around it has a certain energy, and thus may produce some virtual electron-positron pairs (for example). Although virtual particles annihilate very quickly, during their short lives the electron will be attracted by the charge, and the positron will be repelled. Since this happens uniformly everywhere near the point charge, where its electric field is sufficiently strong, these pairs effectively create a screen around the charge when viewed from far away. The measured strength of the charge will depend on how close our measuring probe can approach the point charge, bypassing more of the screen of virtual particles the closer it gets. Hence a dependence of a certain coupling constant (here, the electric charge) with distance scale.

Momentum and length scales are related inversely, according to the de Broglie relation: The higher the energy or momentum scale we may reach, the lower the length scale we may probe and resolve. Therefore, the momentum-space RG practitioners sometimes claim to integrate out high momenta or high energy from their theories.
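This relation can be made quantitative with ħc ≈ 197.327 MeV·fm, as in the small conversion below (an added aside; the chosen energy values simply match the scales mentioned in the text).

    HBARC_MEV_FM = 197.327          # hbar * c in MeV * fm

    def probed_length_fm(energy_mev):
        # Rough length scale (in femtometres) resolved at a given energy/momentum scale.
        return HBARC_MEV_FM / energy_mev

    for label, e_mev in [("1 MeV", 1.0), ("200 MeV (~Lambda_QCD)", 200.0), ("200 GeV (LEP)", 2.0e5)]:
        print(f"{label:>22}: ~{probed_length_fm(e_mev):.2e} fm")

At 200 MeV the resolved distance is about 1 fm (a hadronic size), while at 200 GeV it is about 10⁻³ fm.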

Exact renormalization group equations

An exact renormalization group equation (ERGE) is one that takes irrelevant couplings into account. There are several formulations.

The Wilson ERGE is the simplest conceptually, but is practically impossible to implement. Fourier transform into momentum space after Wick rotating into Euclidean space. Insist upon a hard momentum cutoff, p ≤ Λ so that the only degrees of freedom are those with momenta less than Λ. The partition function is

Z = \int_{p^{2} \leq \Lambda^{2}} \mathcal{D}\varphi \, \exp\left[ -S_{\Lambda}[\varphi] \right].

For any positive Λ′ less than Λ, define SΛ′ (a functional over field configurations φ whose Fourier transform has momentum support within p ≤ Λ′) as

\exp\left( -S_{\Lambda'}[\varphi] \right) \ \stackrel{\mathrm{def}}{=}\ \int_{\Lambda' \leq p \leq \Lambda} \mathcal{D}\varphi \, \exp\left[ -S_{\Lambda}[\varphi] \right].

If SΛ depends only on ϕ and not on derivatives of ϕ, this may be rewritten as

\exp\left( -S_{\Lambda'}[\varphi] \right) \ \stackrel{\mathrm{def}}{=}\ \prod_{\Lambda' \leq p \leq \Lambda} \int d\varphi(p) \, \exp\left[ -S_{\Lambda}[\varphi(p)] \right],

in which it becomes clear that, since only functions φ with support between Λ' and Λ are integrated over, the left hand side may still depend on ϕ with support outside that range. Obviously,

Z = \int_{p^{2} \leq \Lambda'^{2}} \mathcal{D}\varphi \, \exp\left[ -S_{\Lambda'}[\varphi] \right].

In fact, this transformation is transitive. If you compute SΛ′ from SΛ and then compute SΛ″ from SΛ′, this gives the same Wilsonian action as computing SΛ″ directly from SΛ.
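The defining integral can be carried out explicitly for a Gaussian toy model with a single 'slow' variable φ_s and a single 'fast' variable φ_f coupled quadratically. The sketch below (an added illustration; the quadratic toy action and its coefficients are assumptions, not the field-theoretic case) checks numerically that integrating out φ_f leaves an effective quadratic action for φ_s with a shifted coefficient a → a − c²/b.

    import numpy as np

    a, b, c = 2.0, 5.0, 1.5          # made-up quadratic couplings; b > 0 and a*b > c**2

    def S(phi_s, phi_f):
        # Toy "action" with one slow and one fast mode, coupled quadratically.
        return 0.5 * a * phi_s**2 + 0.5 * b * phi_f**2 + c * phi_s * phi_f

    def S_eff(phi_s):
        # Effective action for the slow mode: exp(-S_eff) = integral over the fast mode.
        phi_f = np.linspace(-10.0, 10.0, 20001)
        integral = np.exp(-S(phi_s, phi_f)).sum() * (phi_f[1] - phi_f[0])
        return -np.log(integral)

    for phi_s in (0.0, 0.5, 1.0, 1.5):
        expected = 0.5 * (a - c**2 / b) * phi_s**2
        print(f"phi_s = {phi_s:3.1f}   S_eff - S_eff(0) = {S_eff(phi_s) - S_eff(0.0):.4f}"
              f"   expected {expected:.4f}")

The shift of the remaining coupling from a to a − c²/b is the simplest caricature of how couplings change when modes are integrated out in a Wilsonian step.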

The Polchinski ERGE involves a smooth UV regulator cutoff. Basically, the idea is an improvement over the Wilson ERGE. Instead of a sharp momentum cutoff, it uses a smooth cutoff. Essentially, we suppress contributions from momenta greater than Λ heavily. The smoothness of the cutoff, however, allows us to derive a functional differential equation in the cutoff scale Λ. As in Wilson's approach, we have a different action functional for each cutoff energy scale Λ. Each of these actions is supposed to describe exactly the same model, which means that their partition functionals have to match exactly.

In other words, (for a real scalar field; generalizations to other fields are obvious),

Z_{\Lambda}[J] = \int \mathcal{D}\varphi \, \exp\left( -S_{\Lambda}[\varphi] + J \cdot \varphi \right) = \int \mathcal{D}\varphi \, \exp\left( -\tfrac{1}{2} \varphi \cdot R_{\Lambda} \cdot \varphi - S_{\operatorname{int}\,\Lambda}[\varphi] + J \cdot \varphi \right)

and ZΛ is really independent of Λ! We have used the condensed deWitt notation here. We have also split the bare action SΛ into a quadratic kinetic part and an interacting part Sint Λ. This split most certainly isn't clean. The "interacting" part can very well also contain quadratic kinetic terms. In fact, if there is any wave function renormalization, it most certainly will. This can be somewhat reduced by introducing field rescalings. RΛ is a function of the momentum p and the second term in the exponent is

\frac{1}{2} \int \frac{d^{d}p}{(2\pi)^{d}} \, \tilde{\varphi}^{*}(p) \, R_{\Lambda}(p) \, \tilde{\varphi}(p)

when expanded.

When p ≪ Λ, R_Λ(p)/p² is essentially 1. When p ≫ Λ, R_Λ(p)/p² becomes very large and approaches infinity. R_Λ(p)/p² is always greater than or equal to 1 and is smooth. Basically, this leaves the fluctuations with momenta less than the cutoff Λ unaffected but heavily suppresses contributions from fluctuations with momenta greater than the cutoff. This is a considerable improvement over Wilson's sharp cutoff.
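Any sufficiently smooth function with these properties will do; for definiteness one may take R_Λ(p) = p² exp(p²/Λ²), a choice assumed here purely for illustration (the article does not prescribe a particular form).

    import numpy as np

    def R_Lambda(p, Lam):
        # One possible smooth UV regulator (assumed form): R(p) = p^2 * exp(p^2 / Lambda^2).
        # R(p)/p^2 is >= 1, close to 1 for p << Lambda, and grows rapidly for p >> Lambda.
        return p**2 * np.exp((p / Lam) ** 2)

    Lam = 1.0
    for p in (0.01, 0.1, 0.5, 1.0, 2.0, 5.0):
        print(f"p = {p:4.2f}   R(p)/p^2 = {R_Lambda(p, Lam) / p**2:12.3f}")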

The condition that

\frac{d}{d\Lambda} Z_{\Lambda} = 0

can be satisfied by (but not only by)

\frac{d}{d\Lambda} S_{\operatorname{int}\,\Lambda} = \frac{1}{2} \frac{\delta S_{\operatorname{int}\,\Lambda}}{\delta \varphi} \cdot \left( \frac{d}{d\Lambda} R_{\Lambda}^{-1} \right) \cdot \frac{\delta S_{\operatorname{int}\,\Lambda}}{\delta \varphi} - \frac{1}{2} \operatorname{Tr}\left[ \frac{\delta^{2} S_{\operatorname{int}\,\Lambda}}{\delta \varphi \, \delta \varphi} \cdot \frac{d}{d\Lambda} R_{\Lambda}^{-1} \right].

Jacques Distler claimed without proof that this ERGE is not correct nonperturbatively.

The effective average action ERGE involves a smooth IR regulator cutoff. The idea is to take all fluctuations right up to an IR scale k into account. The effective average action will be accurate for fluctuations with momenta larger than k. As the parameter k is lowered, the effective average action approaches the effective action which includes all quantum and classical fluctuations. In contrast, for large k the effective average action is close to the "bare action". So, the effective average action interpolates between the "bare action" and the effective action.

For a real scalar field, one adds an IR cutoff

\frac{1}{2} \int \frac{d^{d}p}{(2\pi)^{d}} \, \tilde{\varphi}^{*}(p) \, R_{k}(p) \, \tilde{\varphi}(p)

to the action S, where R_k is a function of both k and p such that, for p ≫ k, R_k(p) is very tiny and approaches 0, while for p ≪ k, R_k(p) ≳ k². R_k is both smooth and nonnegative. Its large value for small momenta leads to a suppression of their contribution to the partition function, which is effectively the same thing as neglecting large-scale fluctuations.
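One smooth choice with these properties is R_k(p) = p² / (exp(p²/k²) − 1), used below purely as an illustration (the article leaves R_k unspecified); it tends to k² for p ≪ k and vanishes exponentially for p ≫ k.

    import numpy as np

    def R_k(p, k):
        # A smooth IR regulator (one common, assumed choice): R_k(p) = p^2 / (exp(p^2/k^2) - 1).
        # It approaches k^2 for p << k and is exponentially small for p >> k.
        x = (p / k) ** 2
        return p**2 / np.expm1(x)

    k = 1.0
    for p in (0.05, 0.2, 1.0, 2.0, 5.0):
        print(f"p = {p:4.2f}   R_k(p) = {R_k(p, k):.6f}")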

One can use the condensed deWitt notation

\frac{1}{2} \varphi \cdot R_{k} \cdot \varphi

for this IR regulator.

So,

\exp\left( W_{k}[J] \right) = Z_{k}[J] = \int \mathcal{D}\varphi \, \exp\left( -S[\varphi] - \tfrac{1}{2} \varphi \cdot R_{k} \cdot \varphi + J \cdot \varphi \right)

where J is the source field. The Legendre transform of W_k ordinarily gives the effective action. However, the action that we started off with is really S + ½ φ·R_k·φ and so, to get the effective average action, we subtract off ½ φ·R_k·φ. In other words,

\varphi[J;k] = \frac{\delta W_{k}}{\delta J}[J]

can be inverted to give Jk and we define the effective average action Γk as

\Gamma_{k}[\varphi] \ \stackrel{\mathrm{def}}{=}\ \left( -W_{k}\left[ J_{k}[\varphi] \right] + J_{k}[\varphi] \cdot \varphi \right) - \tfrac{1}{2} \varphi \cdot R_{k} \cdot \varphi.

Hence,

\begin{aligned}
\frac{d}{dk} \Gamma_{k}[\varphi] &= -\frac{d}{dk} W_{k}[J_{k}[\varphi]] - \frac{\delta W_{k}}{\delta J} \cdot \frac{d}{dk} J_{k}[\varphi] + \frac{d}{dk} J_{k}[\varphi] \cdot \varphi - \tfrac{1}{2} \varphi \cdot \frac{d}{dk} R_{k} \cdot \varphi \\
&= -\frac{d}{dk} W_{k}[J_{k}[\varphi]] - \tfrac{1}{2} \varphi \cdot \frac{d}{dk} R_{k} \cdot \varphi \\
&= \tfrac{1}{2} \left\langle \varphi \cdot \frac{d}{dk} R_{k} \cdot \varphi \right\rangle_{J_{k};k} - \tfrac{1}{2} \varphi \cdot \frac{d}{dk} R_{k} \cdot \varphi \\
&= \tfrac{1}{2} \operatorname{Tr}\left[ \left( \frac{\delta J_{k}}{\delta \varphi} \right)^{-1} \cdot \frac{d}{dk} R_{k} \right] \\
&= \tfrac{1}{2} \operatorname{Tr}\left[ \left( \frac{\delta^{2} \Gamma_{k}}{\delta \varphi \, \delta \varphi} + R_{k} \right)^{-1} \cdot \frac{d}{dk} R_{k} \right]
\end{aligned}

thus

\frac{d}{dk} \Gamma_{k}[\varphi] = \tfrac{1}{2} \operatorname{Tr}\left[ \left( \frac{\delta^{2} \Gamma_{k}}{\delta \varphi \, \delta \varphi} + R_{k} \right)^{-1} \cdot \frac{d}{dk} R_{k} \right]

is the ERGE, which is also known as the Wetterich equation. As shown by Morris, the effective action Γk is in fact simply related to Polchinski's effective action S_int via a Legendre transform relation.

As there are infinitely many choices of Rk, there are also infinitely many different interpolating ERGEs. Generalization to other fields like spinorial fields is straightforward.
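In practice the Wetterich equation is solved within truncations. The sketch below (an added illustration) uses the simplest such truncation, the local potential approximation, together with an optimized ('Litim'-type) regulator, for which the momentum integral is elementary and the flow reduces to ∂_t U_k(φ) = c_d k^{d+2} / (k² + U_k″(φ)) with t = ln k and c_d = 1 / ((4π)^{d/2} Γ(d/2 + 1)); the bare potential, the dimension and the grid are assumptions made only for the demonstration.

    import numpy as np
    from math import gamma, pi, log

    # Local potential approximation (LPA) flow with a Litim-type regulator (assumed truncation):
    #   dU_k/dt = c_d * k**(d+2) / (k**2 + U_k''(phi)),   t = ln k
    d = 3
    c_d = 1.0 / ((4 * pi) ** (d / 2) * gamma(d / 2 + 1))

    phi = np.linspace(-2.0, 2.0, 201)
    dphi = phi[1] - phi[0]
    U = 0.25 * phi**2 + phi**4 / 24.0        # assumed bare potential at the UV scale k = 1

    k_start, k_stop, n_steps = 1.0, 0.1, 2000
    t_grid = np.linspace(log(k_start), log(k_stop), n_steps)
    dt = t_grid[1] - t_grid[0]               # negative: the flow runs towards the infrared

    for t in t_grid:
        k = np.exp(t)
        U2 = np.gradient(np.gradient(U, dphi), dphi)   # U_k''(phi) by finite differences
        U += dt * c_d * k ** (d + 2) / (k**2 + U2)

    print("U at phi = 0 after the flow:", U[phi.size // 2])
    print("U at phi = 1 after the flow:", U[np.argmin(np.abs(phi - 1.0))])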

Although the Polchinski ERGE and the effective average action ERGE look similar, they are based upon very different philosophies. In the effective average action ERGE, the bare action is left unchanged (and the UV cutoff scale—if there is one—is also left unchanged) but the IR contributions to the effective action are suppressed whereas in the Polchinski ERGE, the QFT is fixed once and for all but the "bare action" is varied at different energy scales to reproduce the prespecified model. Polchinski's version is certainly much closer to Wilson's idea in spirit. Note that one uses "bare actions" whereas the other uses effective (average) actions.

Renormalization group improvement of the effective potential

The renormalization group can also be used to compute effective potentials at orders higher than 1-loop. This kind of approach is particularly interesting for computing corrections to the Coleman–Weinberg mechanism. To do so, one must write the renormalization group equation in terms of the effective potential. For the case of the φ⁴ model:

\left( \mu \frac{\partial}{\partial \mu} + \beta_{\lambda} \frac{\partial}{\partial \lambda} + \varphi \, \gamma_{\varphi} \frac{\partial}{\partial \varphi} \right) V_{\text{eff}} = 0.

In order to determine the effective potential, it is useful to write V_eff as

V_{\text{eff}} = \frac{1}{4} \varphi^{4} \, S_{\text{eff}}\big( \lambda, L(\varphi) \big),

where S_eff is a power series in L(φ) = log(φ²/μ²):

S_{\text{eff}} = A + BL + CL^{2} + DL^{3} + \cdots.

Using the above ansatz, it is possible to solve the renormalization group equation perturbatively and find the effective potential up to the desired order. A pedagogical explanation of this technique is given in the references.
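As the simplest illustration of the procedure, keeping only the leading logarithms amounts to replacing the coupling in the tree-level potential by the running coupling evaluated at the scale set by the field. The sketch below does this resummation for a generic one-loop beta function β_λ = bλ² (the coefficient b, the reference coupling and the scale μ are assumed, illustrative values).

    import numpy as np

    b = 3.0 / (16.0 * np.pi**2)     # assumed one-loop-style coefficient, beta_lambda = b * lambda**2

    def lambda_running(L, lam0):
        # Leading-log running coupling as a function of L = log(phi^2 / mu^2):
        # solves d(lambda)/dL = (b/2) * lambda**2 with lambda(L=0) = lam0.
        return lam0 / (1.0 - 0.5 * b * lam0 * L)

    def V_eff_leading_log(phi, lam0, mu=1.0):
        # RG-improved (leading-log) effective potential: V = (1/4) * lambda(L) * phi**4.
        L = np.log(phi**2 / mu**2)
        return 0.25 * lambda_running(L, lam0) * phi**4

    for phi in (0.5, 1.0, 2.0, 5.0):
        print(f"phi = {phi:3.1f}   V_eff ~ {V_eff_leading_log(phi, lam0=0.1):.5f}")

Expanding λ(L) in powers of L reproduces the series S_eff = A + BL + CL² + ⋯ term by term, with each higher coefficient fixed by the lower ones through the beta function.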

See also

Remarks

  1. Note that scale transformations are a strict subset of conformal transformations, in general, the latter including additional symmetry generators associated with special conformal transformations.
  2. Early applications to quantum electrodynamics are discussed in the influential 1959 book The Theory of Quantized Fields by Nikolay Bogolyubov and Dmitry Shirkov.
  3. Although note that the RG exists independently of the infinities.
  4. The regulator parameter Λ could ultimately be taken to be infinite – infinities reflect the pileup of contributions from an infinity of degrees of freedom at infinitely high energy scales.
  5. Remarkably, the trace anomaly and the running coupling quantum mechanical procedures can themselves induce mass.
  6. For strongly correlated systems, variational techniques are a better alternative.
  7. A superb technical exposition by J. Zinn-Justin (2010) is the classic article Zinn-Justin, Jean (2010). "Critical Phenomena: Field theoretical approach". Scholarpedia. 5 (5): 8346. Bibcode:2010SchpJ...5.8346Z. doi:10.4249/scholarpedia.8346. For example, for Ising-like systems with a Z₂ symmetry or, more generally, for models with an O(N) symmetry, the Gaussian (free) fixed point is long-distance stable above space dimension four, marginally stable in dimension four, and unstable below dimension four. See Quantum triviality.

Citations

  1. "Introduction to Scaling Laws". av8n.com.
  2. Stueckelberg, E.C.G.; Petermann, A. (1953). "La renormalisation des constants dans la théorie de quanta". Helv. Phys. Acta (in French). 26: 499–520.
  3. Gell-Mann, M.; Low, F. E. (1954). "Quantum Electrodynamics at Small Distances" (PDF). Physical Review. 95 (5): 1300–1312. Bibcode:1954PhRv...95.1300G. doi:10.1103/PhysRev.95.1300.
  4. Curtright, T.L.; Zachos, C.K. (March 2011). "Renormalization Group Functional Equations". Physical Review D. 83 (6): 065019. arXiv:1010.5174. Bibcode:2011PhRvD..83f5019C. doi:10.1103/PhysRevD.83.065019. S2CID 119302913.
  5. ^ Callan, C.G. (1970). "Broken scale invariance in scalar field theory". Physical Review D. 2 (8): 1541–1547. Bibcode:1970PhRvD...2.1541C. doi:10.1103/PhysRevD.2.1541.
  6. Fritzsch, Harald (2002). "Fundamental Constants at High Energy". Fortschritte der Physik. 50 (5–7): 518–524. arXiv:hep-ph/0201198. Bibcode:2002ForPh..50..518F. doi:10.1002/1521-3978(200205)50:5/7<518::AID-PROP518>3.0.CO;2-F. S2CID 18481179.
  7. Bogoliubov, N.N.; Shirkov, D.V. (1959). The Theory of Quantized Fields. New York, NY: Interscience.
  8. ^ Kadanoff, Leo P. (1966). "Scaling laws for Ising models near T_c". Physics Physique Fizika. 2 (6): 263. doi:10.1103/PhysicsPhysiqueFizika.2.263.
  9. Wilson, K.G. (1975). "The renormalization group: Critical phenomena and the Kondo problem". Rev. Mod. Phys. 47 (4): 773. Bibcode:1975RvMP...47..773W. doi:10.1103/RevModPhys.47.773.
  10. Wilson, K.G. (1971). "Renormalization group and critical phenomena. I. Renormalization group and the Kadanoff scaling picture". Physical Review B. 4 (9): 3174–3183. Bibcode:1971PhRvB...4.3174W. doi:10.1103/PhysRevB.4.3174.
  11. Wilson, K. (1971). "Renormalization group and critical phenomena. II. Phase-space cell analysis of critical behavior". Physical Review B. 4 (9): 3184–3205. Bibcode:1971PhRvB...4.3184W. doi:10.1103/PhysRevB.4.3184.
  12. Wilson, K.G.; Fisher, M. (1972). "Critical exponents in 3.99 dimensions". Physical Review Letters. 28 (4): 240. Bibcode:1972PhRvL..28..240W. doi:10.1103/physrevlett.28.240.
  13. Wilson, Kenneth G. "Wilson's Nobel Prize address" (PDF). NobelPrize.org.
  14. Symanzik, K. (1970). "Small distance behaviour in field theory and power counting". Communications in Mathematical Physics. 18 (3): 227–246. Bibcode:1970CMaPh..18..227S. doi:10.1007/BF01649434. S2CID 76654566.
  15. Gross, D.J.; Wilczek, F. (1973). "Ultraviolet behavior of non-Abelian gauge theories". Physical Review Letters. 30 (26): 1343–1346. Bibcode:1973PhRvL..30.1343G. doi:10.1103/PhysRevLett.30.1343.
  16. Politzer, H.D. (1973). "Reliable perturbative results for strong interactions". Physical Review Letters. 30 (26): 1346–1349. Bibcode:1973PhRvL..30.1346P. doi:10.1103/PhysRevLett.30.1346.
  17. Pendleton, Brian; Ross, Graham (1981). "Mass and mixing angle predictions from infrared fixed points". Physics Letters B. 98 (4): 291–294. Bibcode:1981PhLB...98..291P. doi:10.1016/0370-2693(81)90017-4.
  18. Hill, Christopher T. (1981). "Quark and lepton masses from renormalization group fixed points". Physical Review D. 24 (3): 691–703. Bibcode:1981PhRvD..24..691H. doi:10.1103/PhysRevD.24.691.
  19. Shankar, R. (1994). "Renormalization-group approach to interacting fermions". Reviews of Modern Physics. 66 (1): 129–192. arXiv:cond-mat/9307009. Bibcode:1994RvMP...66..129S. doi:10.1103/RevModPhys.66.129. (For nonsubscribers see Shankar, R. (1993). "Renormalization-group approach to interacting fermions". Reviews of Modern Physics. 66 (1): 129–192. arXiv:cond-mat/9307009. Bibcode:1994RvMP...66..129S. doi:10.1103/RevModPhys.66.129..)
  20. Adzhemyan, L.Ts.; Kim, T.L.; Kompaniets, M.V.; Sazonov, V.K. (August 2015). "Renormalization group in the infinite-dimensional turbulence: determination of the RG-functions without renormalization constants". Nanosystems: Physics, Chemistry, Mathematics. 6 (4): 461. doi:10.17586/2220-8054-2015-6-4-461-469.
  21. Callaway, David J.E.; Petronzio, Roberto (1984). "Determination of critical points and flow diagrams by Monte Carlo renormalization group methods". Physics Letters B. 139 (3): 189–194. Bibcode:1984PhLB..139..189C. doi:10.1016/0370-2693(84)91242-5. ISSN 0370-2693.
  22. Aizenman, M. (1981). "Proof of the triviality of Φ⁴_d field theory and some mean-field features of Ising models for d > 4". Physical Review Letters. 47 (1): 1–4. Bibcode:1981PhRvL..47....1A. doi:10.1103/PhysRevLett.47.1.
  23. Callaway, David J.E. (1988). "Triviality Pursuit: Can elementary scalar particles exist?". Physics Reports. 167 (5): 241–320. Bibcode:1988PhR...167..241C. doi:10.1016/0370-1573(88)90008-7.
  24. Distler, Jacques. "000648.html". golem.ph.utexas.edu.
  25. Morris, Tim R. (1994). "The Exact renormalization group and approximate solutions". Int. J. Mod. Phys. A. 9 (14): 2411. arXiv:hep-ph/9308265. Bibcode:1994IJMPA...9.2411M. doi:10.1142/S0217751X94000972. S2CID 15749927.
  26. Coleman, Sidney; Weinberg, Erick (1973-03-15). "Radiative Corrections as the Origin of Spontaneous Symmetry Breaking". Physical Review D. 7 (6): 1888–1910. arXiv:hep-th/0507214. Bibcode:1973PhRvD...7.1888C. doi:10.1103/PhysRevD.7.1888. ISSN 0556-2821. S2CID 6898114.
  27. Souza, Huan; Bevilaqua, L. Ibiapina; Lehum, A. C. (2020-08-05). "Renormalization group improvement of the effective potential in six dimensions". Physical Review D. 102 (4): 045004. arXiv:2005.03973. Bibcode:2020PhRvD.102d5004S. doi:10.1103/PhysRevD.102.045004.
