Lag operator

In time series analysis, the lag operator (L) or backshift operator (B) operates on an element of a time series to produce the previous element. For example, given some time series

$X = \{X_1, X_2, \dots\}$

then

$LX_t = X_{t-1}$ for all $t > 1$

or similarly in terms of the backshift operator B: $BX_t = X_{t-1}$ for all $t > 1$. Equivalently, this definition can be represented as

$X_t = LX_{t+1}$ for all $t \geq 1$

The lag operator (as well as the backshift operator) can be raised to arbitrary integer powers, so that

$L^{-1}X_t = X_{t+1}$

and

$L^k X_t = X_{t-k}.$
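
As a concrete illustration, here is a minimal Python sketch of these identities using pandas; lag is a helper name introduced here for the example, not a standard API:

    import pandas as pd

    def lag(x: pd.Series, k: int = 1) -> pd.Series:
        """Apply L^k to a series: positive k lags, negative k leads."""
        return x.shift(k)  # positions with no predecessor/successor become NaN

    x = pd.Series([10.0, 20.0, 30.0, 40.0])
    print(lag(x, 1).tolist())   # L X_t = X_{t-1}:      [nan, 10.0, 20.0, 30.0]
    print(lag(x, -1).tolist())  # L^{-1} X_t = X_{t+1}: [20.0, 30.0, 40.0, nan]
    print(lag(x, 2).tolist())   # L^2 X_t = X_{t-2}:    [nan, nan, 10.0, 20.0]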

Lag polynomials

Polynomials of the lag operator can be used, and this is a common notation for ARMA (autoregressive moving average) models. For example,

$\varepsilon_t = X_t - \sum_{i=1}^{p} \varphi_i X_{t-i} = \left(1 - \sum_{i=1}^{p} \varphi_i L^i\right) X_t$

specifies an AR(p) model.
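
For instance, with $p = 2$ this expands to $\varepsilon_t = (1 - \varphi_1 L - \varphi_2 L^2) X_t = X_t - \varphi_1 X_{t-1} - \varphi_2 X_{t-2}$.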

A polynomial of lag operators is called a lag polynomial so that, for example, the ARMA model can be concisely specified as

$\varphi(L) X_t = \theta(L) \varepsilon_t$

where $\varphi(L)$ and $\theta(L)$ respectively represent the lag polynomials

$\varphi(L) = 1 - \sum_{i=1}^{p} \varphi_i L^i$

and

$\theta(L) = 1 + \sum_{i=1}^{q} \theta_i L^i.$

Polynomials of lag operators follow similar rules of multiplication and division as do numbers and polynomials of variables. For example,

$X_t = \frac{\theta(L)}{\varphi(L)} \varepsilon_t,$

means the same thing as

$\varphi(L) X_t = \theta(L) \varepsilon_t.$

As with polynomials of variables, a polynomial in the lag operator can be divided by another one using polynomial long division. In general dividing one such polynomial by another, when each has a finite order (highest exponent), results in an infinite-order polynomial.
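
For example, dividing by the finite AR(1) polynomial $\varphi(L) = 1 - \varphi L$ produces the geometric series (convergent for $|\varphi| < 1$)

$\frac{1}{1 - \varphi L} = 1 + \varphi L + \varphi^2 L^2 + \cdots = \sum_{i=0}^{\infty} \varphi^i L^i,$

so the AR(1) model $(1 - \varphi L) X_t = \varepsilon_t$ has the infinite-order moving-average representation $X_t = \sum_{i=0}^{\infty} \varphi^i \varepsilon_{t-i}$.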

An annihilator operator, denoted $[\ ]_+$, removes the terms of the polynomial with negative powers of $L$ (future values).

Note that $\varphi(1)$ denotes the sum of the coefficients:

$\varphi(1) = 1 - \sum_{i=1}^{p} \varphi_i$

Difference operator

Main article: Finite difference

In time series analysis, the first difference operator $\Delta$ is a special case of a lag polynomial:

$\Delta X_t = X_t - X_{t-1} = (1 - L) X_t.$

Similarly, the second difference operator works as follows:

$\Delta(\Delta X_t) = \Delta X_t - \Delta X_{t-1}$
$\Delta^2 X_t = (1 - L)\Delta X_t = (1 - L)(1 - L) X_t = (1 - L)^2 X_t.$

The above approach generalises to the $i$-th difference operator: $\Delta^i X_t = (1 - L)^i X_t.$
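
This identity can be checked numerically; below is a minimal NumPy sketch (the example series $X_t = t^2$ is illustrative only), using the fact that np.diff(x, n=i) computes $(1 - L)^i X_t$ while dropping the first i undefined entries:

    import numpy as np

    x = np.array([1.0, 4.0, 9.0, 16.0, 25.0])  # X_t = t^2 for t = 1..5

    d1 = x[1:] - x[:-1]    # (1 - L) X_t   = X_t - X_{t-1}
    d2 = d1[1:] - d1[:-1]  # (1 - L)^2 X_t = difference of the differences

    assert np.array_equal(d1, np.diff(x))       # [3. 5. 7. 9.]
    assert np.array_equal(d2, np.diff(x, n=2))  # [2. 2. 2.], constant, as expected for t^2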

Conditional expectation

In stochastic processes it is common to work with the expected value of a variable given a previous information set. Let $\Omega_t$ be all information that is common knowledge at time t (this is often subscripted below the expectation operator); then the expected value of the realisation of X, j time-steps in the future, can be written equivalently as:

$E[X_{t+j} \mid \Omega_t] = E_t[X_{t+j}].$

With these time-dependent conditional expectations, there is a need to distinguish between the backshift operator (B), which only adjusts the date of the forecasted variable, and the lag operator (L), which adjusts both the date of the forecasted variable and the information set:

$L^n E_t[X_{t+j}] = E_{t-n}[X_{t+j-n}],$
$B^n E_t[X_{t+j}] = E_t[X_{t+j-n}].$
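
As a concrete instance, take $n = 1$ and $j = 1$: $L E_t[X_{t+1}] = E_{t-1}[X_t]$ moves both the forecast target and the information set back one period, whereas $B E_t[X_{t+1}] = E_t[X_t] = X_t$ moves only the target, which is already known at time t (assuming $X_t$ is contained in $\Omega_t$).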
