
Memorylessness

Article snapshot taken from Wikipedia, under the Creative Commons Attribution-ShareAlike license.
Waiting time property of certain probability distributions.
For use of the term in materials science, see hysteresis. For use of the term in stochastic processes and Markov chains, see Markov property.

In probability and statistics, memorylessness is a property of certain probability distributions. It describes situations where the time already spent waiting for an event does not affect how much longer the wait will be. To model memoryless situations accurately, we have to disregard the past state of the system – the probabilities remain unaffected by the history of the process.

Only two kinds of distributions are memoryless: geometric and exponential probability distributions.

Waiting time examples

With memory

Most phenomena are not memoryless, which means that observers will obtain information about them over time. For example, suppose that X is a random variable, the lifetime of a car engine, expressed in terms of "number of miles driven until the engine breaks down". It is clear, based on our intuition, that an engine which has already been driven for 300,000 miles will have a much shorter remaining lifetime than would a second (equivalent) engine which has only been driven for 1,000 miles. Hence, this random variable would not have the memorylessness property.

Without memory

In contrast, let us examine a situation which would exhibit memorylessness. Imagine a long hallway, lined on one wall with thousands of safes. Each safe has a dial with 500 positions, and each has been assigned an opening position at random. Imagine that an eccentric person walks down the hallway, stopping once at each safe to make a single random attempt to open it. In this case, we might define the random variable X as the lifetime of their search, expressed in terms of "number of attempts the person must make until they successfully open a safe". Here, the expected number of additional attempts always equals 500, regardless of how many attempts have already been made. Each new attempt has a 1/500 chance of succeeding, so the person is likely to open a safe within roughly the next 500 attempts – but with each new failure they make no "progress" toward ultimately succeeding. Even if the safe-cracker has just failed 499 consecutive times (or 4,999 times), we expect to wait 500 more attempts until we observe the next success. If, instead, this person focused their attempts on a single safe, and "remembered" their previous attempts to open it, they would be guaranteed to open the safe after, at most, 500 attempts (and, in fact, at onset would expect to need only about 250 attempts, not 500).
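The safe-cracking example can be checked with a small Monte Carlo sketch (Python, written for this article; the 1/500 success chance comes from the example, while the choice of conditioning on 100 prior failures is an illustrative assumption):

```python
import random

P = 1 / 500        # chance that a single random attempt opens a safe
TRIALS = 10_000    # number of simulated searches

def attempts_until_success(p: float) -> int:
    """Count trial-by-trial attempts up to and including the first success."""
    n = 1
    while random.random() >= p:
        n += 1
    return n

# Unconditional mean number of attempts.
samples = [attempts_until_success(P) for _ in range(TRIALS)]
mean_all = sum(samples) / len(samples)

# Mean *additional* attempts, given that the first 100 attempts all failed.
tail = [x - 100 for x in samples if x > 100]
mean_given_100_failures = sum(tail) / len(tail)

# Both estimates hover around 500: past failures make no difference.
print(f"{mean_all:.0f} {mean_given_100_failures:.0f}")
```

Because every attempt is independent, conditioning on any number of past failures leaves the distribution of remaining attempts unchanged.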

The universal law of radioactive decay, which describes the time until a given radioactive particle decays, is a real-life example of memorylessness. An often used (theoretical) example of memorylessness in queueing theory is the time a storekeeper must wait before the arrival of the next customer.

Discrete memorylessness

If a discrete random variable X is memoryless, then it satisfies

Pr(X > m + n | X > m) = Pr(X > n)

where m and n are natural numbers. The equality still holds when ≥ is substituted for > on the left-hand side of the equation.

The only discrete random variable that is memoryless is the geometric random variable taking values in ℕ. This random variable describes when the first success in an infinite sequence of independent and identically distributed Bernoulli trials occurs. The memorylessness property asserts that the number of previously failed trials has no effect on the number of future trials needed for a success.
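The identity Pr(X > m + n | X > m) = Pr(X > n) can be verified exactly from the geometric survival function Pr(X > n) = (1 − p)ⁿ. A minimal sketch, assuming an arbitrary illustrative success probability p = 1/4 and using exact rational arithmetic:

```python
from fractions import Fraction

p = Fraction(1, 4)  # arbitrary illustrative success probability

def surv(n: int) -> Fraction:
    """Pr(X > n) for a geometric variable on {1, 2, ...}: the first n trials all fail."""
    return (1 - p) ** n

# Pr(X > m + n | X > m) = Pr(X > m + n) / Pr(X > m) equals Pr(X > n) exactly.
for m in range(6):
    for n in range(6):
        assert surv(m + n) / surv(m) == surv(n)
```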

Geometric random variables can also be defined as taking values in ℕ₀, which describes the number of failed trials before the first success in a sequence of independent and identically distributed Bernoulli trials. These random variables do not satisfy the memoryless condition stated above; however, they do satisfy a slightly modified memoryless condition:

Pr(X > m + n | X ≥ m) = Pr(X > n).

As with the first definition, the only discrete random variables that satisfy this memoryless condition are geometric random variables taking values in ℕ₀. In the continuous case, these two definitions of memorylessness are equivalent.

Continuous memorylessness

If a continuous random variable X is memoryless, then it satisfies

Pr(X > s + t | X > t) = Pr(X > s)

where s and t are nonnegative real numbers. The equality still holds when ≥ is substituted for >.

The only continuous random variable that is memoryless is the exponential random variable. It models random processes like time between consecutive events. The memorylessness property asserts that the amount of time since the previous event has no effect on the future time until the next event occurs.
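The continuous condition can likewise be checked directly from the exponential survival function Pr(X > t) = e^(−λt). A minimal sketch, assuming an arbitrary illustrative rate λ = 1.3:

```python
import math

lam = 1.3  # arbitrary illustrative rate parameter

def surv(t: float) -> float:
    """Pr(X > t) for an exponential random variable with rate lam."""
    return math.exp(-lam * t)

# Pr(X > s + t | X > t) = Pr(X > s + t) / Pr(X > t) matches Pr(X > s).
for s in (0.0, 0.5, 2.0):
    for t in (0.0, 1.0, 3.7):
        assert math.isclose(surv(s + t) / surv(t), surv(s))
```

The check passes for every (s, t) pair because e^(−λ(s+t)) / e^(−λt) = e^(−λs) identically.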

Exponential distribution and memorylessness proof

The exponential distribution is the only memoryless continuous probability distribution, as the following proof shows.

First, define S(t) = Pr(X > t), also known as the distribution's survival function. From the memorylessness property and the definition of conditional probability, it follows that

Pr(X > t + s) / Pr(X > t) = Pr(X > s).

This gives the functional equation

S(t + s) = S(t) S(s),

which implies S(pt) = S(t)^p where p is a natural number. Similarly, S(t/q) = S(t)^(1/q) where q is a natural number, excluding 0. Therefore, every rational number a = p/q satisfies

S(at) = S(t)^a.

Since S is continuous and the set of rational numbers is dense in the set of real numbers, S(xt) = S(t)^x for every nonnegative real number x. Setting t = 1 gives

S(x) = S(1)^x.

As a result, S(x) = e^(−λx), where λ = −ln S(1) ≥ 0.
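The proof's conclusion, that S(x) = S(1)^x coincides with e^(−λx) for λ = −ln S(1), can be sanity-checked numerically. A minimal sketch, assuming an arbitrary survival value S(1) = 0.3:

```python
import math

S1 = 0.3                 # arbitrary survival value S(1) in (0, 1)
lam = -math.log(S1)      # λ = -ln S(1), as defined in the proof

# S(x) = S(1)**x coincides with the exponential survival function e^(-λx).
for x in (0.0, 0.25, 1.0, 4.5):
    assert math.isclose(S1 ** x, math.exp(-lam * x))
```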
