Odds algorithm

Method of computing optimal strategies for last-success problems

In decision theory, the odds algorithm (or Bruss algorithm) is a mathematical method for computing optimal strategies for a class of problems that belong to the domain of optimal stopping problems. The solution of these problems follows from the odds strategy, whose importance lies in its optimality, as explained below.

The odds algorithm applies to a class of problems called last-success problems. Formally, the objective in these problems is to maximize the probability of identifying, in a sequence of sequentially observed independent events, the last event satisfying a specific criterion (a "specific event"). This identification must be done at the time of observation; no revisiting of preceding observations is permitted. Usually, a specific event is one the decision maker regards as genuinely interesting, in view of which it is worth "stopping" to take a well-defined action. Such problems are encountered in several situations.

Examples

Two different situations exemplify the interest in maximizing the probability of stopping on the last specific event.

  1. Suppose a car is advertised for sale to the highest bidder (best "offer"). Let n potential buyers respond and ask to see the car. Each insists upon an immediate decision from the seller to accept the bid or not. Define a bid as interesting (coded 1) if it is better than all preceding bids, and uninteresting (coded 0) otherwise. The bids form a random sequence of 0s and 1s. Only 1s interest the seller, who may fear that each successive 1 might be the last. It follows from the definition that the very last 1 is the highest bid. Maximizing the probability of selling on the last 1 therefore means maximizing the probability of selling best.
  2. A physician, using a special treatment, may use the code 1 for a successful treatment and 0 otherwise. The physician treats a sequence of n patients the same way, and wants both to minimize suffering and to treat every responsive patient in the sequence. Stopping on the last 1 in such a random sequence of 0s and 1s would achieve this objective. Since the physician is no prophet, the objective is to maximize the probability of stopping on the last 1. (See Compassionate use.)

Definitions

Consider a sequence of n independent events. Associate with this sequence another sequence of independent events I_1, I_2, ..., I_n with values 1 or 0. Here I_k = 1, called a success, stands for the event that the kth observation is interesting (as defined by the decision maker), and I_k = 0 for non-interesting. These random variables I_1, I_2, ..., I_n are observed sequentially and the goal is to correctly select the last success when it is observed.

Let p_k = P(I_k = 1) be the probability that the kth event is interesting. Further let q_k = 1 - p_k and r_k = p_k / q_k. Note that r_k represents the odds of the kth event turning out to be interesting, explaining the name of the odds algorithm.
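
For example, with hypothetical probabilities (the values below are invented for illustration), the odds follow directly from the definitions:

    # Hypothetical success probabilities p_k for n = 4 observations.
    p = [0.5, 0.4, 0.3, 0.2]
    q = [1 - pk for pk in p]                 # q_k = 1 - p_k
    r = [pk / qk for pk, qk in zip(p, q)]    # odds r_k = p_k / q_k
    print(r)                                 # [1.0, 0.666..., 0.4285..., 0.25]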

Algorithmic procedure

The odds algorithm sums up the odds in reverse order

  r_n + r_{n-1} + r_{n-2} + ...,

until this sum reaches or exceeds the value 1 for the first time. If this happens at index s, it saves s and the corresponding sum

  R_s = r_n + r_{n-1} + r_{n-2} + ... + r_s.

If the sum of the odds does not reach 1, it sets s = 1. At the same time it computes

  Q_s = q_n q_{n-1} ... q_s.

The output is

  1. s, the stopping threshold
  2. w = Q_s R_s, the win probability.
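
The procedure translates directly into a short program. The following Python sketch (the function name and 0-based list indexing are ours) returns both outputs:

    def odds_algorithm(p):
        # p[k] is the success probability of observation k+1, i.e. p_{k+1}.
        # Returns the stopping threshold s (1-based) and the win probability w.
        R, Q, s = 0.0, 1.0, 1                  # odds sum, product of q's, default s = 1
        for k in range(len(p) - 1, -1, -1):    # sum the odds in reverse order
            q = 1.0 - p[k]
            R += p[k] / q                      # add r_k = p_k / q_k
            Q *= q                             # extend Q = q_n q_{n-1} ...
            if R >= 1.0:
                s = k + 1                      # sum reaches or exceeds 1 for the first time
                break
        return s, Q * R                        # w = Q_s * R_s

    # Example: classical secretary problem with n = 10, where the kth
    # candidate is a record with probability p_k = 1/k. (p_1 = 1 causes no
    # division by zero because the odds sum reaches 1 before k = 1.)
    n = 10
    p = [1.0 / k for k in range(1, n + 1)]
    print(odds_algorithm(p))   # (4, 0.3987...): observe 3 candidates, then stop on the next record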

Odds strategy

The odds strategy is the rule to observe the events one after the other and to stop on the first interesting event from index s onwards (if any), where s is the stopping threshold of output 1.
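
As an illustration, a Python sketch of the strategy together with a Monte Carlo check that its success frequency matches the win probability w (this reuses odds_algorithm from the sketch above; the check itself is ours):

    import random

    def apply_odds_strategy(events, s):
        # Observe the 0/1 events in order; stop on the first success at
        # index s or later (1-based). Returns the stopping index, or None
        # if no success occurs from s onward.
        for k, indicator in enumerate(events, start=1):
            if k >= s and indicator == 1:
                return k
        return None

    # Monte Carlo check in the secretary setting with n = 10: the fraction
    # of runs where the strategy stops exactly on the last success should
    # be close to w.
    p = [1.0 / k for k in range(1, 11)]
    s, w = odds_algorithm(p)
    trials, wins = 100_000, 0
    for _ in range(trials):
        events = [1 if random.random() < pk else 0 for pk in p]
        last = max((k for k, e in enumerate(events, 1) if e == 1), default=None)
        stop = apply_odds_strategy(events, s)
        wins += stop is not None and stop == last
    print(wins / trials, w)   # both ≈ 0.3987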

The importance of the odds strategy, and hence of the odds algorithm, lies in the following odds theorem.

Odds theorem

The odds theorem states that

  1. The odds strategy is optimal, that is, it maximizes the probability of stopping on the last 1.
  2. The win probability of the odds strategy equals w = Q_s R_s.
  3. If R_s ≥ 1, the win probability w is always at least 1/e = 0.367879..., and this lower bound is best possible.
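
Numerically, in the secretary setting above the condition R_s ≥ 1 holds, and the win probability approaches the 1/e bound from above as n grows (again reusing odds_algorithm from the earlier sketch):

    import math

    for n in (10, 100, 1000):
        p = [1.0 / k for k in range(1, n + 1)]
        _, w = odds_algorithm(p)
        print(n, round(w, 4))       # 0.3987, 0.3710, 0.3682
    print(round(1 / math.e, 4))     # limit 1/e ≈ 0.3679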

Features

The odds algorithm computes the optimal strategy and the optimal win probability at the same time. Moreover, its number of operations is (sub)linear in n; since any algorithm must at least inspect the odds, no faster algorithm can exist for all sequences, so the odds algorithm is also optimal as an algorithm.

Sources

Bruss 2000 devised the odds algorithm and coined its name. It is also known as the Bruss algorithm (or Bruss strategy). Free implementations can be found on the web.

Applications

Applications range from medical questions in clinical trials to sales problems, secretary problems, portfolio selection, (one-way) search strategies, trajectory problems, the parking problem, and problems in online maintenance, among others.

There exists, in the same spirit, an odds theorem for continuous-time arrival processes with independent increments, such as the Poisson process (Bruss 2000). In some cases the odds are not necessarily known in advance (as in Example 2 above), so that the odds algorithm cannot be applied directly. In this case each step can use sequential estimates of the odds. This is meaningful if the number of unknown parameters is not large compared with the number n of observations. The question of optimality is then more complicated, however, and requires additional studies. Generalizations of the odds algorithm allow for different rewards for failing to stop and for wrong stops, as well as for replacing the independence assumptions by weaker ones (Ferguson 2008).
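
A minimal sketch of such sequential estimation, assuming a single unknown constant success probability p and a Laplace-style running estimate (both modelling choices are our own assumptions, not from the source):

    def adaptive_odds_stop(events):
        # Unknown constant success probability, estimated on the fly.
        # With constant odds r, the odds strategy stops on a success at
        # index k exactly when the remaining odds mass (n - k) * r is
        # below 1; here r is replaced by a running estimate r_hat.
        n = len(events)
        successes = 0
        for k, indicator in enumerate(events, start=1):
            successes += indicator
            p_hat = (successes + 1) / (k + 2)   # Laplace estimate of p
            r_hat = p_hat / (1 - p_hat)         # estimated odds
            if indicator == 1 and (n - k) * r_hat < 1:
                return k
        return None                             # never stopped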

Variations

Bruss & Paindaveine 2000 discussed a problem of selecting the last k successes.

Tamaki 2010 proved a multiplicative odds theorem which deals with a problem of stopping at any of the last ℓ successes. A tight lower bound on the win probability was obtained by Matsui & Ano 2014.

Matsui & Ano 2017 discussed a problem of selecting k out of the last ℓ successes and obtained a tight lower bound on the win probability. When ℓ = k = 1, the problem is equivalent to Bruss' odds problem. If ℓ = k ≥ 1, the problem is equivalent to that in Bruss & Paindaveine 2000. The problem discussed by Tamaki 2010 is obtained by setting ℓ ≥ k = 1.

Multiple choice problem

A player is allowed r choices, and he wins if any choice is the last success. For the classical secretary problem, Gilbert & Mosteller 1966 discussed the cases r = 2, 3, 4. The odds problem with r = 2, 3 is discussed by Ano, Kakinuma & Miyoshi 2010. For further cases of the odds problem, see Matsui & Ano 2016.

An optimal strategy for this problem belongs to the class of strategies defined by a set of threshold numbers (a_1, a_2, ..., a_r), where a_1 > a_2 > ... > a_r.

Specifically, imagine that you have r letters of acceptance labelled from 1 to r. You would have r application officers, each holding one letter. You keep interviewing the candidates and rank them on a chart that every application officer can see. Now officer i would send their letter of acceptance to the first candidate that is better than all candidates 1 to a_i. (Unsent letters of acceptance are by default given to the last applicants, the same as in the standard secretary problem.)

When r = 2, Ano, Kakinuma & Miyoshi 2010 showed that the tight lower bound of the win probability is equal to e^{-1} + e^{-3/2}. For a general positive integer r, Matsui & Ano 2016 proved that the tight lower bound of the win probability is the win probability of the secretary problem variant where one must pick the top k candidates using just k attempts.

When r = 3, 4, 5, the tight lower bounds of the win probabilities are equal to e^{-1} + e^{-3/2} + e^{-47/24}, e^{-1} + e^{-3/2} + e^{-47/24} + e^{-2761/1152}, and e^{-1} + e^{-3/2} + e^{-47/24} + e^{-2761/1152} + e^{-4162637/1474560}, respectively.
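
For reference, these cumulative bounds can be evaluated numerically; a small Python snippet (ours, for illustration):

    import math

    # Cumulative lower bounds for r = 1, ..., 5 (r = 1 recovers the 1/e bound).
    exponents = [1, 3/2, 47/24, 2761/1152, 4162637/1474560]
    bound = 0.0
    for r, x in enumerate(exponents, start=1):
        bound += math.exp(-x)
        print(r, round(bound, 4))   # ≈ 0.3679, 0.5910, 0.7321, 0.8231, 0.8825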

For further numerical cases for r = 6, ..., 10, and an algorithm for general cases, see Matsui & Ano 2016.
