
Event (probability theory)

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license.
In statistics and probability theory, an event is a set of outcomes to which a probability is assigned.

In probability theory, an event is a set of outcomes of an experiment (a subset of the sample space) to which a probability is assigned. A single outcome may be an element of many different events, and different events in an experiment are usually not equally likely, since they may include very different groups of outcomes. An event consisting of only a single outcome is called an elementary event or an atomic event; that is, it is a singleton set. An event that has more than one possible outcome is called a compound event. An event $S$ is said to occur if $S$ contains the outcome $x$ of the experiment (or trial); that is, if $x \in S$. The probability (with respect to some probability measure) that an event $S$ occurs is the probability that $S$ contains the outcome $x$ of an experiment; that is, it is the probability that $x \in S$. An event defines a complementary event, namely the complementary set (the event not occurring), and together these define a Bernoulli trial: did the event occur or not?
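For illustration, here is a minimal Python sketch of these notions for a single roll of a fair die; the set names such as `even` are purely illustrative and not part of the article:

```python
import random

# Sample space for a single roll of a fair six-sided die.
sample_space = {1, 2, 3, 4, 5, 6}

# A compound event: "the roll is even" (a subset with more than one outcome).
even = {2, 4, 6}
# Its complementary event: "the roll is not even".
not_even = sample_space - even

# An elementary (atomic) event: a singleton subset.
rolled_three = {3}

# One trial: an event occurs exactly when it contains the observed outcome.
outcome = random.choice(sorted(sample_space))
print("outcome:", outcome)
print("'even' occurred:", outcome in even)          # a Bernoulli trial: yes or no
print("'not even' occurred:", outcome in not_even)  # exactly one of the two occurs
```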

Typically, when the sample space is finite, any subset of the sample space is an event (that is, all elements of the power set of the sample space are defined as events). However, this approach does not work well in cases where the sample space is uncountably infinite. So, when defining a probability space it is possible, and often necessary, to exclude certain subsets of the sample space from being events (see § Events in probability spaces, below).

A simple example

(Figure: An Euler diagram of an event, where $B$ is the sample space and $A$ is an event; by the ratio of their areas, the probability of $A$ is approximately 0.4.)

If we assemble a deck of 52 playing cards with no jokers, and draw a single card from the deck, then the sample space is a 52-element set, as each card is a possible outcome. An event, however, is any subset of the sample space, including any singleton set (an elementary event), the empty set (an impossible event, with probability zero) and the sample space itself (a certain event, with probability one). Other events are proper subsets of the sample space that contain multiple elements. So, for example, potential events include:
  • "Red and black at the same time without being a joker" (0 elements),
  • "The 5 of Hearts" (1 element),
  • "A King" (4 elements),
  • "A Face card" (12 elements),
  • "A Spade" (13 elements),
  • "A Face card or a red suit" (32 elements),
  • "A card" (52 elements).

Since all events are sets, they are usually written as sets (for example, {1, 2, 3}), and represented graphically using Venn diagrams. In the situation where each outcome in the sample space $\Omega$ is equally likely, the probability of an event $A$ is given by
$$\mathrm{P}(A) = \frac{|A|}{|\Omega|} \qquad \left(\text{alternatively: } \Pr(A) = \frac{|A|}{|\Omega|}\right).$$
This rule can readily be applied to each of the example events above.
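As a minimal Python sketch, the following builds the 52-card sample space and applies this formula to several of the events listed above; the helper `prob` and the variable names are illustrative, not part of the article:

```python
from fractions import Fraction
from itertools import product

# The 52-card sample space as (rank, suit) pairs, with no jokers.
ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["Hearts", "Diamonds", "Clubs", "Spades"]
omega = set(product(ranks, suits))  # |Ω| = 52

def prob(event):
    """P(A) = |A| / |Ω| when every outcome is equally likely."""
    return Fraction(len(event), len(omega))

# A few of the events listed above, written explicitly as subsets of Ω.
king = {c for c in omega if c[0] == "K"}                    # 4 elements
face = {c for c in omega if c[0] in {"J", "Q", "K"}}        # 12 elements
spade = {c for c in omega if c[1] == "Spades"}              # 13 elements
red = {c for c in omega if c[1] in {"Hearts", "Diamonds"}}  # 26 elements
face_or_red = face | red                                    # 32 elements

events = [("A King", king), ("A Face card", face), ("A Spade", spade),
          ("A Face card or a red suit", face_or_red),
          ("A card", omega), ("Impossible event", set())]
for name, event in events:
    print(f"{name}: |A| = {len(event)}, P(A) = {prob(event)}")
```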

Events in probability spaces

Defining all subsets of the sample space as events works well when there are only finitely many outcomes, but gives rise to problems when the sample space is uncountably infinite. For many standard probability distributions, such as the normal distribution, the sample space is the set of real numbers or some subset of the real numbers. Attempts to define probabilities for all subsets of the real numbers run into difficulties when one considers 'badly behaved' sets, such as those that are nonmeasurable. Hence, it is necessary to restrict attention to a more limited family of subsets. For the standard tools of probability theory, such as joint and conditional probabilities, to work, it is necessary to use a σ-algebra, that is, a family closed under complementation and countable unions of its members. The most natural choice of σ-algebra is the Borel measurable sets, derived from unions and intersections of intervals. However, the larger class of Lebesgue measurable sets proves more useful in practice.
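A rough Python sketch of these closure conditions, restricted to a finite sample space where countable unions reduce to finite unions; the function `is_sigma_algebra` and the example families are illustrative, not a general test for infinite spaces:

```python
from itertools import combinations

def is_sigma_algebra(omega, family):
    """Check the closure conditions for a *finite* sample space, where
    countable unions reduce to finite unions of members of the family."""
    omega = frozenset(omega)
    family = {frozenset(s) for s in family}
    if omega not in family:                                # contains the whole space
        return False
    if any(omega - s not in family for s in family):       # closed under complement
        return False
    for r in range(2, len(family) + 1):                    # closed under (finite) unions
        for members in combinations(family, r):
            if frozenset().union(*members) not in family:
                return False
    return True

omega = {1, 2, 3, 4}
# The smallest σ-algebra containing {1, 2}: only it, its complement, the empty
# set, and Ω are events; a subset such as {1} is not assigned a probability.
print(is_sigma_algebra(omega, [set(), {1, 2}, {3, 4}, omega]))  # True
print(is_sigma_algebra(omega, [set(), {1}, omega]))             # False: {2, 3, 4} is missing
```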

In the general measure-theoretic description of probability spaces, an event may be defined as an element of a selected 𝜎-algebra of subsets of the sample space. Under this definition, any subset of the sample space that is not an element of the 𝜎-algebra is not an event, and does not have a probability. With a reasonable specification of the probability space, however, all events of interest are elements of the 𝜎-algebra.
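For reference, a standard textbook formulation of this description: a probability space is a triple $(\Omega, \mathcal{F}, P)$ in which the σ-algebra $\mathcal{F} \subseteq 2^{\Omega}$ and the probability measure $P$ satisfy
$$\Omega \in \mathcal{F}, \qquad A \in \mathcal{F} \implies \Omega \setminus A \in \mathcal{F}, \qquad A_1, A_2, \ldots \in \mathcal{F} \implies \bigcup_{i=1}^{\infty} A_i \in \mathcal{F},$$
$$P : \mathcal{F} \to [0, 1], \qquad P(\Omega) = 1, \qquad P\Bigl(\bigcup_{i=1}^{\infty} A_i\Bigr) = \sum_{i=1}^{\infty} P(A_i) \ \text{ for pairwise disjoint } A_i \in \mathcal{F},$$
and the events are exactly the elements of $\mathcal{F}$.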

A note on notation

Even though events are subsets of some sample space $\Omega$, they are often written as predicates or indicators involving random variables. For example, if $X$ is a real-valued random variable defined on the sample space $\Omega$, the event $\{\omega \in \Omega \mid u < X(\omega) \leq v\}$ can be written more conveniently as, simply, $u < X \leq v$. This is especially common in formulas for a probability, such as $\Pr(u < X \leq v) = F(v) - F(u)$. The set $u < X \leq v$ is an example of an inverse image under the mapping $X$, because $\omega \in X^{-1}((u, v])$ if and only if $u < X(\omega) \leq v$.
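A minimal Python sketch of this identity for a standard normal $X$, using only the standard library and cross-checked with a simple Monte Carlo estimate; the function name `normal_cdf` and the bounds $u$, $v$ are illustrative:

```python
import math
import random

def normal_cdf(x, mu=0.0, sigma=1.0):
    """CDF F of a normal random variable, computed from the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

u, v = -1.0, 1.0

# Probability of the event {ω : u < X(ω) ≤ v} via the CDF identity F(v) - F(u).
exact = normal_cdf(v) - normal_cdf(u)

# Monte Carlo cross-check: the event occurs on a trial iff the outcome falls in (u, v].
n = 200_000
hits = sum(1 for _ in range(n) if u < random.gauss(0.0, 1.0) <= v)
print(f"F(v) - F(u) = {exact:.4f}, Monte Carlo estimate ≈ {hits / n:.4f}")
```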
