Cover's theorem

Statement in computational learning theory

Cover's theorem is a statement in computational learning theory and is one of the primary theoretical motivations for the use of non-linear kernel methods in machine learning applications. It is named after the information theorist Thomas M. Cover, who stated it in 1965, referring to it as the counting function theorem.

The Theorem

The theorem expresses the number of homogeneously linearly separable sets of $N$ points in $D$ dimensions as an explicit counting function $C(N, D)$ of the number of points $N$ and the dimensionality $D$.

It requires, as a necessary and sufficient condition, that the points be in general position, meaning that every subset of $D$ or fewer points is linearly independent (no unnecessary alignments). This condition is satisfied "with probability 1", or almost surely, for random point sets, while it may easily be violated for real data, since these are often structured along lower-dimensional manifolds within the data space.
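
Under the general position assumption, the counting function admits the closed form stated by Cover:

$$C(N, D) = 2\sum_{k=0}^{D-1} \binom{N-1}{k}.$$

For $N \leq D$ the sum includes all binomial coefficients of $N - 1$, so $C(N, D) = 2^N$ and every dichotomy is separable; for affine (non-homogeneous) separators in $D$ dimensions the corresponding count is $C(N, D+1)$, which extends the shattering range to $N \leq D+1$ as stated below.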

The function $C(N, D)$ follows two different regimes depending on the relationship between $N$ and $D$.

  • For $N \leq D+1$, the function is exponential in $N$. This essentially means that any set of labelled points in general position and in number no larger than the dimensionality + 1 is linearly separable; in jargon, it is said that a linear classifier shatters any point set with $N \leq D+1$. This limiting quantity is also known as the Vapnik–Chervonenkis dimension of the linear classifier.
  • For $N > D+1$, the counting function grows less than exponentially. This means that, given a sample of fixed size $N$, for larger dimensionality $D$ it is more probable that a random set of labelled points is linearly separable. Conversely, with fixed dimensionality, for larger sample sizes the number of linearly separable sets of random points will be smaller; in other words, the probability of finding a linearly separable sample decreases with $N$, as illustrated by the sketch after this list.
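
The ratio $C(N, D)/2^N$ can be read as the probability that a random labelling of $N$ points in general position is homogeneously linearly separable. A minimal Python sketch (function names are illustrative) computing this ratio from the closed form above makes the two regimes visible, including the well-known transition at $N = 2D$, where the probability is exactly 1/2:

```python
from math import comb

def cover_count(n: int, d: int) -> int:
    """Cover's counting function C(N, D): the number of homogeneously
    linearly separable dichotomies of N points in general position in R^D."""
    return 2 * sum(comb(n - 1, k) for k in range(d))

def separable_fraction(n: int, d: int) -> float:
    """Fraction of all 2^N dichotomies that are linearly separable."""
    return cover_count(n, d) / 2 ** n

d = 10
for n in (5, 10, 15, 20, 25, 30, 40):
    print(f"N={n:3d}, D={d}: fraction separable = {separable_fraction(n, d):.4f}")
# The fraction is 1.0 for N <= D, exactly 0.5 at N = 2D,
# and decays towards 0 as N grows beyond 2D.
```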

A consequence of the theorem is that given a set of training data that is not linearly separable, one can with high probability transform it into a training set that is linearly separable by projecting it into a higher-dimensional space via some non-linear transformation, or:

A complex pattern-classification problem, cast in a high-dimensional space nonlinearly, is more likely to be linearly separable than in a low-dimensional space, provided that the space is not densely populated.

Proof

The proof of Cover's counting function theorem can be obtained from the recursive relation

$$C(N+1, D) = C(N, D) + C(N, D-1),$$

together with the base cases $C(1, D) = 2$ for $D \geq 1$ and $C(N, 0) = 0$. Induction on $N$ using Pascal's rule for binomial coefficients then yields the closed form given above.
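
The recursion is easy to check numerically against the closed form; a small, purely illustrative Python verification:

```python
from math import comb

def cover_count(n: int, d: int) -> int:
    """Closed form C(N, D) = 2 * sum_{k=0}^{D-1} binom(N-1, k)."""
    return 2 * sum(comb(n - 1, k) for k in range(d))

# Check the recursion C(N+1, D) = C(N, D) + C(N, D-1) on a grid of small values.
for n in range(1, 30):
    for d in range(1, 15):
        assert cover_count(n + 1, d) == cover_count(n, d) + cover_count(n, d - 1)
print("recursion consistent with the closed form")
```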

To show that, with fixed $N$, increasing $D$ may turn a set of points from non-separable to separable, a deterministic mapping may be used: suppose there are $N$ points. Lift them onto the vertices of a simplex in the $(N-1)$-dimensional real space. Since every partition of the samples into two sets is then separable by a linear separator, the property follows.
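
A minimal sketch of this lifting, assuming for convenience the standard-basis embedding (the $N$ standard basis vectors of $\mathbb{R}^N$ are the vertices of the standard $(N-1)$-simplex): the weight vector whose entries are the desired labels separates any dichotomy.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
labels = rng.choice([-1, 1], size=N)   # an arbitrary dichotomy of N points

# Lift: send the i-th point to the i-th standard basis vector e_i,
# i.e. onto the vertices of the standard (N-1)-simplex in R^N.
lifted = np.eye(N)

# The weight vector w = labels separates the dichotomy,
# because w . e_i = labels[i] for every point i.
w = labels.astype(float)
assert np.all(np.sign(lifted @ w) == labels)
print("every labelling of the simplex vertices is linearly separable")
```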

As an illustration, consider 100 points in the two-dimensional real plane, labelled according to whether they lie inside or outside a circular area. These labelled points are not linearly separable, but after lifting them to a three-dimensional space with the kernel trick the points become linearly separable. Note that in this case, and in many other cases, it is not necessary to lift the points all the way to the 99-dimensional space of the simplex argument above.
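
A sketch of that lift, assuming the common polynomial feature map $(x, y) \mapsto (x, y, x^2 + y^2)$ (one of several maps that work for circular labels): points inside the circle get a small third coordinate and points outside a large one, so a horizontal plane in the lifted space separates the two classes.

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(-1.0, 1.0, size=(100, 2))   # 100 points in the plane
r2 = 0.5                                      # squared radius of the circle
labels = np.where((pts ** 2).sum(axis=1) < r2, 1, -1)

# Lift to 3D: (x, y) -> (x, y, x^2 + y^2).
lifted = np.column_stack([pts, (pts ** 2).sum(axis=1)])

# In the lifted space the plane z = r2 separates the classes.
pred = np.where(lifted[:, 2] < r2, 1, -1)
assert np.all(pred == labels)
print("lifted points are linearly separable")
```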
