
Positive-definite kernel

Not to be confused with Integral kernel.

In operator theory, a branch of mathematics, a positive-definite kernel is a generalization of a positive-definite function or a positive-definite matrix. It was first introduced by James Mercer in the early 20th century, in the context of solving integral operator equations. Since then, positive-definite functions and their various analogues and generalizations have arisen in diverse parts of mathematics. They occur naturally in Fourier analysis, probability theory, operator theory, complex function theory, moment problems, integral equations, boundary-value problems for partial differential equations, machine learning, the embedding problem, information theory, and other areas.

Definition

Let X {\displaystyle {\mathcal {X}}} be a nonempty set, sometimes referred to as the index set. A symmetric function K : X × X R {\displaystyle K:{\mathcal {X}}\times {\mathcal {X}}\to \mathbb {R} } is called a positive-definite (p.d.) kernel on X {\displaystyle {\mathcal {X}}} if

i = 1 n j = 1 n c i c j K ( x i , x j ) 0 {\displaystyle \sum _{i=1}^{n}\sum _{j=1}^{n}c_{i}c_{j}K(x_{i},x_{j})\geq 0} (1.1)

holds for all x 1 , , x n X {\displaystyle x_{1},\dots ,x_{n}\in {\mathcal {X}}} , n N , c 1 , , c n R {\displaystyle n\in \mathbb {N} ,c_{1},\dots ,c_{n}\in \mathbb {R} } .

In probability theory, a distinction is sometimes made between positive-definite kernels, for which equality in (1.1) implies c i = 0 ( i ) {\displaystyle c_{i}=0\;(\forall i)} , and positive semi-definite (p.s.d.) kernels, which do not impose this condition. Note that this is equivalent to requiring that every finite matrix constructed by pairwise evaluation, K i j = K ( x i , x j ) {\displaystyle \mathbf {K} _{ij}=K(x_{i},x_{j})} , has either entirely positive (p.d.) or nonnegative (p.s.d.) eigenvalues.
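
The definition above can be illustrated numerically. The following sketch (a minimal check using NumPy; the Gaussian kernel and the sample points are arbitrary choices, not part of the definition) forms the matrix K_ij = K(x_i, x_j) on a finite sample and verifies that it is symmetric with nonnegative eigenvalues, i.e. positive semi-definite.

    import numpy as np

    def gaussian_kernel(x, y, sigma=1.0):
        # K(x, y) = exp(-||x - y||^2 / (2 sigma^2)), a standard p.d. kernel on R^d
        return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

    X = np.random.default_rng(0).normal(size=(5, 2))    # five arbitrary points in R^2

    # Gram matrix K_ij = K(x_i, x_j)
    K = np.array([[gaussian_kernel(xi, xj) for xj in X] for xi in X])

    assert np.allclose(K, K.T)                          # symmetry
    assert np.all(np.linalg.eigvalsh(K) >= -1e-12)      # eigenvalues nonnegative up to round-off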

In mathematical literature, kernels are usually complex-valued functions. That is, a complex-valued function K : X × X C {\displaystyle K:{\mathcal {X}}\times {\mathcal {X}}\to \mathbb {C} } is called a Hermitian kernel if K ( x , y ) = K ( y , x ) ¯ {\displaystyle K(x,y)={\overline {K(y,x)}}} and positive definite if for every finite set of points x 1 , , x n X {\displaystyle x_{1},\dots ,x_{n}\in {\mathcal {X}}} and any complex numbers ξ 1 , , ξ n C {\displaystyle \xi _{1},\dots ,\xi _{n}\in \mathbb {C} } ,

i = 1 n j = 1 n ξ i ξ ¯ j K ( x i , x j ) 0 {\displaystyle \sum _{i=1}^{n}\sum _{j=1}^{n}\xi _{i}{\overline {\xi }}_{j}K(x_{i},x_{j})\geq 0}

where ξ ¯ j {\displaystyle {\overline {\xi }}_{j}} denotes the complex conjugate. In the rest of this article we assume real-valued functions, which is the common practice in applications of p.d. kernels.

Some general properties

  • For a family of p.d. kernels ( K i ) i N ,     K i : X × X R {\displaystyle (K_{i})_{i\in \mathbb {N} },\ \ K_{i}:{\mathcal {X}}\times {\mathcal {X}}\to \mathbb {R} }
    • The conical sum i = 1 n λ i K i {\displaystyle \sum _{i=1}^{n}\lambda _{i}K_{i}} is p.d., given λ 1 , , λ n 0 {\displaystyle \lambda _{1},\dots ,\lambda _{n}\geq 0}
    • The product K 1 a 1 K n a n {\displaystyle K_{1}^{a_{1}}\dots K_{n}^{a_{n}}} is p.d., given a 1 , , a n N {\displaystyle a_{1},\dots ,a_{n}\in \mathbb {N} }
    • The pointwise limit K = lim n K n {\displaystyle K=\lim _{n\to \infty }K_{n}} is p.d. if the limit exists.
  • If ( X i ) i = 1 n {\displaystyle ({\mathcal {X}}_{i})_{i=1}^{n}} is a sequence of sets, and ( K i ) i = 1 n ,     K i : X i × X i R {\displaystyle (K_{i})_{i=1}^{n},\ \ K_{i}:{\mathcal {X}}_{i}\times {\mathcal {X}}_{i}\to \mathbb {R} } a sequence of p.d. kernels, then both K ( ( x 1 , , x n ) , ( y 1 , , y n ) ) = i = 1 n K i ( x i , y i ) {\displaystyle K((x_{1},\dots ,x_{n}),(y_{1},\dots ,y_{n}))=\prod _{i=1}^{n}K_{i}(x_{i},y_{i})} and K ( ( x 1 , , x n ) , ( y 1 , , y n ) ) = i = 1 n K i ( x i , y i ) {\displaystyle K((x_{1},\dots ,x_{n}),(y_{1},\dots ,y_{n}))=\sum _{i=1}^{n}K_{i}(x_{i},y_{i})} are p.d. kernels on X = X 1 × × X n {\displaystyle {\mathcal {X}}={\mathcal {X}}_{1}\times \dots \times {\mathcal {X}}_{n}} .
  • Let X 0 X {\displaystyle {\mathcal {X}}_{0}\subset {\mathcal {X}}} . Then the restriction K 0 {\displaystyle K_{0}} of K {\displaystyle K} to X 0 × X 0 {\displaystyle {\mathcal {X}}_{0}\times {\mathcal {X}}_{0}} is also a p.d. kernel.
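
The closure properties listed above are easy to check numerically. The sketch below (an illustration only; the two kernels, the weights, and the sample points are arbitrary choices) verifies that a conical combination and a pointwise product of two p.d. kernels again yield Gram matrices with nonnegative eigenvalues.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(6, 3))                                   # arbitrary points in R^3

    def gram(kernel):
        # Gram matrix of a kernel on the sample X
        return np.array([[kernel(x, y) for y in X] for x in X])

    K1 = gram(lambda x, y: x @ y)                                 # linear kernel
    K2 = gram(lambda x, y: np.exp(-np.sum((x - y) ** 2)))         # Gaussian kernel

    def is_psd(K, tol=1e-10):
        return np.all(np.linalg.eigvalsh(K) >= -tol)

    assert is_psd(2.0 * K1 + 0.5 * K2)    # conical sum with nonnegative weights
    assert is_psd(K1 * K2)                # entrywise (pointwise) product of the two kernels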

Examples of p.d. kernels

  • Common examples of p.d. kernels defined on Euclidean space R d {\displaystyle \mathbb {R} ^{d}} include:
    • Linear kernel: K ( x , y ) = x T y , x , y R d {\displaystyle K(\mathbf {x} ,\mathbf {y} )=\mathbf {x} ^{T}\mathbf {y} ,\quad \mathbf {x} ,\mathbf {y} \in \mathbb {R} ^{d}} .
    • Polynomial kernel: K ( x , y ) = ( x T y + r ) n , x , y R d , r 0 , n 1 {\displaystyle K(\mathbf {x} ,\mathbf {y} )=(\mathbf {x} ^{T}\mathbf {y} +r)^{n},\quad \mathbf {x} ,\mathbf {y} \in \mathbb {R} ^{d},r\geq 0,n\geq 1} .
    • Gaussian kernel (RBF kernel): K ( x , y ) = e x y 2 2 σ 2 , x , y R d , σ > 0 {\displaystyle K(\mathbf {x} ,\mathbf {y} )=e^{-{\frac {\|\mathbf {x} -\mathbf {y} \|^{2}}{2\sigma ^{2}}}},\quad \mathbf {x} ,\mathbf {y} \in \mathbb {R} ^{d},\sigma >0} .
    • Laplacian kernel: K ( x , y ) = e α x y , x , y R d , α > 0 {\displaystyle K(\mathbf {x} ,\mathbf {y} )=e^{-\alpha \|\mathbf {x} -\mathbf {y} \|},\quad \mathbf {x} ,\mathbf {y} \in \mathbb {R} ^{d},\alpha >0} .
    • Abel kernel: K ( x , y ) = e α | x y | , x , y R , α > 0 {\displaystyle K(x,y)=e^{-\alpha |x-y|},\quad x,y\in \mathbb {R} ,\alpha >0} .
    • Kernel generating Sobolev spaces W 2 k ( R d ) {\displaystyle W_{2}^{k}(\mathbb {R} ^{d})} : K ( x , y ) = x y 2 k d 2 B k d 2 ( x y 2 ) {\displaystyle K(x,y)=\|x-y\|_{2}^{k-{\frac {d}{2}}}B_{k-{\frac {d}{2}}}(\|x-y\|_{2})} , where B ν {\displaystyle B_{\nu }} is the modified Bessel function of the third kind.
    • Kernel generating Paley–Wiener space: K ( x , y ) = sinc ( α ( x y ) ) , x , y R , α > 0 {\displaystyle K(x,y)=\operatorname {sinc} (\alpha (x-y)),\quad x,y\in \mathbb {R} ,\alpha >0} .
  • If H {\displaystyle H} is a Hilbert space, then its corresponding inner product ( , ) H : H × H R {\displaystyle (\cdot ,\cdot )_{H}:H\times H\to \mathbb {R} } is a p.d. kernel. Indeed, we have i , j = 1 n c i c j ( x i , x j ) H = ( i = 1 n c i x i , j = 1 n c j x j ) H = i = 1 n c i x i H 2 0 {\displaystyle \sum _{i,j=1}^{n}c_{i}c_{j}(x_{i},x_{j})_{H}=\left(\sum _{i=1}^{n}c_{i}x_{i},\sum _{j=1}^{n}c_{j}x_{j}\right)_{H}=\left\|\sum _{i=1}^{n}c_{i}x_{i}\right\|_{H}^{2}\geq 0}
  • Kernels defined on R + d {\displaystyle \mathbb {R} _{+}^{d}} and histograms: Histograms are frequently encountered in real-life applications. Most observations are available as nonnegative vectors of counts which, once normalized, yield histograms of frequencies. It has been shown that the following family of squared metrics (respectively the Jensen divergence, the χ 2 {\displaystyle \chi ^{2}} distance, the total variation distance, and two variants of the Hellinger distance): ψ J D = H ( θ + θ 2 ) H ( θ ) + H ( θ ) 2 , {\displaystyle \psi _{JD}=H\left({\frac {\theta +\theta '}{2}}\right)-{\frac {H(\theta )+H(\theta ')}{2}},} ψ χ 2 = i ( θ i θ i ) 2 θ i + θ i , ψ T V = i | θ i θ i | , {\displaystyle \psi _{\chi ^{2}}=\sum _{i}{\frac {(\theta _{i}-\theta _{i}')^{2}}{\theta _{i}+\theta _{i}'}},\quad \psi _{TV}=\sum _{i}\left|\theta _{i}-\theta _{i}'\right|,} ψ H 1 = i | θ i θ i | , ψ H 2 = i | θ i θ i | 2 , {\displaystyle \psi _{H_{1}}=\sum _{i}\left|{\sqrt {\theta _{i}}}-{\sqrt {\theta _{i}'}}\right|,\psi _{H_{2}}=\sum _{i}\left|{\sqrt {\theta _{i}}}-{\sqrt {\theta _{i}'}}\right|^{2},} can be used to define p.d. kernels via the formula K ( θ , θ ) = e α ψ ( θ , θ ) , α > 0. {\displaystyle K(\theta ,\theta ')=e^{-\alpha \psi (\theta ,\theta ')},\alpha >0.}
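
The closed-form kernels above are straightforward to evaluate. The following sketch (parameter values and data are arbitrary assumptions made for illustration) computes Gram matrices for the Gaussian kernel on points in R^d and for the histogram kernel K(θ, θ') = exp(−α ψ_χ²(θ, θ')), and checks positive semi-definiteness numerically.

    import numpy as np

    def gaussian_kernel(x, y, sigma=1.0):
        return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

    def chi2_histogram_kernel(t, tp, alpha=1.0, eps=1e-12):
        # K(theta, theta') = exp(-alpha * sum_i (theta_i - theta'_i)^2 / (theta_i + theta'_i))
        psi = np.sum((t - tp) ** 2 / (t + tp + eps))
        return np.exp(-alpha * psi)

    rng = np.random.default_rng(2)
    X = rng.normal(size=(4, 5))                   # four points in R^5
    H = rng.dirichlet(np.ones(10), size=4)        # four normalized histograms with 10 bins

    K_gauss = np.array([[gaussian_kernel(a, b) for b in X] for a in X])
    K_hist = np.array([[chi2_histogram_kernel(a, b) for b in H] for a in H])

    for K in (K_gauss, K_hist):
        assert np.all(np.linalg.eigvalsh(K) >= -1e-10)   # both Gram matrices are p.s.d.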

History

See also: Mercer's theorem

Positive-definite kernels, as defined in (1.1), appeared first in 1909 in a paper on integral equations by James Mercer. Several other authors made use of this concept in the following two decades, but none of them explicitly used kernels K ( x , y ) = f ( x y ) {\displaystyle K(x,y)=f(x-y)} , i.e. p.d. functions (indeed M. Mathias and S. Bochner seem not to have been aware of the study of p.d. kernels). Mercer’s work arose from Hilbert’s paper of 1904 on Fredholm integral equations of the second kind:

f ( s ) = φ ( s ) λ a b K ( s , t ) φ ( t )   d t . {\displaystyle f(s)=\varphi (s)-\lambda \int _{a}^{b}K(s,t)\varphi (t)\ \mathrm {d} t.} (1.2)

In particular, Hilbert had shown that

a b a b K ( s , t ) x ( s ) x ( t )   d s d t = 1 λ n [ a b ψ n ( s ) x ( s ) d s ] 2 , {\displaystyle \int _{a}^{b}\int _{a}^{b}K(s,t)x(s)x(t)\ \mathrm {d} s\,\mathrm {d} t=\sum _{n}{\frac {1}{\lambda _{n}}}\left[\int _{a}^{b}\psi _{n}(s)x(s)\,\mathrm {d} s\right]^{2},} (1.3)

where K {\displaystyle K} is a continuous real symmetric kernel, x {\displaystyle x} is continuous, { ψ n } {\displaystyle \{\psi _{n}\}} is a complete system of orthonormal eigenfunctions, and λ n {\displaystyle \lambda _{n}} ’s are the corresponding eigenvalues of (1.2). Hilbert defined a “definite” kernel as one for which the double integral J ( x ) = a b a b K ( s , t ) x ( s ) x ( t )   d s d t {\displaystyle J(x)=\int _{a}^{b}\int _{a}^{b}K(s,t)x(s)x(t)\ \mathrm {d} s\;\mathrm {d} t} satisfies J ( x ) > 0 {\displaystyle J(x)>0} except for x ( t ) = 0 {\displaystyle x(t)=0} . The original object of Mercer’s paper was to characterize the kernels which are definite in the sense of Hilbert, but Mercer soon found that the class of such functions was too restrictive to characterize in terms of determinants. He therefore defined a continuous real symmetric kernel K ( s , t ) {\displaystyle K(s,t)} to be of positive type (i.e. positive-definite) if J ( x ) 0 {\displaystyle J(x)\geq 0} for all real continuous functions x {\displaystyle x} on [ a , b ] {\displaystyle [a,b]} , and he proved that (1.1) is a necessary and sufficient condition for a kernel to be of positive type. Mercer then proved that for any continuous p.d. kernel the expansion K ( s , t ) = n ψ n ( s ) ψ n ( t ) λ n {\displaystyle K(s,t)=\sum _{n}{\frac {\psi _{n}(s)\psi _{n}(t)}{\lambda _{n}}}} holds absolutely and uniformly.

At about the same time W. H. Young, motivated by a different question in the theory of integral equations, showed that for continuous kernels condition (1.1) is equivalent to J ( x ) 0 {\displaystyle J(x)\geq 0} for all x L 1 [ a , b ] {\displaystyle x\in L^{1}[a,b]} .

E.H. Moore initiated the study of a very general kind of p.d. kernel. If E {\displaystyle E} is an abstract set, he called functions K ( x , y ) {\displaystyle K(x,y)} defined on E × E {\displaystyle E\times E} “positive Hermitian matrices” if they satisfy (1.1) for all x i E {\displaystyle x_{i}\in E} . Moore was interested in a generalization of integral equations and showed that to each such K {\displaystyle K} there is a Hilbert space H {\displaystyle H} of functions such that, for each f H , f ( y ) = ( f , K ( , y ) ) H {\displaystyle f\in H,f(y)=(f,K(\cdot ,y))_{H}} . This property is called the reproducing property of the kernel, and it turns out to be important in the solution of boundary-value problems for elliptic partial differential equations.

Another line of development in which p.d. kernels played a large role was the theory of harmonics on homogeneous spaces as begun by E. Cartan in 1929, and continued by H. Weyl and S. Ito. The most comprehensive theory of p.d. kernels in homogeneous spaces is that of M. Krein which includes as special cases the work on p.d. functions and irreducible unitary representations of locally compact groups.

In probability theory, p.d. kernels arise as covariance kernels of stochastic processes.

Connection with reproducing kernel Hilbert spaces and feature maps

Further information: Reproducing kernel Hilbert space

Positive-definite kernels provide a framework that encompasses some basic Hilbert space constructions. In the following we present the close relationship between positive-definite kernels and two mathematical objects, namely reproducing kernel Hilbert spaces and feature maps.

Let X {\displaystyle X} be a set, H {\displaystyle H} a Hilbert space of functions f : X R {\displaystyle f:X\to \mathbb {R} } , and ( , ) H : H × H R {\displaystyle (\cdot ,\cdot )_{H}:H\times H\to \mathbb {R} } the corresponding inner product on H {\displaystyle H} . For any x X {\displaystyle x\in X} the evaluation functional e x : H R {\displaystyle e_{x}:H\to \mathbb {R} } is defined by f e x ( f ) = f ( x ) {\displaystyle f\mapsto e_{x}(f)=f(x)} . We first define a reproducing kernel Hilbert space (RKHS):

Definition: The space H {\displaystyle H} is called a reproducing kernel Hilbert space if all the evaluation functionals e x {\displaystyle e_{x}} , x X {\displaystyle x\in X} , are continuous.

Every RKHS has a special function associated to it, namely the reproducing kernel:

Definition: A reproducing kernel of H {\displaystyle H} is a function K : X × X R {\displaystyle K:X\times X\to \mathbb {R} } such that

  1. K x ( ) := K ( x , ) H , x X {\displaystyle K_{x}(\cdot ):=K(x,\cdot )\in H,\;\forall x\in X} , and
  2. ( f , K x ) = f ( x ) {\displaystyle (f,K_{x})=f(x)} , for all f H {\displaystyle f\in H} and x X {\displaystyle x\in X} .

The latter property is called the reproducing property.

The following result shows equivalence between RKHS and reproducing kernels:

Theorem —  Every reproducing kernel K {\displaystyle K} induces a unique RKHS, and every RKHS has a unique reproducing kernel.

Now the connection between positive-definite kernels and RKHSs is given by the following theorem:

Theorem —  Every reproducing kernel is positive-definite, and every positive definite kernel defines a unique RKHS, of which it is the unique reproducing kernel.

Thus, given a positive-definite kernel K {\displaystyle K} , it is possible to build an associated RKHS with K {\displaystyle K} as a reproducing kernel.

As stated earlier, positive-definite kernels can be constructed from inner products. This fact can be used to connect p.d. kernels with another object that arises in machine learning applications, namely the feature map. Let F {\displaystyle F} be a Hilbert space, and ( , ) F {\displaystyle (\cdot ,\cdot )_{F}} the corresponding inner product. Any map Φ : X F {\displaystyle \Phi :X\to F} is called a feature map. In this case we call F {\displaystyle F} the feature space. It is easy to see that every feature map defines a unique p.d. kernel by K ( x , y ) = ( Φ ( x ) , Φ ( y ) ) F . {\displaystyle K(x,y)=(\Phi (x),\Phi (y))_{F}.} Indeed, positive definiteness of K {\displaystyle K} follows from the p.d. property of the inner product.

On the other hand, every p.d. kernel, and its corresponding RKHS, have many associated feature maps. For example: Let F = H {\displaystyle F=H} , and Φ ( x ) = K x {\displaystyle \Phi (x)=K_{x}} for all x X {\displaystyle x\in X} . Then ( Φ ( x ) , Φ ( y ) ) F = ( K x , K y ) H = K ( x , y ) {\displaystyle (\Phi (x),\Phi (y))_{F}=(K_{x},K_{y})_{H}=K(x,y)} , by the reproducing property.

This suggests a new look at p.d. kernels as inner products in appropriate Hilbert spaces; in other words, p.d. kernels can be viewed as similarity maps which effectively quantify how similar two points x {\displaystyle x} and y {\displaystyle y} are through the value K ( x , y ) {\displaystyle K(x,y)} . Moreover, through the equivalence of p.d. kernels and their corresponding RKHSs, every feature map can be used to construct an RKHS.
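
As a concrete instance of the kernel–feature map correspondence (a sketch; the explicit map below is a standard choice for the degree-2 homogeneous polynomial kernel on R^2 and is an assumption of the example), the map Φ(x) = (x_1², x_2², √2 x_1 x_2) into F = R^3 reproduces K(x, y) = (xᵀy)² as an ordinary inner product:

    import numpy as np

    def phi(x):
        # explicit feature map for K(x, y) = (x^T y)^2 on R^2, with feature space F = R^3
        return np.array([x[0] ** 2, x[1] ** 2, np.sqrt(2) * x[0] * x[1]])

    rng = np.random.default_rng(3)
    x, y = rng.normal(size=2), rng.normal(size=2)

    lhs = phi(x) @ phi(y)     # inner product in the feature space
    rhs = (x @ y) ** 2        # kernel evaluated directly
    assert np.isclose(lhs, rhs)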

Kernels and distances

Kernel methods are often compared to distance-based methods such as nearest neighbors. In this section we discuss parallels between their respective ingredients, namely kernels K {\displaystyle K} and distances d {\displaystyle d} .

Here, by a distance function on a set X {\displaystyle {\mathcal {X}}} we mean a metric defined on that set, i.e. any nonnegative-valued function d {\displaystyle d} on X × X {\displaystyle {\mathcal {X}}\times {\mathcal {X}}} which satisfies

  • d ( x , y ) 0 {\displaystyle d(x,y)\geq 0} , and d ( x , y ) = 0 {\displaystyle d(x,y)=0} if and only if x = y {\displaystyle x=y} ,
  • d ( x , y ) = d ( y , x ) , {\displaystyle d(x,y)=d(y,x),}
  • d ( x , z ) d ( x , y ) + d ( y , z ) . {\displaystyle d(x,z)\leq d(x,y)+d(y,z).}

One link between distances and p.d. kernels is given by a particular kind of kernel, called a negative definite kernel, defined as follows:

Definition: A symmetric function ψ : X × X R {\displaystyle \psi :{\mathcal {X}}\times {\mathcal {X}}\to \mathbb {R} } is called a negative definite (n.d.) kernel on X {\displaystyle {\mathcal {X}}} if

i , j = 1 n c i c j ψ ( x i , x j ) 0 {\displaystyle \sum _{i,j=1}^{n}c_{i}c_{j}\psi (x_{i},x_{j})\leq 0} (1.4)

holds for any n N , x 1 , , x n X , {\displaystyle n\in \mathbb {N} ,x_{1},\dots ,x_{n}\in {\mathcal {X}},} and c 1 , , c n R {\displaystyle c_{1},\dots ,c_{n}\in \mathbb {R} } such that i = 1 n c i = 0 {\textstyle \sum _{i=1}^{n}c_{i}=0} .
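
A standard example of an n.d. kernel is the squared Euclidean distance ψ(x, y) = ‖x − y‖². The sketch below (sample points and coefficients are arbitrary choices) draws coefficients that sum to zero and verifies that the quadratic form in (1.4) is nonpositive.

    import numpy as np

    rng = np.random.default_rng(4)
    X = rng.normal(size=(7, 3))         # arbitrary points in R^3
    c = rng.normal(size=7)
    c -= c.mean()                       # enforce the constraint sum_i c_i = 0

    # psi(x, y) = ||x - y||^2 evaluated pairwise
    Psi = np.array([[np.sum((a - b) ** 2) for b in X] for a in X])

    assert c @ Psi @ c <= 1e-10         # quadratic form is nonpositive up to round-off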

The parallel between n.d. kernels and distances is the following: whenever an n.d. kernel vanishes on the set { ( x , x ) : x X } {\displaystyle \{(x,x):x\in {\mathcal {X}}\}} , and is zero only on this set, then its square root is a distance on X {\displaystyle {\mathcal {X}}} . At the same time, not every distance corresponds to an n.d. kernel; this holds only for Hilbertian distances, where a distance d {\displaystyle d} is called Hilbertian if one can embed the metric space ( X , d ) {\displaystyle ({\mathcal {X}},d)} isometrically into some Hilbert space.

On the other hand, n.d. kernels can be identified with a subfamily of p.d. kernels known as infinitely divisible kernels. A nonnegative-valued kernel K {\displaystyle K} is said to be infinitely divisible if for every n N {\displaystyle n\in \mathbb {N} } there exists a positive-definite kernel K n {\displaystyle K_{n}} such that K = ( K n ) n {\displaystyle K=(K_{n})^{n}} .

Another link is that a p.d. kernel induces a pseudometric, where the first constraint on the distance function is loosened to allow d ( x , y ) = 0 {\displaystyle d(x,y)=0} for x y {\displaystyle x\neq y} . Given a positive-definite kernel K {\displaystyle K} , we can define a distance function as: d ( x , y ) = K ( x , x ) 2 K ( x , y ) + K ( y , y ) {\displaystyle d(x,y)={\sqrt {K(x,x)-2K(x,y)+K(y,y)}}}
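
A minimal sketch of this induced (pseudo)metric, using the Gaussian kernel as an arbitrary choice, with a numerical check of symmetry and the triangle inequality:

    import numpy as np

    def K(x, y, sigma=1.0):
        return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

    def kernel_distance(x, y):
        # d(x, y) = sqrt(K(x, x) - 2 K(x, y) + K(y, y))
        return np.sqrt(max(K(x, x) - 2 * K(x, y) + K(y, y), 0.0))

    rng = np.random.default_rng(5)
    x, y, z = rng.normal(size=(3, 2))

    assert np.isclose(kernel_distance(x, y), kernel_distance(y, x))                        # symmetry
    assert kernel_distance(x, z) <= kernel_distance(x, y) + kernel_distance(y, z) + 1e-12  # triangle inequality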

Some applications

Kernels in machine learning

Further information: Kernel method

Positive-definite kernels, through their equivalence with reproducing kernel Hilbert spaces (RKHS), are particularly important in the field of statistical learning theory because of the celebrated representer theorem, which states that every minimizer of a regularized empirical risk functional over an RKHS can be written as a linear combination of the kernel function evaluated at the training points. This is a practically useful result, as it effectively reduces the empirical risk minimization problem from an infinite-dimensional to a finite-dimensional optimization problem.
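
A minimal sketch of how the representer theorem is used in practice is kernel ridge regression (the Gaussian kernel, the toy data, the bandwidth, and the regularization strength below are all arbitrary assumptions): the minimizer has the form f(x) = Σ_i α_i K(x, x_i), and the coefficients α solve a finite linear system.

    import numpy as np

    def gaussian_kernel_matrix(A, B, sigma=0.5):
        # pairwise Gaussian kernel between the rows of A and the rows of B
        d2 = np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (2 * sigma ** 2))

    rng = np.random.default_rng(6)
    X_train = rng.uniform(-3, 3, size=(40, 1))
    y_train = np.sin(X_train[:, 0]) + 0.1 * rng.normal(size=40)        # noisy samples of sin(x)

    lam = 1e-2                                                         # regularization strength
    K = gaussian_kernel_matrix(X_train, X_train)
    alpha = np.linalg.solve(K + lam * np.eye(len(X_train)), y_train)   # representer coefficients

    X_test = np.linspace(-3, 3, 100)[:, None]
    y_pred = gaussian_kernel_matrix(X_test, X_train) @ alpha           # f(x) = sum_i alpha_i K(x, x_i)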

Kernels in probabilistic models

There are several different ways in which kernels arise in probability theory.

  • Nondeterministic recovery problems: Assume that we want to find the response f ( x ) {\displaystyle f(x)} of an unknown model function f {\displaystyle f} at a new point x {\displaystyle x} of a set X {\displaystyle {\mathcal {X}}} , provided that we have a sample of input-response pairs ( x i , f i ) = ( x i , f ( x i ) ) {\displaystyle (x_{i},f_{i})=(x_{i},f(x_{i}))} given by observation or experiment. The response f i {\displaystyle f_{i}} at x i {\displaystyle x_{i}} is not a fixed function of x i {\displaystyle x_{i}} but rather a realization of a real-valued random variable Z ( x i ) {\displaystyle Z(x_{i})} . The goal is to get information about the function E [ Z ( x ) ] {\displaystyle E[Z(x)]} which replaces f {\displaystyle f} in the deterministic setting. For two elements x , y X {\displaystyle x,y\in {\mathcal {X}}} the random variables Z ( x ) {\displaystyle Z(x)} and Z ( y ) {\displaystyle Z(y)} will not be uncorrelated, because if x {\displaystyle x} is close to y {\displaystyle y} the random experiments described by Z ( x ) {\displaystyle Z(x)} and Z ( y ) {\displaystyle Z(y)} will often show similar behaviour. This is described by a covariance kernel K ( x , y ) = E [ Z ( x ) Z ( y ) ] {\displaystyle K(x,y)=E[Z(x)Z(y)]} . Such a kernel exists and is positive-definite under weak additional assumptions. Now a good estimate for Z ( x ) {\displaystyle Z(x)} can be obtained by using kernel interpolation with the covariance kernel, ignoring the probabilistic background completely.

Assume now that a noise variable ϵ ( x ) {\displaystyle \epsilon (x)} , with zero mean and variance σ 2 {\displaystyle \sigma ^{2}} , is added to the observed response at x {\displaystyle x} , such that the noise is independent for different x {\displaystyle x} and independent of Z {\displaystyle Z} ; then the problem of finding a good estimate for f {\displaystyle f} is identical to the one above, but with a modified kernel given by K ( x , y ) = E [ Z ( x ) Z ( y ) ] + σ 2 δ x y {\displaystyle K(x,y)=E[Z(x)Z(y)]+\sigma ^{2}\delta _{xy}} .

  • Density estimation by kernels: The problem is to recover the density f {\displaystyle f} of a multivariate distribution over a domain X {\displaystyle {\mathcal {X}}} , from a large sample x 1 , , x n X {\displaystyle x_{1},\dots ,x_{n}\in {\mathcal {X}}} including repetitions. Where sampling points lie densely, the true density function must take large values. A simple density estimate is possible by counting the number of samples in each cell of a grid and plotting the resulting histogram, which yields a piecewise constant density estimate. A better estimate can be obtained by using a nonnegative translation-invariant kernel K {\displaystyle K} with total integral equal to one, and defining f ( x ) = 1 n h d i = 1 n K ( x x i h ) {\displaystyle f(x)={\frac {1}{nh^{d}}}\sum _{i=1}^{n}K\left({\frac {x-x_{i}}{h}}\right)} as a smooth estimate, where h > 0 {\displaystyle h>0} is a bandwidth (smoothing) parameter and d {\displaystyle d} the dimension of the data.
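
A minimal sketch of such a kernel density estimate in one dimension (the Gaussian bump used as the translation-invariant kernel and the bandwidth value are arbitrary assumptions):

    import numpy as np

    def gaussian_bump(u):
        # nonnegative, translation-invariant kernel with total integral one (standard normal density)
        return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

    rng = np.random.default_rng(7)
    samples = rng.normal(loc=1.0, scale=0.7, size=500)    # sample from an unknown density
    h = 0.2                                               # bandwidth (smoothing parameter)

    def density_estimate(x):
        # f_hat(x) = 1/(n h) * sum_i K((x - x_i) / h)  -- the one-dimensional case, d = 1
        return np.mean(gaussian_bump((x - samples) / h)) / h

    grid = np.linspace(-2.0, 4.0, 9)
    f_hat = np.array([density_estimate(x) for x in grid])  # smooth estimate on a coarse grid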

Numerical solution of partial differential equations

Further information: Meshfree methods

One of the largest application areas of so-called meshfree methods is the numerical solution of PDEs. Some popular meshfree methods are closely related to positive-definite kernels, such as the meshless local Petrov–Galerkin (MLPG) method, the reproducing kernel particle method (RKPM), and smoothed-particle hydrodynamics (SPH). These methods use radial basis kernels for collocation.

Stinespring dilation theorem

Further information: Stinespring dilation theorem

Other applications

In the literature on computer experiments and other engineering experiments, one increasingly encounters models based on p.d. kernels, RBFs or kriging. One such topic is response surface methodology. Other types of applications that boil down to data fitting are rapid prototyping and computer graphics. Here one often uses implicit surface models to approximate or interpolate point cloud data.

Applications of p.d. kernels in various other branches of mathematics are in multivariate integration, multivariate optimization, and in numerical analysis and scientific computing, where one studies fast, accurate and adaptive algorithms ideally implemented in high-performance computing environments.

See also

References

  1. Berezanskij, Jurij Makarovič (1968). Expansions in eigenfunctions of selfadjoint operators. Providence, RI: American Mathematical Soc. pp. 45–47. ISBN 978-0-8218-1567-0.
  2. Hein, M. and Bousquet, O. (2005). "Hilbertian metrics and positive definite kernels on probability measures". In Ghahramani, Z. and Cowell, R., editors, Proceedings of AISTATS 2005.
  3. Mercer, J. (1909). "Functions of positive and negative type and their connection with the theory of integral equations", Philosophical Transactions of the Royal Society of London, Series A 209, pp. 415–446.
  4. Hilbert, D. (1904). "Grundzüge einer allgemeinen Theorie der linearen Integralgleichungen I", Gött. Nachrichten, math.-phys. Kl. (1904), pp. 49–91.
  5. Young, W. H. (1909). "A note on a class of symmetric functions and on a theorem required in the theory of integral equations", Philos. Trans. Roy. Soc. London, Ser. A, 209, pp. 415–446.
  6. Moore, E.H. (1916). "On properly positive Hermitian matrices", Bull. Amer. Math. Soc. 23, 59, pp. 66–67.
  7. Moore, E.H. (1935). "General Analysis, Part I", Memoirs Amer. Philos. Soc. 1, Philadelphia.
  8. Krein, M. (1949/1950). "Hermitian-positive kernels on homogeneous spaces I and II" (in Russian), Ukrain. Mat. Z. 1 (1949), pp. 64–98, and 2 (1950), pp. 10–59. English translation: Amer. Math. Soc. Translations Ser. 2, 34 (1963), pp. 69–164.
  9. Loève, M. (1960). "Probability theory", 2nd ed., Van Nostrand, Princeton, N.J.
  10. Rosasco, L. and Poggio, T. (2015). "A Regularization Tour of Machine Learning – MIT 9.520 Lecture Notes" Manuscript.
  11. Berg, C., Christensen, J. P. R., and Ressel, P. (1984). "Harmonic Analysis on Semigroups". Number 100 in Graduate Texts in Mathematics, Springer Verlag.
  12. Schaback, R. and Wendland, H. (2006). "Kernel Techniques: From Machine Learning to Meshless Methods", Cambridge University Press, Acta Numerica (2006), pp. 1–97.
  13. Haaland, B. and Qian, P. Z. G. (2010). "Accurate emulators for large-scale computer experiments", Ann. Stat.
  14. Gumerov, N. A. and Duraiswami, R. (2007). "Fast radial basis function interpolation via preconditioned Krylov iteration". SIAM J. Scient. Computing 29/5, pp. 1876–1899.