
Calculus on finite weighted graphs


In mathematics, calculus on finite weighted graphs is a discrete calculus for functions whose domain is the vertex set of a graph with a finite number of vertices and weights associated to the edges. This involves formulating discrete operators on graphs which are analogous to differential operators in calculus, such as graph Laplacians (or discrete Laplace operators) as discrete versions of the Laplacian, and using these operators to formulate differential equations, difference equations, or variational models on graphs which can be interpreted as discrete versions of partial differential equations or continuum variational models. Such equations and models are important tools to mathematically model, analyze, and process discrete information in many different research fields, e.g., image processing, machine learning, and network analysis.

In applications, finite weighted graphs represent a finite number of entities by the graph's vertices, any pairwise relationships between these entities by graph edges, and the significance of a relationship by an edge weight function. Differential equations or difference equations on such graphs can be employed to leverage the graph's structure for tasks such as image segmentation (where the vertices represent pixels and the weighted edges encode pixel similarity based on comparisons of Moore neighborhoods or larger windows), data clustering, data classification, or community detection in a social network (where the vertices represent users of the network, the edges represent links between users, and the weight function indicates the strength of interactions between users).

The main advantage of finite weighted graphs is that by not being restricted to highly regular structures such as discrete regular grids, lattice graphs, or meshes, they can be applied to represent abstract data with irregular interrelationships.

If a finite weighted graph is geometrically embedded in a Euclidean space, i.e., the graph vertices represent points of this space, then it can be interpreted as a discrete approximation of a related nonlocal operator in the continuum setting.

Basic definitions

A finite weighted graph $G$ is defined as a triple $G = (V, E, w)$ for which

  • $V = \{x_1, \dots, x_n\}$, $n \in \mathbb{N}$, is a finite set of indices denoted as graph vertices or nodes,
  • $E \subset V \times V$ is a finite set of (directed) graph edges connecting a subset of vertices,
  • $w \colon E \rightarrow \mathbb{R}$ is an edge weight function defined on the edges of the graph.

In a directed graph, each edge $(x_i, x_j) \in E$ has a start node $x_i \in V$ and an end node $x_j \in V$. In an undirected graph, for every edge $(x_i, x_j)$ there exists an edge $(x_j, x_i)$, and the weight function is required to be symmetric, i.e., $w(x_i, x_j) = w(x_j, x_i)$. On the remainder of this page, the graphs will be assumed to be undirected, unless specifically stated otherwise. Many of the ideas presented on this page can be generalized to directed graphs.

The edge weight function $w$ associates to every edge $(x_i, x_j) \in E$ a real value $w(x_i, x_j) > 0$. For both mathematical and application-specific reasons, the weight function on the edges is often required to be strictly positive, and on this page it will be assumed to be so unless specifically stated otherwise. Generalizations of many of the ideas presented on this page to include negatively weighted edges are possible. Sometimes an extension of the domain of the edge weight function to $V \times V$ is considered (with the resulting function still being called the edge weight function) by setting $w(x_i, x_j) = 0$ whenever $(x_i, x_j) \notin E$.

In applications, each graph vertex $x \in V$ usually represents a single entity in the given data, e.g., elements of a finite data set, pixels in an image, or users in a social network. A graph edge represents a relationship between two entities, e.g., pairwise interactions or similarity based on comparisons of geometric neighborhoods (for example of pixels in images) or of another feature, with the edge weight encoding the strength of this relationship. The most commonly used weight functions are normalized to map to values between 0 and 1, i.e., $w \colon E \rightarrow (0, 1]$.

In the following it is assumed that the considered graphs are connected, without self-loops or multiple edges between vertices. These assumptions are mostly harmless: in many applications each connected component of a disconnected graph can be treated as a graph in its own right; each appearance of $w(x_i, x_i)$ (which would be nonzero in the presence of self-loops) occurs together with another factor that vanishes when $i = j$ (see the section on differential graph operators below); and edge weights can encode similar information as multiple edges could.

Neighborhood

A node $x_j \in V$ is a neighbor of the node $x_i \in V$ if there exists an edge $(x_i, x_j) \in E$. In terms of notation this relationship can be abbreviated by $x_j \sim x_i$, which should be read as "$x_j$ is a neighbor of $x_i$". Otherwise, if $x_j$ is not a neighbor of $x_i$ one writes $x_j \not\sim x_i$. The neighborhood $\mathcal{N}(x_i)$ of a vertex $x_i \in V$ is simply the set of its neighbors, $\mathcal{N}(x_i) := \{x_j \in V \colon x_j \sim x_i\}$. The degree of a vertex $x_i \in V$ is the weighted size of its neighborhood:

$$\deg(x_i) := \sum_{j \,:\, x_j \sim x_i} w(x_i, x_j).$$

Note that in the special case where $w \equiv 1$ on $E$ (i.e., the graph is unweighted) we have $\deg(x_i) = |\mathcal{N}(x_i)|$.
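As a minimal sketch, the weighted degree can be computed directly from an adjacency structure. The three-vertex graph and its weights below are illustrative values chosen for the example, not data from the article:

```python
# Illustrative undirected weighted graph: w[i][j] is the positive,
# symmetric weight of edge (i, j). Vertices are indexed 0, 1, 2.
w = {
    0: {1: 0.5, 2: 1.0},
    1: {0: 0.5, 2: 0.25},
    2: {0: 1.0, 1: 0.25},
}

def degree(w, i):
    """Weighted degree: the sum of edge weights over the neighborhood of i."""
    return sum(w[i].values())

print(degree(w, 0))  # 0.5 + 1.0 = 1.5
```

For an unweighted graph (all weights 1) this reduces to counting neighbors, matching the special case noted above.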

Space of real vertex functions

Let $\mathcal{H}(V) := \{f \colon V \rightarrow \mathbb{R}\}$ be the space of (real) vertex functions. Since $V$ is a finite set, any vertex function $f \in \mathcal{H}(V)$ can be represented as an $n$-dimensional vector $f \in \mathbb{R}^n$ (where $n := |V|$), and hence the space of vertex functions $\mathcal{H}(V)$ can be identified with an $n$-dimensional Hilbert space: $\mathcal{H}(V) \cong \mathbb{R}^n$. The inner product of $\mathcal{H}(V)$ is defined as:

$$\langle f, g \rangle_{\mathcal{H}(V)} := \sum_{x_i \in V} f(x_i)\, g(x_i), \quad \forall f, g \in \mathcal{H}(V).$$

Furthermore, for any vertex function $f \in \mathcal{H}(V)$ the $\ell_p$-norm and $\ell_\infty$-norm of $f$ are defined as:

$$\|f\|_p = \begin{cases} \left( \sum_{x_i \in V} |f(x_i)|^p \right)^{\frac{1}{p}}, & \text{for } 1 \leq p < \infty, \\ \max_{x_i \in V} |f(x_i)|, & \text{for } p = \infty. \end{cases}$$

The $\ell_2$-norm is induced by the inner product.
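Since vertex functions are just vectors, the inner product and norms are one-liners. The vectors below are arbitrary example data; the last line checks that the squared 2-norm coincides with the inner product of a function with itself:

```python
# Vertex functions on an n-vertex graph are vectors in R^n (here n = 3).
f = [1.0, -2.0, 3.0]
g = [0.5, 1.0, 2.0]

def inner(f, g):
    """Inner product on H(V): sum over all vertices."""
    return sum(fi * gi for fi, gi in zip(f, g))

def p_norm(f, p):
    """l_p norm for 1 <= p < infinity, and the max norm for p = infinity."""
    if p == float("inf"):
        return max(abs(fi) for fi in f)
    return sum(abs(fi) ** p for fi in f) ** (1.0 / p)

print(inner(f, g))              # 0.5 - 2.0 + 6.0 = 4.5
print(p_norm(f, 1))             # 1 + 2 + 3 = 6.0
print(p_norm(f, float("inf")))  # 3.0
# The l_2 norm is induced by the inner product:
assert abs(p_norm(f, 2) ** 2 - inner(f, f)) < 1e-12
```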

In applications, vertex functions are useful for labeling the nodes of a graph. For example, in graph-based data clustering, each node represents a data point and a vertex function is used to identify cluster membership of the nodes.

Space of real edge functions

Analogously to real vertex functions, one can introduce the space of real edge functions $\mathcal{H}(E) := \{F \colon E \rightarrow \mathbb{R}\}$. As any edge function $F$ is defined on a finite set of edges $E$, it can be represented as an $m$-dimensional vector $F \in \mathbb{R}^m$, where $m := |E|$. Hence, the space of edge functions $\mathcal{H}(E)$ can be identified with an $m$-dimensional Hilbert space, i.e., $\mathcal{H}(E) \cong \mathbb{R}^m$.

One special case of an edge function is the normalized edge weight function $w \colon E \rightarrow (0, 1]$ introduced above in the section on basic definitions. Similar to that function, any edge function $F$ can be trivially extended to $V \times V$ by setting $F(x_i, x_j) := 0$ if $(x_i, x_j) \notin E$. The space of those extended edge functions is still denoted by $\mathcal{H}(E)$ and can be identified with $\mathbb{R}^m$, where now $m := |V|^2$.

The inner product of $\mathcal{H}(E)$ is defined as:

$$\langle F, G \rangle_{\mathcal{H}(E)} := \sum_{(x_i, x_j) \in E} F(x_i, x_j)\, G(x_i, x_j), \quad \forall F, G \in \mathcal{H}(E).$$

Additionally, for any edge function $F \in \mathcal{H}(E)$ the $\ell_p$-norm and $\ell_\infty$-norm of $F$ are defined as:

$$\|F\|_p = \begin{cases} \left( \sum_{(x_i, x_j) \in E} |F(x_i, x_j)|^p \right)^{\frac{1}{p}}, & \text{for } 1 \leq p < \infty, \\ \max_{(x_i, x_j) \in E} |F(x_i, x_j)|, & \text{for } p = \infty. \end{cases}$$

The $\ell_2$-norm is induced by the inner product.

If one extends the edge set $E$ such that $E = V \times V$, then it becomes clear that $\mathcal{H}(E) \cong \mathbb{R}^{n \times n}$ because $\mathcal{H}(V) \cong \mathbb{R}^n$. This means that each edge function can be identified with a linear matrix operator.
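To illustrate this identification, the sketch below (edges and values are made up for the example) stores an edge function extended by zero to all ordered vertex pairs as an n-by-n matrix:

```python
# Illustrative edge function F on a graph with n = 3 vertices; only the
# listed ordered pairs are edges, and F is extended by zero elsewhere.
n = 3
F = {(0, 1): 2.0, (1, 0): -1.0, (0, 2): 0.5, (2, 0): 0.5}

# Matrix representation: F_mat[i][j] = F(x_i, x_j).
F_mat = [[F.get((i, j), 0.0) for j in range(n)] for i in range(n)]
for row in F_mat:
    print(row)
```

Note that an edge function need not be symmetric: here F(0, 1) and F(1, 0) differ, which is exactly why ordered pairs are used for undirected edges (see the note at the end of the article).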

Differential graph operators

An important ingredient in the calculus on finite weighted graphs is the mimicking of standard differential operators from the continuum setting in the discrete setting of finite weighted graphs. This allows one to translate well-studied tools from mathematics, such as partial differential equations and variational methods, and make them usable in applications which can best be modeled by a graph. The fundamental concept which makes this translation possible is the graph gradient, a first-order difference operator on graphs. Based on this one can derive higher-order difference operators, e.g., the graph Laplacian.

First-order differential operators

Weighted differences

Let $G = (V, E, w)$ be a finite weighted graph and let $f \in \mathcal{H}(V)$ be a vertex function. Then the weighted difference (or weighted graph derivative) of $f$ along a directed edge $(x_i, x_j) \in E$ is

$$\partial_{x_j} f(x_i) := \sqrt{w(x_i, x_j)} \left( f(x_j) - f(x_i) \right).$$

For any weighted difference the following properties hold:

  • $\partial_{x_i} f(x_j) = -\partial_{x_j} f(x_i),$
  • $\partial_{x_i} f(x_i) = 0,$
  • $f(x_i) = f(x_j) \Rightarrow \partial_{x_j} f(x_i) = 0.$
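A minimal sketch of the weighted difference, using an arbitrary two-vertex example, which also exhibits the antisymmetry property from the list above:

```python
import math

# Two vertices joined by an edge of weight 0.25 (illustrative values).
w = {(0, 1): 0.25, (1, 0): 0.25}
f = [2.0, 6.0]

def diff(w, f, i, j):
    """Weighted difference of f along the directed edge (i, j)."""
    return math.sqrt(w[(i, j)]) * (f[j] - f[i])

print(diff(w, f, 0, 1))  # sqrt(0.25) * (6 - 2) = 2.0
print(diff(w, f, 1, 0))  # antisymmetry: -2.0
```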

Weighted gradient

Based on the notion of weighted differences, one defines the weighted gradient operator on graphs $\nabla_w \colon \mathcal{H}(V) \rightarrow \mathcal{H}(E)$ as

$$(\nabla_w f)(x_i, x_j) = \partial_{x_j} f(x_i).$$

This is a linear operator.

To measure the local variation of a vertex function $f$ at a vertex $x_i \in V$, one can restrict the gradient $\nabla_w f$ of $f$ to all directed edges starting in $x_i$ and use the $\ell_p$-norm of this edge function, i.e.,

$$\|(\nabla_w f)(x_i, \cdot)\|_{\ell_p} = \begin{cases} \left( \sum_{x_j \sim x_i} w(x_i, x_j)^{\frac{p}{2}}\, |f(x_j) - f(x_i)|^p \right)^{\frac{1}{p}}, & \text{for } 1 \leq p < \infty, \\ \max_{x_j \sim x_i} \sqrt{w(x_i, x_j)}\, |f(x_j) - f(x_i)|, & \text{for } p = \infty. \end{cases}$$
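A sketch of the gradient and the local variation on an illustrative three-vertex graph (all values are made up for the example):

```python
import math

# Illustrative graph: vertex 0 has neighbors 1 and 2 with weights 1 and 4.
w = {0: {1: 1.0, 2: 4.0}, 1: {0: 1.0}, 2: {0: 4.0}}
f = {0: 0.0, 1: 3.0, 2: 1.0}

def grad(w, f, i, j):
    """Weighted gradient evaluated on the directed edge (i, j)."""
    return math.sqrt(w[i][j]) * (f[j] - f[i])

def local_variation(w, f, i, p):
    """l_p norm of the gradient over all edges starting in vertex i."""
    if p == float("inf"):
        return max(abs(grad(w, f, i, j)) for j in w[i])
    return sum(abs(grad(w, f, i, j)) ** p for j in w[i]) ** (1.0 / p)

# Gradient values on edges (0,1) and (0,2) are 3.0 and 2.0, so the
# local variation at vertex 0 for p = 2 is sqrt(9 + 4) = sqrt(13).
print(local_variation(w, f, 0, 2))
```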

Weighted divergence

The adjoint operator $\nabla_w^* \colon \mathcal{H}(E) \rightarrow \mathcal{H}(V)$ of the weighted gradient operator is a linear operator defined by

$$\langle \nabla_w f, G \rangle_{\mathcal{H}(E)} = \langle f, \nabla_w^* G \rangle_{\mathcal{H}(V)} \quad \text{for all } f \in \mathcal{H}(V),\, G \in \mathcal{H}(E).$$

For undirected graphs with a symmetric weight function $w \in \mathcal{H}(E)$, the adjoint operator $\nabla_w^*$ applied to a function $F \in \mathcal{H}(E)$ at a vertex $x_i \in V$ has the following form:

$$\left(\nabla_w^* F\right)(x_i) = \frac{1}{2} \sum_{x_j \sim x_i} \sqrt{w(x_i, x_j)} \left( F(x_j, x_i) - F(x_i, x_j) \right).$$

One can then define the weighted divergence operator on graphs via the adjoint operator as $\operatorname{div}_w := -\nabla_w^*$. The divergence on a graph measures the net outflow of an edge function at each vertex of the graph.
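A sketch of these formulas on an illustrative path graph. For an antisymmetric edge function (a "flow" along the edges), the divergences over all vertices sum to zero, reflecting that total net outflow cancels on a graph without boundary:

```python
import math

# Illustrative symmetric weights on a path 0 - 1 - 2.
V = [0, 1, 2]
w = {(0, 1): 1.0, (1, 0): 1.0, (1, 2): 4.0, (2, 1): 4.0}

def adjoint(F, i):
    """(grad* F)(i) = 1/2 * sum_j sqrt(w(i,j)) * (F(j,i) - F(i,j))."""
    return 0.5 * sum(
        math.sqrt(wij) * (F[(j, i)] - F[(i, j)])
        for (a, j), wij in w.items() if a == i
    )

def divergence(F, i):
    """div_w := -grad*."""
    return -adjoint(F, i)

# An antisymmetric edge function, i.e. a flow along the path.
F = {(0, 1): 2.0, (1, 0): -2.0, (1, 2): 3.0, (2, 1): -3.0}

total = sum(divergence(F, i) for i in V)
print(total)  # net outflow over the whole graph cancels: 0.0
```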

Second-order differential operators

Graph Laplace operator

The weighted graph Laplacian $\Delta_w \colon \mathcal{H}(V) \rightarrow \mathcal{H}(V)$ is a well-studied operator in the graph setting. Mimicking the relationship $\operatorname{div}(\nabla f) = \Delta f$ of the Laplace operator in the continuum setting, the weighted graph Laplacian can be derived for any vertex $x_i \in V$ as:

$$\begin{aligned} (\operatorname{div}_w (\nabla_w f))(x_i) &= \frac{1}{2} \sum_{x_j \sim x_i} \sqrt{w(x_i, x_j)} \left( \nabla_w f(x_i, x_j) - \nabla_w f(x_j, x_i) \right) \\ &= \frac{1}{2} \sum_{x_j \sim x_i} \sqrt{w(x_i, x_j)} \left( \sqrt{w(x_i, x_j)}\, (f(x_j) - f(x_i)) - \sqrt{w(x_j, x_i)}\, (f(x_i) - f(x_j)) \right) \\ &= \frac{1}{2} \sum_{x_j \sim x_i} w(x_i, x_j) \left( 2 f(x_j) - 2 f(x_i) \right) \\ &= \sum_{x_j \sim x_i} w(x_i, x_j) \left( f(x_j) - f(x_i) \right) \ =: \ (\Delta_w f)(x_i). \end{aligned}$$

Note that one has to assume that the graph $G$ is undirected and has a symmetric weight function $w(x_i, x_j) = w(x_j, x_i)$ for this representation.
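The derivation above can be checked numerically. The sketch below (graph and data are illustrative) computes $\operatorname{div}_w(\nabla_w f)$ from the gradient and adjoint formulas and compares it with the direct formula for the graph Laplacian:

```python
import math

# Illustrative path graph 0 - 1 - 2 with symmetric weights.
V = [0, 1, 2]
w = {(0, 1): 1.0, (1, 0): 1.0, (1, 2): 4.0, (2, 1): 4.0}
f = {0: 0.0, 1: 1.0, 2: 3.0}

def grad(f):
    """(grad f)(i, j) = sqrt(w(i,j)) * (f(j) - f(i)) on every edge."""
    return {(i, j): math.sqrt(wij) * (f[j] - f[i]) for (i, j), wij in w.items()}

def div(F, i):
    """div_w F = -grad* F, with grad* as in the adjoint formula above."""
    return -0.5 * sum(
        math.sqrt(wij) * (F[(j, i)] - F[(i, j)])
        for (a, j), wij in w.items() if a == i
    )

def laplacian(f, i):
    """Direct formula: sum_j w(i,j) * (f(j) - f(i))."""
    return sum(wij * (f[j] - f[i]) for (a, j), wij in w.items() if a == i)

# div(grad f) agrees with the Laplacian at every vertex.
for i in V:
    assert div(grad(f), i) == laplacian(f, i)
print([laplacian(f, i) for i in V])  # [1.0, 7.0, -8.0]
```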

Graph p-Laplace operators

The continuous $p$-Laplace operator is a second-order differential operator that translates well to finite weighted graphs. It allows the translation of various partial differential equations, e.g., the heat equation, to the graph setting.

Based on the first-order partial difference operators on graphs, one can formally derive a family of weighted graph $p$-Laplace operators $\Delta_{w,p} \colon \mathcal{H}(V) \rightarrow \mathcal{H}(V)$ for $1 \leq p < \infty$ by minimization of the discrete $p$-Dirichlet energy functional

$$E(f) := \frac{1}{p} \sum_{x_i \in V} \|\nabla_w f(x_i, \cdot)\|_{\ell_p}^p.$$

The necessary optimality conditions for a minimizer of the energy functional $E$ lead to the following definition of the graph $p$-Laplacian:

$$(\Delta_{w,p} f)(x_i) := \sum_{x_j \sim x_i} w(x_i, x_j)^{\frac{p}{2}}\, |f(x_j) - f(x_i)|^{p-2}\, (f(x_j) - f(x_i)).$$

Note that the graph Laplace operator is the special case of the graph $p$-Laplace operator for $p = 2$, i.e.,

$$(\Delta_{w,2} f)(x_i) = (\Delta_w f)(x_i) = \sum_{x_j \sim x_i} w(x_i, x_j)\, (f(x_j) - f(x_i)).$$
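A sketch of the graph $p$-Laplacian on illustrative data; for $p = 2$ it reproduces the ordinary graph Laplacian:

```python
# Illustrative path graph 0 - 1 - 2; w[i] maps neighbors of i to weights.
w = {0: {1: 1.0}, 1: {0: 1.0, 2: 4.0}, 2: {1: 4.0}}
f = [0.0, 1.0, 3.0]

def p_laplacian(w, f, i, p):
    """(Delta_{w,p} f)(i) = sum_j w(i,j)^(p/2) |f(j)-f(i)|^(p-2) (f(j)-f(i))."""
    total = 0.0
    for j, wij in w[i].items():
        d = f[j] - f[i]
        if d == 0.0:  # term vanishes; also avoids 0 ** negative for p < 2
            continue
        total += (wij ** (p / 2)) * (abs(d) ** (p - 2)) * d
    return total

# For p = 2 this is the ordinary graph Laplacian:
print([p_laplacian(w, f, i, 2) for i in (0, 1, 2)])  # [1.0, 7.0, -8.0]
```

For p = 1 each term reduces to sqrt(w(i,j)) times the sign of f(j) - f(i), which is why the 1-Laplacian is closely related to total-variation-type models.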

Applications

Calculus on finite weighted graphs is used in a wide range of applications from different fields, such as image processing, machine learning, and network analysis.

Notes

1. Note that a slightly different definition of undirected graph is also in use, which considers an undirected edge to be a two-set (set with two distinct elements) $\{x_i, x_j\}$ instead of a pair of ordered pairs $(x_i, x_j)$ and $(x_j, x_i)$. Here the latter description is needed, as edge functions in $\mathcal{H}(E)$ (see the section about the space of edge functions) must be allowed to take different values on $(x_i, x_j)$ and $(x_j, x_i)$.
