MRF optimization via dual decomposition


In dual decomposition, a problem is broken into smaller subproblems, and a solution to the relaxed problem is found by combining the solutions of these subproblems. The method can be employed for MRF optimization, and it has also been applied to inference in Markov logic networks.

Background

Discrete MRF optimization (inference) is important in machine learning and computer vision, and it can be accelerated on CUDA-capable graphics processing units. Consider a graph $G=(V,\varepsilon)$ with nodes $V$ and edges $\varepsilon$. The goal is to assign a label $l_{p}$ to each node $p\in V$ so that the MRF energy is minimized:

(1) $\min \sum_{p\in V}\theta_{p}(l_{p})+\sum_{pq\in \varepsilon }\theta_{pq}(l_{p},l_{q})$

Major MRF optimization methods are based on graph cuts or message passing. They rely on the following integer linear programming formulation:

(2) $\min_{x}E(\theta ,x)=\theta \cdot x=\sum_{p\in V}\theta_{p}\cdot x_{p}+\sum_{pq\in \varepsilon }\theta_{pq}\cdot x_{pq}$

In many applications, the MRF variables are $\{0,1\}$-variables that satisfy: $x_{p}(l)=1$ $\Leftrightarrow$ label $l$ is assigned to $p$, and $x_{pq}(l,l')=1$ $\Leftrightarrow$ labels $l,l'$ are assigned to $p,q$.
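As a concrete illustration of the energy in (1), the following minimal Python sketch evaluates a labeling on a toy instance; the graph, potentials, and all names are assumptions made for the example, not taken from the source.

```python
# Evaluate the MRF energy (1) for a given labeling on a toy graph.
def mrf_energy(nodes, edges, theta_p, theta_pq, labeling):
    """E = sum_p theta_p(l_p) + sum_{pq} theta_pq(l_p, l_q)."""
    unary = sum(theta_p[p][labeling[p]] for p in nodes)
    pairwise = sum(theta_pq[(p, q)][labeling[p]][labeling[q]]
                   for (p, q) in edges)
    return unary + pairwise

# A 3-node chain with binary labels {0, 1}.
nodes = [0, 1, 2]
edges = [(0, 1), (1, 2)]
theta_p = {0: [0.0, 1.0], 1: [0.5, 0.0], 2: [1.0, 0.0]}
# Potts-like pairwise cost: penalty 1.0 when neighboring labels disagree.
theta_pq = {e: [[0.0, 1.0], [1.0, 0.0]] for e in edges}

print(mrf_energy(nodes, edges, theta_p, theta_pq, {0: 0, 1: 1, 2: 1}))  # 1.0
```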

Dual Decomposition

The main idea behind decomposition is surprisingly simple:

  1. decompose your original complex problem into smaller solvable subproblems,
  2. extract a solution by cleverly combining the solutions from these subproblems.

A sample problem to decompose:

$\min_{x}\sum_{i}f^{i}(x)$ where $x\in C$

In this problem, separately minimizing every single $f^{i}(x)$ over $x$ is easy, but minimizing their sum is hard. So the problem is decomposed using auxiliary variables $\{x^{i}\}$ and becomes:

$\min_{\{x^{i}\},x}\sum_{i}f^{i}(x^{i})$ where $x^{i}\in C,\ x^{i}=x$

Now we can relax the coupling constraints $x^{i}=x$ with multipliers $\{\lambda^{i}\}$, which gives us the following Lagrangian dual function:

$g(\{\lambda^{i}\})=\min_{\{x^{i}\in C\},x}\sum_{i}f^{i}(x^{i})+\sum_{i}\lambda^{i}\cdot(x^{i}-x)=\min_{\{x^{i}\in C\},x}\sum_{i}\left[f^{i}(x^{i})+\lambda^{i}\cdot x^{i}\right]-\left(\sum_{i}\lambda^{i}\right)\cdot x$

Now we eliminate $x$ from the dual function by minimizing over it: this minimum is $-\infty$ unless $\sum_{i}\lambda^{i}=0$, so we restrict the multipliers to the set $\Lambda=\{\{\lambda^{i}\}:\sum_{i}\lambda^{i}=0\}$, and the dual function becomes:

$g(\{\lambda^{i}\})=\min_{\{x^{i}\in C\}}\sum_{i}\left[f^{i}(x^{i})+\lambda^{i}\cdot x^{i}\right]$

Since $g$ is a pointwise minimum of functions affine in the multipliers, it is concave, and we can set up a Lagrangian dual problem:

(3) $\max_{\{\lambda^{i}\}\in \Lambda }g(\{\lambda^{i}\})=\sum_{i}g^{i}(\lambda^{i}),$ The Master problem

(4) $g^{i}(\lambda^{i})=\min_{x^{i}}f^{i}(x^{i})+\lambda^{i}\cdot x^{i}$ where $x^{i}\in C$ The Slave problems
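To make the master–slave interplay concrete, here is a minimal Python sketch of projected subgradient ascent on (3) with slaves (4); the instance (a small finite set $C$ and two quadratic costs) and all names are illustrative assumptions, not from the source.

```python
# Dual decomposition on a toy problem: minimize f1(x) + f2(x) over a
# finite set C by solving the slaves (4) independently and updating the
# multipliers in the master (3) by projected subgradient ascent.
C = [-2.0, -1.0, 0.0, 1.0, 2.0]
fs = [lambda x: (x - 1.0) ** 2,      # f^1
      lambda x: (x + 1.0) ** 2]      # f^2
lam = [0.0, 0.0]                     # multipliers, kept summing to zero

for t in range(100):
    # Slave problems (4): minimize f^i(x) + lam^i * x over C independently.
    xs = [min(C, key=lambda x, f=f, l=l: f(x) + l * x)
          for f, l in zip(fs, lam)]
    # Subgradient w.r.t. lam^i is (x^i - x); projecting onto the subspace
    # sum(lam) == 0 replaces x by the mean of the slave solutions.
    xbar = sum(xs) / len(xs)
    alpha = 1.0 / (t + 1)            # diminishing step size
    lam = [l + alpha * (x - xbar) for l, x in zip(lam, xs)]

print(xs, lam)  # the slaves agree on the consensus minimizer x = 0.0
```

If all slaves agree on a common $\hat{x}$, the dual value equals the primal cost of $\hat{x}$, certifying that $\hat{x}$ is optimal.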

MRF optimization via Dual Decomposition

The original MRF optimization problem (2) is NP-hard, so we need to transform it into something easier.

Let $\tau$ be a set of sub-trees of the graph $G$ whose trees together cover all nodes and edges of $G$. The MRF defined for each tree $T\in\tau$ is smaller: its parameter vector $\theta^{T}$ and variable vector $x^{T}$ are simply restrictions to $T$ of the original MRF vectors $\theta, x$. The vectors $\theta^{T}$ must satisfy:

(5) $\sum_{T\in \tau (p)}\theta_{p}^{T}=\theta_{p},\quad \sum_{T\in \tau (pq)}\theta_{pq}^{T}=\theta_{pq}$

where $\tau(p)$ and $\tau(pq)$ denote all trees of $\tau$ that contain node $p$ and edge $pq$, respectively.
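For example, a simple choice satisfying (5) is to split each potential evenly among the trees that contain it: $\theta_{p}^{T}=\theta_{p}/|\tau(p)|$ and $\theta_{pq}^{T}=\theta_{pq}/|\tau(pq)|$. Because of (5), we can simply write: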

(6) $E(\theta ,x)=\sum_{T\in \tau }E(\theta^{T},x^{T})$

And our constraints will be:

(7) $x^{T}\in \chi^{T},\quad x^{T}=x_{|T},\quad \forall T\in \tau$

Our original MRF problem will become:

(8) $\min_{\{x^{T}\},x}\sum_{T\in \tau }E(\theta^{T},x^{T})$ where $x^{T}\in \chi^{T},\ \forall T\in \tau$ and $x^{T}=x_{|T},\ \forall T\in \tau$

Relaxing the coupling constraints $x^{T}=x_{|T}$ as before, we'll have the dual problem we were seeking:

(9) $\max_{\{\lambda^{T}\}\in \Lambda }g(\{\lambda^{T}\})=\sum_{T\in \tau }g^{T}(\lambda^{T}),$ The Master problem

where each function $g^{T}(\cdot)$ is defined as:

(10) $g^{T}(\lambda^{T})=\min_{x^{T}}E(\theta^{T}+\lambda^{T},x^{T})$ where $x^{T}\in \chi^{T}$ The Slave problems
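To make the master–slave loop concrete, the following minimal Python sketch runs projected subgradient ascent on (9)–(10) for an assumed toy instance: a 3-node cycle with binary labels, decomposed into two chains, with multipliers placed on the unary potentials only. The instance, the even splitting of potentials, and the unary-only update are illustrative assumptions; each slave is solved by enumeration here for brevity, where a real implementation would use min-sum dynamic programming on each tree.

```python
from itertools import product

# Toy MRF: a 3-node cycle with binary labels and Potts edge costs.
LABELS = (0, 1)
NODES = (0, 1, 2)
UNARY = {0: [0.0, 3.0], 1: [2.0, 0.0], 2: [2.0, 0.0]}
PAIR = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0}
# Two chains that together cover every node and edge of the cycle.
TREES = [((0, 1), (1, 2)), ((0, 1), (0, 2))]

def tree_energy(t, lam_t, lb):
    # Potentials are split evenly, as in (5): every node lies in both
    # trees, and edge (0, 1) lies in both trees, so those get half weight.
    e = sum(UNARY[p][lb[p]] / 2.0 + lam_t[p][lb[p]] for p in NODES)
    for (p, q) in TREES[t]:
        share = 2.0 if (p, q) == (0, 1) else 1.0
        e += PAIR[(p, q)] / share * (lb[p] != lb[q])
    return e

lam = [{p: [0.0, 0.0] for p in NODES} for _ in TREES]
for it in range(50):
    # Slave problems (10): exact minimization over each tree (enumeration).
    sols = [min(product(LABELS, repeat=3),
                key=lambda lb, t=t: tree_energy(t, lam[t], lb))
            for t in range(len(TREES))]
    # Projected subgradient step on the unary multipliers. The update sums
    # to zero across trees, so the effective potentials theta^T + lam^T
    # still add up to the original theta, preserving (5).
    alpha = 1.0 / (it + 1)          # diminishing step size
    for p in NODES:
        for l in LABELS:
            avg = sum(s[p] == l for s in sols) / len(TREES)
            for t in range(len(TREES)):
                lam[t][p][l] += alpha * ((sols[t][p] == l) - avg)

print(sols)  # the trees agree on the optimal labeling (0, 1, 1)
```

Here the two chains agree after a couple of iterations; on harder instances the trees may only reach weak tree agreement (cf. Theorem 4 below).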

Theoretical Properties

Theorem 1. Lagrangian relaxation (9) is equivalent to the LP relaxation of (2).

$\min_{\{x^{T}\},x}\{E(x,\theta)\mid x_{p}^{T}=x_{p},\ x^{T}\in {\text{CONVEXHULL}}(\chi^{T})\}$

Theorem 2. If the sequence of step sizes $\{\alpha_{t}\}$ satisfies $\alpha_{t}\geq 0,\ \lim_{t\to \infty }\alpha_{t}=0,\ \sum_{t=0}^{\infty }\alpha_{t}=\infty$, then the algorithm converges to the optimal solution of (9).
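For example, the diminishing schedule $\alpha_{t}=\alpha_{0}/(t+1)$ for any $\alpha_{0}>0$ satisfies all three conditions.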

Theorem 3. The distance of the current solution $\{\theta^{T}\}$ to the optimal solution $\{{\bar{\theta}}^{T}\}$ decreases at every iteration.

Theorem 4. Any solution obtained by the method satisfies the WTA (weak tree agreement) condition.

Theorem 5. For binary MRFs with sub-modular energies, the method computes a globally optimal solution.

References

  1. "MRF Optimization via Dual Decomposition" (PDF). {{cite journal}}: Cite journal requires |journal= (help)
  2. Feng Niu and Ce Zhang and Christopher Re and Jude Shavlik (2012). Scaling Inference for Markov Logic via Dual Decomposition. 2012 IEEE 12th International Conference on Data Mining. IEEE. CiteSeerX 10.1.1.244.8755. doi:10.1109/icdm.2012.96.
  3. Shervin Rahimzadeh Arashloo and Josef Kittler (2013). Efficient processing of MRFs for unconstrained-pose face recognition. 2013 IEEE Sixth International Conference on Biometrics: Theory, Applications and Systems (BTAS). IEEE. doi:10.1109/btas.2013.6712721.