Quadratic unconstrained binary optimization

Combinatorial optimization problem

Quadratic unconstrained binary optimization (QUBO), also known as unconstrained binary quadratic programming (UBQP), is a combinatorial optimization problem with a wide range of applications from finance and economics to machine learning. QUBO is an NP-hard problem, and embeddings into QUBO have been formulated for many classical problems from theoretical computer science, such as maximum cut, graph coloring and the partition problem. Embeddings for machine learning models include support-vector machines, clustering and probabilistic graphical models. Moreover, due to its close connection to Ising models, QUBO constitutes a central problem class for adiabatic quantum computation, where it is solved through a physical process called quantum annealing.

Definition

The set of binary vectors of a fixed length {\displaystyle n>0} is denoted by {\displaystyle \mathbb {B} ^{n}}, where {\displaystyle \mathbb {B} =\lbrace 0,1\rbrace } is the set of binary values (or bits). We are given a real-valued upper triangular matrix {\displaystyle Q\in \mathbb {R} ^{n\times n}}, whose entries {\displaystyle Q_{ij}} define a weight for each pair of indices {\displaystyle i,j\in \lbrace 1,\dots ,n\rbrace } within the binary vector. We can define a function {\displaystyle f_{Q}:\mathbb {B} ^{n}\rightarrow \mathbb {R} } that assigns a value to each binary vector through

{\displaystyle f_{Q}(x)=x^{\top }Qx=\sum _{i=1}^{n}\sum _{j=i}^{n}Q_{ij}x_{i}x_{j}}

Intuitively, the weight {\displaystyle Q_{ij}} is added if both {\displaystyle x_{i}} and {\displaystyle x_{j}} have value 1. When {\displaystyle i=j}, the values {\displaystyle Q_{ii}} are added if {\displaystyle x_{i}=1}, as {\displaystyle x_{i}x_{i}=x_{i}} for all {\displaystyle x_{i}\in \mathbb {B} }.
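
As a concrete illustration of the definition, {\displaystyle f_{Q}} can be evaluated as a matrix product. The following minimal sketch uses NumPy; the upper triangular matrix and the binary vector are purely illustrative.

    import numpy as np

    # Illustrative upper triangular QUBO matrix (n = 3).
    Q = np.array([[-1.0,  2.0,  0.5],
                  [ 0.0, -2.0,  1.0],
                  [ 0.0,  0.0, -0.5]])

    def f_Q(Q, x):
        """Evaluate f_Q(x) = x^T Q x for a binary vector x."""
        x = np.asarray(x, dtype=float)
        return float(x @ Q @ x)

    # x = (1, 0, 1) picks up Q[0,0], Q[2,2] and the pair weight Q[0,2].
    print(f_Q(Q, [1, 0, 1]))  # -1.0 - 0.5 + 0.5 = -1.0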

The QUBO problem consists of finding a binary vector {\displaystyle x^{*}} that is minimal with respect to {\displaystyle f_{Q}}, namely

{\displaystyle \forall x\in \mathbb {B} ^{n}:~f_{Q}(x^{*})\leq f_{Q}(x)}

In general, {\displaystyle x^{*}} is not unique, meaning there may be a set of minimizing vectors with equal value w.r.t. {\displaystyle f_{Q}}. The complexity of QUBO arises from the number of candidate binary vectors to be evaluated, as {\displaystyle |\mathbb {B} ^{n}|=2^{n}} grows exponentially in {\displaystyle n}.
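
Since the search space is finite, small instances can be solved exactly by enumerating all {\displaystyle 2^{n}} binary vectors. The sketch below (plain Python/NumPy, illustrative only) does exactly that and is practical only for small {\displaystyle n}.

    import itertools
    import numpy as np

    def brute_force_qubo(Q):
        """Return a minimizer of f_Q by checking all 2^n binary vectors."""
        n = Q.shape[0]
        best_x, best_val = None, float("inf")
        for bits in itertools.product([0, 1], repeat=n):
            x = np.array(bits, dtype=float)
            val = float(x @ Q @ x)
            if val < best_val:
                best_x, best_val = np.array(bits), val
        return best_x, best_val

    # Reusing the 3x3 example matrix from above:
    Q = np.array([[-1.0,  2.0,  0.5],
                  [ 0.0, -2.0,  1.0],
                  [ 0.0,  0.0, -0.5]])
    print(brute_force_qubo(Q))  # (array([0, 1, 0]), -2.0)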

Sometimes, QUBO is defined as the problem of maximizing {\displaystyle f_{Q}}, which is equivalent to minimizing {\displaystyle f_{-Q}=-f_{Q}}.

Properties

QUBO is scale invariant for positive factors {\displaystyle \alpha >0}, which leave the optimum {\displaystyle x^{*}} unchanged:

{\displaystyle f_{\alpha Q}(x)=\sum _{i\leq j}(\alpha Q_{ij})x_{i}x_{j}=\alpha \sum _{i\leq j}Q_{ij}x_{i}x_{j}=\alpha f_{Q}(x)}

In its general form, QUBO is NP-hard and cannot be solved in polynomial time unless P = NP. However, there are polynomially solvable special cases, where {\displaystyle Q} has certain properties, for example:

  • If all coefficients are positive, the optimum is trivially {\displaystyle x^{*}=(0,\dots ,0)}. Similarly, if all coefficients are negative, the optimum is {\displaystyle x^{*}=(1,\dots ,1)}.
  • If {\displaystyle Q} is diagonal, the bits can be optimized independently, and the problem is solvable in {\displaystyle {\mathcal {O}}(n)}. The optimal variable assignments are simply {\displaystyle x_{i}^{*}=1} if {\displaystyle Q_{ii}<0}, and {\displaystyle x_{i}^{*}=0} otherwise (see the sketch after this list).
  • If all off-diagonal elements of {\displaystyle Q} are non-positive, the corresponding QUBO problem is solvable in polynomial time.
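
The diagonal case admits a direct solution. The following sketch (NumPy, with an illustrative diagonal matrix) simply sets each bit according to the sign of its diagonal entry.

    import numpy as np

    def solve_diagonal_qubo(Q):
        """Optimal assignment for diagonal Q: x_i = 1 iff Q_ii < 0."""
        q = np.diag(Q)
        x_opt = (q < 0).astype(int)
        return x_opt, float(q[q < 0].sum())

    Q = np.diag([-3.0, 2.0, -0.5, 1.0])
    print(solve_diagonal_qubo(Q))  # (array([1, 0, 1, 0]), -3.5)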

QUBO can be solved using integer linear programming solvers like CPLEX or Gurobi Optimizer. This is possible since QUBO can be reformulated as a linear constrained binary optimization problem. To achieve this, substitute the product {\displaystyle x_{i}x_{j}} by an additional binary variable {\displaystyle z_{ij}\in \{0,1\}} and add the constraints {\displaystyle x_{i}\geq z_{ij}}, {\displaystyle x_{j}\geq z_{ij}} and {\displaystyle x_{i}+x_{j}-1\leq z_{ij}}. Note that {\displaystyle z_{ij}} can also be relaxed to continuous variables within the bounds zero and one.
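
As a sketch of this reformulation, the linearized model can be written with the gurobipy modeling interface (assuming a working Gurobi installation; the matrix below is illustrative, and a solver such as CPLEX could be used analogously).

    import numpy as np
    import gurobipy as gp
    from gurobipy import GRB

    Q = np.array([[-1.0,  2.0,  0.5],
                  [ 0.0, -2.0,  1.0],
                  [ 0.0,  0.0, -0.5]])
    n = Q.shape[0]

    m = gp.Model("qubo_linearized")
    x = m.addVars(n, vtype=GRB.BINARY, name="x")

    # One auxiliary variable z_ij per off-diagonal product x_i * x_j.
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n) if Q[i, j] != 0]
    z = m.addVars(pairs, vtype=GRB.BINARY, name="z")

    for i, j in pairs:
        m.addConstr(z[i, j] <= x[i])             # x_i >= z_ij
        m.addConstr(z[i, j] <= x[j])             # x_j >= z_ij
        m.addConstr(z[i, j] >= x[i] + x[j] - 1)  # x_i + x_j - 1 <= z_ij

    # Diagonal terms use x_i * x_i = x_i; off-diagonal terms use z_ij.
    m.setObjective(
        gp.quicksum(Q[i, i] * x[i] for i in range(n))
        + gp.quicksum(Q[i, j] * z[i, j] for i, j in pairs),
        GRB.MINIMIZE,
    )
    m.optimize()
    print([int(x[i].X) for i in range(n)], m.ObjVal)  # here: [0, 1, 0] -2.0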

Applications

QUBO is a structurally simple, yet computationally hard optimization problem. It can be used to encode a wide range of optimization problems from various scientific areas.

Cluster Analysis

Figure: Visual representation of a binary clustering problem with 20 points, shown once with a random (bad) cluster assignment and once with a sensible (good) cluster assignment. Circles of the same color belong to the same cluster; each circle can be understood as a binary variable in the corresponding QUBO problem.

As an illustrative example of how QUBO can be used to encode an optimization problem, we consider the problem of cluster analysis. Here, we are given a set of 20 points in 2D space, described by a matrix {\displaystyle D\in \mathbb {R} ^{20\times 2}}, where each row contains two Cartesian coordinates. We want to assign each point to one of two classes or clusters, such that points in the same cluster are similar to each other. For two clusters, we can assign a binary variable {\displaystyle x_{i}\in \mathbb {B} } to the point corresponding to the {\displaystyle i}-th row in {\displaystyle D}, indicating whether it belongs to the first ({\displaystyle x_{i}=0}) or second cluster ({\displaystyle x_{i}=1}). Consequently, we have 20 binary variables, which form a binary vector {\displaystyle x\in \mathbb {B} ^{20}} that corresponds to a cluster assignment of all points (see figure).

One way to derive a clustering is to consider the pairwise distances between points. Given a cluster assignment {\displaystyle x}, one of {\displaystyle x_{i}x_{j}} or {\displaystyle (1-x_{i})(1-x_{j})} evaluates to 1 if points {\displaystyle i} and {\displaystyle j} are in the same cluster. Similarly, one of {\displaystyle x_{i}(1-x_{j})} or {\displaystyle (1-x_{i})x_{j}} indicates that they are in different clusters. Let {\displaystyle d_{ij}\geq 0} denote the Euclidean distance between points {\displaystyle i} and {\displaystyle j}. In order to define a cost function to minimize, when points {\displaystyle i} and {\displaystyle j} are in the same cluster we add their positive distance {\displaystyle d_{ij}}, and subtract it when they are in different clusters. This way, an optimal solution tends to place points which are far apart into different clusters, and points that are close into the same cluster. The cost function thus comes down to

{\displaystyle {\begin{aligned}f(x)&=\sum _{i<j}d_{ij}(x_{i}x_{j}+(1-x_{i})(1-x_{j}))-d_{ij}(x_{i}(1-x_{j})+(1-x_{i})x_{j})\\&=\sum _{i<j}\left[4d_{ij}x_{i}x_{j}-2d_{ij}x_{i}-2d_{ij}x_{j}+d_{ij}\right]\end{aligned}}}

From the second line, the QUBO parameters are easily found by rearranging terms:

{\displaystyle {\begin{aligned}Q_{ij}&={\begin{cases}d_{ij}&{\text{if }}i\neq j\\-\left(\sum \limits _{k=1}^{i-1}d_{ki}+\sum \limits _{\ell =i+1}^{n}d_{i\ell }\right)&{\text{if }}i=j\end{cases}}\end{aligned}}}

Using these parameters, the optimal QUBO solution corresponds to an optimal cluster assignment with respect to the above cost function.
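
The construction can be sketched in a few lines of NumPy (the 20 random points are illustrative). The matrix is written here as a full symmetric matrix, matching the case distinction on {\displaystyle i\neq j} above; folding the lower triangle into the upper one leaves {\displaystyle x^{\top }Qx} unchanged.

    import numpy as np

    rng = np.random.default_rng(0)
    D = rng.normal(size=(20, 2))              # 20 illustrative points in 2D

    # Pairwise Euclidean distances d_ij.
    diff = D[:, None, :] - D[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))

    # QUBO matrix as given above: d_ij off the diagonal,
    # minus the summed distances to all other points on the diagonal.
    Q = d.copy()
    np.fill_diagonal(Q, -d.sum(axis=1))

    x = rng.integers(0, 2, size=20)           # some cluster assignment
    print(float(x @ Q @ x))                   # its cost, up to scale and offset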

Connection to Ising models

QUBO is very closely related and computationally equivalent to the Ising model, whose Hamiltonian function is defined as

{\displaystyle H(\sigma )=-\sum _{\langle i~j\rangle }J_{ij}\sigma _{i}\sigma _{j}-\mu \sum _{j}h_{j}\sigma _{j}}

with real-valued parameters {\displaystyle h_{j},J_{ij},\mu } for all {\displaystyle i,j}. The spin variables {\displaystyle \sigma _{j}} are binary with values from {\displaystyle \lbrace -1,+1\rbrace } instead of {\displaystyle \mathbb {B} }. Moreover, in the Ising model the variables are typically arranged in a lattice where only neighboring pairs of variables {\displaystyle \langle i~j\rangle } can have non-zero coefficients. Applying the identity {\displaystyle \sigma \mapsto 2x-1} yields an equivalent QUBO problem:

{\displaystyle {\begin{aligned}f(x)&=\sum _{\langle i~j\rangle }-J_{ij}(2x_{i}-1)(2x_{j}-1)+\sum _{j}\mu h_{j}(2x_{j}-1)\\&=\sum _{\langle i~j\rangle }(-4J_{ij}x_{i}x_{j}+2J_{ij}x_{i}+2J_{ij}x_{j}-J_{ij})+\sum _{j}(2\mu h_{j}x_{j}-\mu h_{j})&&{\text{using }}x_{j}=x_{j}x_{j}\\&=\sum _{\langle i~j\rangle }(-4J_{ij}x_{i}x_{j})+\sum _{\langle i~j\rangle }2J_{ij}x_{i}+\sum _{\langle i~j\rangle }2J_{ij}x_{j}+\sum _{j}2\mu h_{j}x_{j}-\sum _{\langle i~j\rangle }J_{ij}-\sum _{j}\mu h_{j}\\&=\sum _{\langle i~j\rangle }(-4J_{ij}x_{i}x_{j})+\sum _{\langle j~i\rangle }2J_{ji}x_{j}+\sum _{\langle i~j\rangle }2J_{ij}x_{j}+\sum _{j}2\mu h_{j}x_{j}-\sum _{\langle i~j\rangle }J_{ij}-\sum _{j}\mu h_{j}&&{\text{using }}\sum _{\langle i~j\rangle }=\sum _{\langle j~i\rangle }\\&=\sum _{\langle i~j\rangle }(-4J_{ij}x_{i}x_{j})+\sum _{j}\sum _{\langle k=j~i\rangle }2J_{ki}x_{j}+\sum _{j}\sum _{\langle i~k=j\rangle }2J_{ik}x_{j}+\sum _{j}2\mu h_{j}x_{j}-\sum _{\langle i~j\rangle }J_{ij}-\sum _{j}\mu h_{j}\\&=\sum _{\langle i~j\rangle }(-4J_{ij}x_{i}x_{j})+\sum _{j}\left(\sum _{\langle i~k=j\rangle }(2J_{ki}+2J_{ik})+2\mu h_{j}\right)x_{j}-\sum _{\langle i~j\rangle }J_{ij}-\sum _{j}\mu h_{j}&&{\text{using }}\sum _{\langle k=j~i\rangle }=\sum _{\langle i~k=j\rangle }\\&=\sum _{i=1}^{n}\sum _{j=1}^{i}Q_{ij}x_{i}x_{j}+C\end{aligned}}}

where

{\displaystyle {\begin{aligned}Q_{ij}&={\begin{cases}-4J_{ij}&{\text{if }}i\neq j\\\sum _{\langle i~k=j\rangle }(2J_{ki}+2J_{ik})+2\mu h_{j}&{\text{if }}i=j\end{cases}}\\C&=-\sum _{\langle i~j\rangle }J_{ij}-\sum _{j}\mu h_{j}\end{aligned}}}

and using the fact that {\displaystyle x_{j}=x_{j}x_{j}} for a binary variable {\displaystyle x_{j}}.

As the constant {\displaystyle C} does not change the position of the optimum {\displaystyle x^{*}}, it can be neglected during optimization and is only important for recovering the original Hamiltonian function value.
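
A small sketch of this conversion (plain NumPy, assuming the couplings are given as an upper triangular matrix with one entry {\displaystyle J_{ij}} per pair {\displaystyle \langle i~j\rangle } and zeros elsewhere) builds {\displaystyle Q} and {\displaystyle C} from the formulas above, using the same sign conventions as the derivation.

    import numpy as np

    def ising_to_qubo(J, h, mu=1.0):
        """Convert Ising parameters (J upper triangular, h, mu) to (Q, C)."""
        Q = -4.0 * J                                   # off-diagonal entries, i < j
        coupling_sums = J.sum(axis=0) + J.sum(axis=1)  # couplings touching each variable
        np.fill_diagonal(Q, 2.0 * coupling_sums + 2.0 * mu * h)
        C = -J.sum() - mu * h.sum()
        return Q, C

    # Illustrative 3-spin chain with pairs (0,1) and (1,2).
    J = np.array([[0.0, 1.0,  0.0],
                  [0.0, 0.0, -0.5],
                  [0.0, 0.0,  0.0]])
    h = np.array([0.2, -0.1, 0.0])
    Q, C = ising_to_qubo(J, h)

    # Check against the first line of the derivation at sigma = 2x - 1.
    x = np.array([1, 0, 1])
    sigma = 2 * x - 1
    lhs = float(x @ Q @ x) + C
    rhs = float(-(sigma @ J @ sigma) + (h * sigma).sum())
    print(np.isclose(lhs, rhs))  # True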

References

  1. Kochenberger, Gary; Hao, Jin-Kao; Glover, Fred; Lewis, Mark; Lu, Zhipeng; Wang, Haibo; Wang, Yang (2014). "The unconstrained binary quadratic programming problem: a survey" (PDF). Journal of Combinatorial Optimization. 28: 58–81. doi:10.1007/s10878-014-9734-0. S2CID 16808394.
  2. Glover, Fred; Kochenberger, Gary (2019). "A Tutorial on Formulating and Using QUBO Models". arXiv:1811.11538.
  3. Lucas, Andrew (2014). "Ising formulations of many NP problems". Frontiers in Physics. 2: 5. arXiv:1302.5843. Bibcode:2014FrP.....2....5L. doi:10.3389/fphy.2014.00005.
  4. Mücke, Sascha; Piatkowski, Nico; Morik, Katharina (2019). "Learning Bit by Bit: Extracting the Essence of Machine Learning" (PDF). LWDA. S2CID 202760166. Archived from the original (PDF) on 2020-02-27.
  5. Tom Simonite (8 May 2013). "D-Wave's Quantum Computer Goes to the Races, Wins". MIT Technology Review. Archived from the original on 24 September 2015. Retrieved 12 May 2013.
  6. A. P. Punnen (editor), Quadratic Unconstrained Binary Optimization Problem: Theory, Algorithms, and Applications, Springer, 2022.
  7. Çela, E., Punnen, A.P. (2022). Complexity and Polynomially Solvable Special Cases of QUBO. In: Punnen, A.P. (eds) The Quadratic Unconstrained Binary Optimization Problem. Springer, Cham. https://doi.org/10.1007/978-3-031-04520-2_3
  8. See Theorem 3.16 in Punnen (2022); note that the authors assume the maximization version of QUBO.
  9. Ratke, Daniel (2021-06-10). "List of QUBO formulations". Retrieved 2022-12-16.

External links

  • QUBO Benchmark (Benchmark of software packages for the exact solution of QUBOs; part of the well-known Mittelmann benchmark collection)

