
MAXEkSAT


MAXEkSAT is a problem in computational complexity theory that is a maximization version of the Boolean satisfiability problem 3SAT. In MAXEkSAT, each clause has exactly k literals, each over a distinct variable, and the formula is in conjunctive normal form. Such formulas are called k-CNF formulas. The problem is to determine the maximum number of clauses that can be satisfied by a truth assignment to the variables in the clauses.

We say that an algorithm A provides an α-approximation to MAXEkSAT if, for some fixed α with 0 < α ≤ 1, and for every k-CNF formula φ, A can find a truth assignment to the variables of φ that satisfies at least an α-fraction of the maximum number of simultaneously satisfiable clauses of φ.

Because the NP-hard k-SAT problem (for k ≥ 3) is equivalent to determining whether the corresponding MAXEkSAT instance has a value equal to the number of clauses, MAXEkSAT must also be NP-hard, meaning that there is no polynomial-time algorithm for it unless P = NP. A natural next question, then, is that of finding approximate solutions: what is the largest real number α < 1 such that some explicit polynomial-time algorithm always finds a solution of value α·OPT, where OPT is the (potentially hard to find) number of clauses satisfied by a maximizing assignment?

Approximation Algorithm

There is a simple randomized polynomial-time algorithm that provides a (1 − 1/2^k)-approximation to MAXEkSAT: independently set each variable to true with probability 1/2, and otherwise set it to false.
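A minimal Python sketch of this randomized algorithm follows; the encoding and the sample instance are illustrative assumptions (clauses are given DIMACS-style, as lists of signed 1-based literals), not taken from the source.

    import random

    def random_assignment(num_vars):
        # Independently set each variable to true with probability 1/2.
        return [random.random() < 0.5 for _ in range(num_vars)]

    def satisfied_fraction(clauses, assignment):
        # A clause (a list of signed, 1-based literals) is satisfied if at
        # least one of its literals evaluates to true.
        satisfied = sum(
            any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
            for clause in clauses)
        return satisfied / len(clauses)

    # Illustrative E3SAT instance over 4 variables.
    clauses = [[1, 2, 3], [-1, 2, 4], [1, -3, -4], [-2, 3, 4]]
    print(satisfied_fraction(clauses, random_assignment(4)))  # >= 7/8 on average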

Any given clause c is unsatisfied only if all of its k constituent literals evaluate to false. Because each literal within a clause has a 1/2 chance of evaluating to true, independently of the truth values of the other literals, the probability that they are all false is (1/2)^k = 1/2^k. Thus, the probability that c is satisfied is 1 − 1/2^k, so the indicator variable 1_c (which is 1 if c is satisfied and 0 otherwise) has expectation 1 − 1/2^k. The expected sum of the indicator variables over all |C| clauses is (1 − 1/2^k)·|C|, so by linearity of expectation we satisfy a (1 − 1/2^k) fraction of the clauses in expectation. Because the optimal solution cannot satisfy more than all |C| of the clauses, we have ALG = (1 − 1/2^k)·|C| ≥ (1 − 1/2^k)·OPT, so the algorithm finds at least a (1 − 1/2^k)-approximation to the true optimal solution in expectation.
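Since a uniformly random assignment is the same as averaging over all 2^n assignments, this expectation can be verified by brute force on a small instance: when every clause has k distinct variables, the average satisfied fraction is exactly 1 − 1/2^k. A sketch, reusing the illustrative encoding above:

    from itertools import product

    def exact_expected_fraction(clauses, num_vars):
        # Average the satisfied fraction over all 2^n assignments; this equals
        # the expectation achieved by the randomized algorithm above.
        total = 0.0
        for bits in product([False, True], repeat=num_vars):
            sat = sum(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
                      for clause in clauses)
            total += sat / len(clauses)
        return total / 2 ** num_vars

    clauses = [[1, 2, 3], [-1, 2, 4], [1, -3, -4], [-2, 3, 4]]
    print(exact_expected_fraction(clauses, 4))  # prints 0.875 = 1 - 1/2^3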

Despite its high expectation, this algorithm may occasionally stumble upon solutions of value lower than the expectation computed above. However, over a large number of trials, the average fraction of satisfied clauses tends towards 1 − 1/2^k. This implies two things:

  1. There must exist an assignment satisfying at least a (1 − 1/2^k) fraction of the clauses. If there weren't, we could never attain a value this large on average over a large number of trials.
  2. If we run the algorithm a large number of times, at least half of the trials (in expectation) will satisfy some (1 − 2/2^k) fraction of the clauses. This is because any smaller fraction would bring down the average enough that the algorithm would have to occasionally satisfy more than 100% of the clauses to get back to its expectation of 1 − 1/2^k, which cannot happen. Extending this using Markov's inequality, at least a (1 − 1/(1 + 2^k·ε))-fraction of the trials (in expectation) will satisfy at least a (1 − 1/2^k − ε)-fraction of the clauses. Therefore, for any positive ε, it takes only a polynomial number of random trials until we expect to find an assignment satisfying at least a (1 − 1/2^k − ε) fraction of the clauses; a sketch of this amplification follows the list.
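A sketch of this repeat-until-success amplification (the helper names and the trial cap are illustrative assumptions):

    import random

    def frac_satisfied(clauses, assignment):
        return sum(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
                   for clause in clauses) / len(clauses)

    def amplify(clauses, num_vars, k, eps, max_trials=100000):
        # Redraw random assignments until one satisfies at least a
        # (1 - 1/2^k - eps) fraction; by the Markov argument above the
        # expected number of trials is polynomial in 1/eps.
        target = 1 - 1 / 2 ** k - eps
        for _ in range(max_trials):
            assignment = [random.random() < 0.5 for _ in range(num_vars)]
            if frac_satisfied(clauses, assignment) >= target:
                return assignment
        return None  # extremely unlikely with a generous trial cap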

A more robust analysis (such as that in [1]) shows that we will, in fact, satisfy at least a (1 − 1/2^k)-fraction of the clauses a constant fraction of the time (depending only on k), with no loss of ε.

Derandomization

While the above algorithm is efficient, it is not obvious how to remove its dependence on randomness. Trying out all possible random assignments is equivalent to the naive brute-force approach, and so may take exponential time. One clever way to derandomize the above algorithm in polynomial time relies on work in error-correcting codes, satisfying a (1 − 1/2^k) fraction of the clauses in time polynomial in the input size (although the exponent depends on k).

We need one definition and two facts to describe the algorithm.

Definition

A set S ⊆ {0,1}^n is an ℓ-wise independent source if, for a uniformly chosen random (x1, x2, ..., xn) ∈ S, the random variables x1, x2, ..., xn are ℓ-wise independent.
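For example, S = {000, 011, 101, 110} ⊆ {0,1}^3 is a 2-wise independent source: restricted to any two of the three coordinates, each of the four patterns 00, 01, 10, 11 appears exactly once. It is not 3-wise independent, since the pattern 111 never appears.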

Fact 1

An assignment satisfying at least a (1 − 1/2^k) fraction of the clauses can be found among the elements of any ℓ-wise independent source over n binary variables, provided ℓ ≥ k. This is easier to see once you realize that an ℓ-wise independent source is really just a set of binary vectors of length n with the property that all restrictions of those vectors to ℓ coordinates present each of the 2^ℓ possible binary combinations an equal number of times.
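This equal-frequency characterization is easy to test directly for small sources. A Python sketch of the brute-force check (function name and encoding are illustrative):

    from itertools import combinations, product

    def is_l_wise_independent(source, l):
        # Equal-frequency test: every restriction of the vectors in `source`
        # to l coordinates must hit each of the 2^l patterns equally often.
        n = len(source[0])
        if len(source) % (2 ** l) != 0:
            return False
        expected = len(source) // (2 ** l)
        for coords in combinations(range(n), l):
            counts = {}
            for vec in source:
                pattern = tuple(vec[i] for i in coords)
                counts[pattern] = counts.get(pattern, 0) + 1
            if any(counts.get(p, 0) != expected
                   for p in product((0, 1), repeat=l)):
                return False
        return True

    # The example source from the definition: 2-wise but not 3-wise independent.
    S = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
    print(is_l_wise_independent(S, 2), is_l_wise_independent(S, 3))  # True False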

Fact 2

Recall that BCH_{2,m,d} is an [n = 2^m, n − 1 − ⌈(d − 2)/2⌉·m, d]_2 linear code.

There exists an ℓ-wise independent source of size O(n^⌊ℓ/2⌋), namely the dual of a BCH_{2,log n,ℓ+1} code, which is a linear code. Since every BCH code can be presented as a polynomial-time computable restriction of a related Reed–Solomon code, which itself is strongly explicit, there is a polynomial-time algorithm for finding such an assignment to the x_i's. The proof of Fact 2 can be found at Dual of BCH is an independent source.
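Building the BCH dual in full generality is involved, but a small special case illustrates the pattern. For ℓ = 3, the first-order Reed–Muller code (the truth tables of all affine functions over GF(2)^m) is the dual of the distance-4 extended Hamming code, and is therefore a 3-wise independent source of size 2n over n = 2^m bits. The sketch below uses this Reed–Muller stand-in, which is an illustrative special case rather than the article's general construction, and reuses the checker from the Fact 1 sketch:

    from itertools import product

    def affine_source(m):
        # Truth tables of all affine functions f(x) = a.x XOR b on GF(2)^m.
        # This is the first-order Reed-Muller code: 2n vectors of length
        # n = 2^m, and, as the dual of the distance-4 extended Hamming code,
        # a 3-wise independent source.
        points = list(product((0, 1), repeat=m))
        source = []
        for a in product((0, 1), repeat=m):
            for b in (0, 1):
                source.append(tuple(
                    (sum(ai * xi for ai, xi in zip(a, x)) + b) % 2
                    for x in points))
        return source

    S = affine_source(3)                # 16 vectors of length 8
    print(is_l_wise_independent(S, 3))  # True (checker from the Fact 1 sketch)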

Outline of the Algorithm

The algorithm works by generating BCH_{2,log n,ℓ+1}, computing its dual (which, as a set, is an ℓ-wise independent source), and treating each element (codeword) of that source as a truth assignment to the n variables in φ. By Fact 1, at least one of them will satisfy at least a (1 − 1/2^k) fraction of the clauses of φ, whenever φ is in k-CNF form with k = ℓ.
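Putting the pieces together for k = ℓ = 3, a hedged end-to-end sketch (using affine_source from the Fact 2 sketch as a stand-in for the BCH dual; helper names are illustrative):

    def frac_satisfied(clauses, assignment):
        return sum(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
                   for clause in clauses) / len(clauses)

    def derandomized_maxe3sat(clauses, num_vars):
        # Enumerate a 3-wise independent source and keep the best element,
        # viewed as a truth assignment; Fact 1 guarantees the best one
        # satisfies at least a (1 - 1/2^3) = 7/8 fraction of the clauses.
        m = max(2, num_vars.bit_length())   # ensure 2^m >= num_vars
        source = affine_source(m)           # from the Fact 2 sketch above
        best, best_frac = None, -1.0
        for word in source:
            # Restricting an l-wise independent source to a subset of its
            # coordinates preserves l-wise independence.
            assignment = [bit == 1 for bit in word[:num_vars]]
            frac = frac_satisfied(clauses, assignment)
            if frac > best_frac:
                best, best_frac = assignment, frac
        return best, best_frac

    clauses = [[1, 2, 3], [-1, 2, 4], [1, -3, -4], [-2, 3, 4]]
    print(derandomized_maxe3sat(clauses, 4))  # best fraction >= 0.875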

Related problems

There are many problems related to the satisfiability of conjunctive normal form Boolean formulas.

  • Decision problems, such as 3SAT.
  • Optimization problems, where the goal is to maximize the number of clauses satisfied:
    • MAX-SAT, and its weighted version, Weighted MAX-SAT.
    • MAX-kSAT, where each clause has exactly k literals.
    • The partial maximum satisfiability problem (PMAX-SAT), which asks for the maximum number of clauses that can be satisfied by any assignment of a given subset of clauses; the rest of the clauses must all be satisfied.
    • The soft satisfiability problem (soft-SAT), which, given a set of SAT problems, asks for the maximum number of those problems that can be satisfied by any assignment.
    • The minimum satisfiability problem.
  • The MAX-SAT problem can be extended to the case where the variables of the constraint satisfaction problem belong to the set of reals. The problem then amounts to finding the smallest q such that the q-relaxed intersection of the constraints is not empty.

References

  1. "Max-SAT" (PDF). Archived from the original (PDF) on 2015-09-23. Retrieved 2014-09-01.
  2. Josep Argelich and Felip Manyà. Exact Max-SAT solvers for over-constrained problems. In Journal of Heuristics 12(4) pp. 375-392. Springer, 2006.
  3. Jaulin, L.; Walter, E. (2002). "Guaranteed robust nonlinear minimax estimation" (PDF). IEEE Transactions on Automatic Control. 47 (11): 1857–1864. doi:10.1109/TAC.2002.804479.
