Maximum satisfiability problem

In computational complexity theory, the maximum satisfiability problem (MAX-SAT) is the problem of determining the maximum number of clauses, of a given Boolean formula in conjunctive normal form, that can be made true by an assignment of truth values to the variables of the formula. It is a generalization of the Boolean satisfiability problem, which asks whether there exists a truth assignment that makes all clauses true.
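As a concrete (if naive) illustration, the following Python sketch computes this maximum by exhaustive enumeration; the DIMACS-style clause encoding and the function name are choices made here for the example, and the approach is only practical for small formulas.

    from itertools import product

    def max_sat_brute_force(clauses, num_vars):
        """Return the largest number of clauses satisfied by any assignment.

        Each clause is a list of non-zero integers: literal i means variable i
        is true, literal -i means variable i is false (a DIMACS-style encoding
        chosen here for convenience). Runs in time O(2^n * formula size).
        """
        best = 0
        for assignment in product([False, True], repeat=num_vars):
            satisfied = sum(
                any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
                for clause in clauses
            )
            best = max(best, satisfied)
        return best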

Example

The conjunctive normal form formula

(x_1 \lor x_2) \land (x_1 \lor \lnot x_2) \land (\lnot x_1 \lor x_2) \land (\lnot x_1 \lor \lnot x_2)

is not satisfiable: no matter which truth values are assigned to its two variables, at least one of its four clauses will be false. However, it is possible to assign truth values in such a way as to make three out of the four clauses true; indeed, every truth assignment does this. Therefore, if this formula is given as an instance of the MAX-SAT problem, the solution to the problem is the number three.
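Using the brute-force sketch above (with the same hypothetical encoding), this can be checked directly:

    # The four clauses: (x_1 ∨ x_2), (x_1 ∨ ¬x_2), (¬x_1 ∨ x_2), (¬x_1 ∨ ¬x_2).
    clauses = [[1, 2], [1, -2], [-1, 2], [-1, -2]]
    print(max_sat_brute_force(clauses, num_vars=2))  # prints 3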

Hardness

The MAX-SAT problem is OptP-complete,[1] and thus NP-hard, since a solution to MAX-SAT immediately yields a solution to the Boolean satisfiability problem, which is NP-complete.

It is also difficult to approximate the problem, that is, to find an assignment that satisfies a number of clauses within a guaranteed approximation ratio of the optimum. More precisely, the problem is APX-complete, and thus does not admit a polynomial-time approximation scheme unless P = NP.[2][3][4]

Weighted MAX-SAT

More generally, one can define a weighted version of MAX-SAT as follows: given a conjunctive normal form formula with non-negative weights assigned to each clause, find truth values for its variables that maximize the combined weight of the satisfied clauses. The MAX-SAT problem is an instance of Weighted MAX-SAT where all weights are 1.[5][6][7]
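A minimal sketch of the weighted objective, extending the brute-force enumeration above (again with illustrative names and encoding, and feasible only for tiny instances):

    from itertools import product

    def weighted_max_sat_brute_force(clauses, weights, num_vars):
        """Return the maximum total weight of satisfied clauses over all
        2^n assignments, using the same signed-integer clause encoding."""
        best = 0.0
        for assignment in product([False, True], repeat=num_vars):
            total = sum(
                w for clause, w in zip(clauses, weights)
                if any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
            )
            best = max(best, total)
        return best

    # With all weights equal to 1 this is exactly MAX-SAT:
    # weighted_max_sat_brute_force([[1, 2], [-1], [-2]], [1, 1, 1], 2) == 2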

Approximation algorithms

1/2-approximation

Randomly assigning each variable to be true with probability 1/2 gives an expected 1/2-approximation: a clause with k literals is left unsatisfied only when all k independent choices go against it, which happens with probability 2^{-k} \le 1/2. More precisely, if each clause has at least k variables, this yields an expected (1 - 2^{-k})-approximation.[8] This algorithm can be derandomized using the method of conditional probabilities.[9]
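A sketch of both the random assignment and its derandomization by conditional expectations is given below; it handles clause weights, and the helper names are illustrative rather than taken from any particular library.

    import random

    def satisfied_weight(clauses, weights, assignment):
        """Total weight of clauses satisfied by a complete assignment,
        given as a dict mapping variable index (1..n) to a bool."""
        return sum(
            w for clause, w in zip(clauses, weights)
            if any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        )

    def random_half_assignment(num_vars):
        """Randomized version: each variable is true with probability 1/2."""
        return {v: random.random() < 0.5 for v in range(1, num_vars + 1)}

    def expected_weight(clauses, weights, partial):
        """Expected satisfied weight when the variables in `partial` are fixed
        and every remaining variable is true with probability 1/2: a clause
        not yet satisfied and with j unfixed literals is satisfied with
        probability 1 - 2^(-j)."""
        total = 0.0
        for clause, w in zip(clauses, weights):
            if any(abs(lit) in partial and partial[abs(lit)] == (lit > 0) for lit in clause):
                total += w          # already satisfied by the fixed variables
                continue
            free = sum(1 for lit in clause if abs(lit) not in partial)
            total += w * (1.0 - 0.5 ** free)   # 0 once every literal is fixed false
        return total

    def derandomized_half(clauses, weights, num_vars):
        """Method of conditional expectations: fix variables one at a time,
        keeping whichever value gives the larger conditional expectation."""
        partial = {}
        for v in range(1, num_vars + 1):
            if expected_weight(clauses, weights, {**partial, v: True}) >= \
               expected_weight(clauses, weights, {**partial, v: False}):
                partial[v] = True
            else:
                partial[v] = False
        return partial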

(1-1/e)-approximation

MAX-SAT can also be expressed using an integer linear program (ILP). Fix a conjunctive normal form formula F with variables x_1, x_2, ..., x_n, let C denote the clauses of F, and let w_c denote the weight of clause c (with every w_c = 1 in the unweighted case). For each clause c in C, let S_c^+ and S_c^- denote the sets of variables that are not negated in c and those that are negated in c, respectively. The variables y_x of the ILP will correspond to the variables of the formula F, whereas the variables z_c will correspond to the clauses. The ILP is as follows:

maximize \sum_{c \in C} w_c z_c (maximize the weight of the satisfied clauses)
subject to z_c \le \sum_{x \in S_c^+} y_x + \sum_{x \in S_c^-} (1 - y_x) for all c \in C (clause is true iff it has a true, non-negated variable or a false, negated one)
z_c \in \{0, 1\} for all c \in C. (every clause is either satisfied or not)
y_x \in \{0, 1\} for all x. (every variable is either true or false)

The above program can be relaxed to the following linear program L:

maximize \sum_{c \in C} w_c z_c (maximize the weight of the satisfied clauses)
subject to z_c \le \sum_{x \in S_c^+} y_x + \sum_{x \in S_c^-} (1 - y_x) for all c \in C (clause is true iff it has a true, non-negated variable or a false, negated one)
0 \le z_c \le 1 for all c \in C.
0 \le y_x \le 1 for all x.
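As a sketch of how the relaxation might be solved in practice, assuming SciPy is available (its linprog routine is used here; the variable ordering and clause encoding are choices made for this example):

    import numpy as np
    from scipy.optimize import linprog

    def solve_max_sat_lp(clauses, weights, num_vars):
        """Solve the relaxation L. Decision variables are ordered
        [y_1, ..., y_n, z_1, ..., z_m]; clauses use the signed-integer
        literal encoding from the earlier sketches."""
        n, m = num_vars, len(clauses)
        # Objective: maximize sum_c w_c z_c, i.e. minimize -sum_c w_c z_c.
        c = np.concatenate([np.zeros(n), -np.asarray(weights, dtype=float)])
        # One inequality per clause:
        #   z_c - sum_{x in S_c^+} y_x + sum_{x in S_c^-} y_x <= |S_c^-|.
        A_ub = np.zeros((m, n + m))
        b_ub = np.zeros(m)
        for i, clause in enumerate(clauses):
            A_ub[i, n + i] = 1.0                 # coefficient of z_c
            for lit in clause:
                if lit > 0:
                    A_ub[i, lit - 1] -= 1.0      # x appears non-negated (S_c^+)
                else:
                    A_ub[i, -lit - 1] += 1.0     # x appears negated (S_c^-)
                    b_ub[i] += 1.0
        bounds = [(0.0, 1.0)] * (n + m)          # 0 <= y_x, z_c <= 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        y, z = res.x[:n], res.x[n:]
        return y, z, -res.fun                    # fractional solution and LP value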

The following algorithm using that relaxation is an expected (1-1/e)-approximation:[10]

  1. Solve the linear program L and obtain a solution O
  2. Set each variable x to be true with probability y_x, where y_x is the value given in O.

This algorithm can also be derandomized using the method of conditional probabilities.
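A sketch of the randomized version of this rounding step, reusing solve_max_sat_lp from the sketch above:

    import random

    def randomized_rounding(clauses, weights, num_vars):
        """Expected (1 - 1/e)-approximation: solve the relaxation L and set
        each variable x true independently with probability y_x."""
        y, _, _ = solve_max_sat_lp(clauses, weights, num_vars)
        return {v: random.random() < y[v - 1] for v in range(1, num_vars + 1)}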

3/4-approximation

The 1/2-approximation algorithm does better when clauses are large whereas the (1-1/e)-approximation does better when clauses are small. They can be combined as follows:

  1. Run the (derandomized) 1/2-approximation algorithm to get a truth assignment X.
  2. Run the (derandomized) (1-1/e)-approximation to get a truth assignment Y.
  3. Output whichever of X or Y maximizes the weight of the satisfied clauses.

This is a deterministic factor (3/4)-approximation.[11]
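A best-of-two wrapper along these lines, reusing the earlier sketches (with the randomized rounding step standing in for its derandomized counterpart), might look as follows:

    def best_of_two(clauses, weights, num_vars):
        """Run both procedures and keep whichever assignment satisfies the
        larger total weight; uses derandomized_half, randomized_rounding and
        satisfied_weight from the sketches above."""
        x = derandomized_half(clauses, weights, num_vars)
        y = randomized_rounding(clauses, weights, num_vars)
        return max(x, y, key=lambda a: satisfied_weight(clauses, weights, a))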

Example

On the formula

where , the (1-1/e)-approximation will set each variable to True with probability 1/2, and so will behave identically to the 1/2-approximation. Assuming that the assignment of x is chosen first during derandomization, the derandomized algorithms will pick a solution with total weight , whereas the optimal solution has weight .[12]

State of the art

The state-of-the-art algorithm is due to Avidor, Berkovitch and Zwick,[13][14] and its approximation ratio is 0.7968. They also give another algorithm whose approximation ratio is conjectured to be 0.8353.

Solvers

Many exact solvers for MAX-SAT have been developed in recent years, and many of them were presented at the SAT Conference, the well-known conference on the Boolean satisfiability problem and related problems. In 2006 the SAT Conference hosted the first MAX-SAT evaluation, comparing the performance of practical MAX-SAT solvers, as it had done in the past for the pseudo-Boolean satisfiability problem and the quantified Boolean formula problem. Because of its NP-hardness, large MAX-SAT instances cannot in general be solved exactly, and one must often resort to approximation algorithms and heuristics.[15]

Several solvers have been submitted to recent MAX-SAT Evaluations:

  • Branch and Bound based: Clone, MaxSatz (based on Satz), IncMaxSatz, IUT_MaxSatz, WBO, GIDSHSat.
  • Satisfiability based: SAT4J, QMaxSat.
  • Unsatisfiability based: msuncore, WPM1, PM2.

Special cases

MAX-SAT is one of the optimization extensions of the boolean satisfiability problem, which is the problem of determining whether the variables of a given Boolean formula can be assigned in such a way as to make the formula evaluate to TRUE. If the clauses are restricted to have at most 2 literals, as in 2-satisfiability, we get the MAX-2SAT problem. If they are restricted to at most 3 literals per clause, as in 3-satisfiability, we get the MAX-3SAT problem.

Related problems

There are many problems related to the satisfiability of conjunctive normal form Boolean formulas.

  • Decision problems: 2SAT and 3SAT
  • Optimization problems, where the goal is to maximize the number of clauses satisfied:
    • MAX-SAT, and the corresponding weighted version Weighted MAX-SAT
    • MAX-kSAT, where each clause has exactly k variables (for example, MAX-2SAT and MAX-3SAT)
    • The partial maximum satisfiability problem (PMAX-SAT) asks for the maximum number of clauses which can be satisfied by any assignment of a given subset of clauses. The rest of the clauses must be satisfied.
    • The soft satisfiability problem (soft-SAT), given a set of SAT problems, asks for the maximum number of those problems which can be satisfied by any assignment.[16]
    • The minimum satisfiability problem.
  • The MAX-SAT problem can be extended to the case where the variables of the constraint satisfaction problem belong to the set of reals. The problem amounts to finding the smallest q such that the q-relaxed intersection of the constraints is not empty.[17]


References

  1. ^ M. Krentel (1988). "The complexity of optimization problems". Journal of Computer and System Sciences. 36 (3): 490–509. doi:10.1016/0022-0000(88)90039-6. hdl:1813/6559.
  2. ^ Mark Krentel. The Complexity of Optimization Problems. Proc. of STOC '86. 1986.
  3. ^ Christos Papadimitriou. Computational Complexity. Addison-Wesley, 1994.
  4. ^ Cohen, Cooper, Jeavons. A complete characterization of complexity for boolean constraint optimization problems. CP 2004.
  5. ^ Vazirani 2001, p. 131.
  6. ^ Borchers, Brian; Furman, Judith (1998-12-01). "A Two-Phase Exact Algorithm for MAX-SAT and Weighted MAX-SAT Problems". Journal of Combinatorial Optimization. 2 (4): 299–306. doi:10.1023/A:1009725216438. ISSN 1382-6905. S2CID 6736614.
  7. ^ Du, Dingzhu; Gu, Jun; Pardalos, Panos M. (1997-01-01). Satisfiability Problem: Theory and Applications : DIMACS Workshop, March 11-13, 1996. American Mathematical Soc. p. 393. ISBN 9780821870808.
  8. ^ Vazirani 2001, Lemma 16.2.
  9. ^ Vazirani 2001, Section 16.2.
  10. ^ Vazirani 2001, p. 136.
  11. ^ Vazirani 2001, Theorem 16.9.
  12. ^ Vazirani 2001, Example 16.11.
  13. ^ Avidor, Adi; Berkovitch, Ido; Zwick, Uri (2006). "Improved Approximation Algorithms for MAX NAE-SAT and MAX SAT". Approximation and Online Algorithms. Vol. 3879. Berlin, Heidelberg: Springer Berlin Heidelberg. pp. 27–40. doi:10.1007/11671411_3. ISBN 978-3-540-32207-8.
  14. ^ Makarychev, Konstantin; Makarychev, Yury (2017). "Approximation Algorithms for CSPs". In: The Constraint Satisfaction Problem: Complexity and Approximability. Dagstuhl Follow-Ups, vol. 7. doi:10.4230/DFU.VOL7.15301.287. ISSN 1868-8977.
  15. ^ Battiti, Roberto; Protasi, Marco (1998). "Approximate Algorithms and Heuristics for MAX-SAT". Handbook of Combinatorial Optimization. pp. 77–148. doi:10.1007/978-1-4613-0303-9_2. ISBN 978-1-4613-7987-4.
  16. ^ Josep Argelich and Felip Manyà. Exact Max-SAT solvers for over-constrained problems. In Journal of Heuristics 12(4) pp. 375-392. Springer, 2006.
  17. ^ Jaulin, L.; Walter, E. (2002). "Guaranteed robust nonlinear minimax estimation" (PDF). IEEE Transactions on Automatic Control. 47 (11): 1857–1864. doi:10.1109/TAC.2002.804479.