2006 | Original Paper | Book Chapter
Further Reflections on a Theory for Basic Algorithms
Author: Allan Borodin
Published in: Algorithmic Aspects in Information and Management
Publisher: Springer Berlin Heidelberg
Can we optimally solve Max2SAT in (say) time O(|F| log |F|), where |F| is the length of the formula F? Of course, since Max2SAT is NP-complete, we can confidently rely on our strongly held belief that no NP-hard problem can be solved optimally in polynomial time. But obtaining unconditional complexity lower bounds (even linear or near-linear bounds) remains the central challenge of complexity theory. In the complementary fields of complexity theory and of algorithm design and analysis, we ask questions such as "what is the best polynomial-time approximation ratio that can be achieved for Max2SAT?" The best negative results are derived from the beautiful development of PCP proofs. In terms of obtaining better approximation algorithms, we appeal to a variety of algorithmic techniques, including very basic techniques such as greedy algorithms, dynamic programming (with scaling), divide and conquer, and local search, as well as more technically involved methods such as LP relaxation with randomized rounding and semi-definite programming (see [34] and [30] for an elegant presentation of these randomized methods and of the concept of derandomization using conditional expectations). A more refined question might ask "what is the best approximation ratio (for a given problem such as Max2SAT) that can be obtained in (say) time O(n log n)?", where n is the length of the input in some standard representation of the problem. What algorithmic techniques should we consider if we are constrained to time O(n log n)?
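As a concrete illustration of the derandomization technique mentioned above (and not code from the chapter itself), here is a sketch of the method of conditional expectations applied to Max2SAT. A uniformly random assignment satisfies each 2-literal clause with probability 3/4, and fixing variables one at a time to whichever value has the higher conditional expectation yields a deterministic 3/4-approximation. Function names and the signed-integer literal encoding (as in DIMACS CNF: +v for x_v, -v for its negation) are illustrative choices; this naive version recomputes the expectation from scratch and so runs in O(n·m) time, though incremental bookkeeping would bring it down to linear time.

```python
def expected_satisfied(clauses, assignment):
    """Expected number of satisfied clauses when every variable
    not in `assignment` is set uniformly at random."""
    total = 0.0
    for clause in clauses:
        p_all_false = 1.0
        for lit in clause:
            var, want = abs(lit), lit > 0
            if var in assignment:
                # Literal is already determined: false with prob 0 or 1.
                p_all_false *= 0.0 if assignment[var] == want else 1.0
            else:
                # Unassigned variable: literal is false with prob 1/2.
                p_all_false *= 0.5
        total += 1.0 - p_all_false
    return total

def max2sat_conditional_expectation(num_vars, clauses):
    """Deterministic 3/4-approximation for Max2SAT via the
    method of conditional expectations: fix x_1, ..., x_n in turn,
    each time keeping the value with the larger conditional expectation."""
    assignment = {}
    for v in range(1, num_vars + 1):
        assignment[v] = True
        e_true = expected_satisfied(clauses, assignment)
        assignment[v] = False
        e_false = expected_satisfied(clauses, assignment)
        if e_true >= e_false:
            assignment[v] = True
    return assignment
```

Because the conditional expectation never decreases at any step, the final assignment satisfies at least 3/4 of the clauses, matching the guarantee of the randomized algorithm it derandomizes.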