Reputation: 373042
I was talking with a student the other day about the common complexity classes of algorithms, like O(n), O(n^k), O(n lg n), O(2^n), O(n!), etc. I was trying to come up with an example of a problem whose best known solution has super-exponential runtime, such as O(2^(2^n)), but which is still decidable (e.g. not the halting problem!). The only example I know of is satisfiability of Presburger arithmetic, which I don't think any intro CS students would really understand or be able to relate to.
My question is whether there is a well-known problem whose best known solution has runtime that is superexponential; at least ω(n!) or ω(n^n). I would really hope that there is some "reasonable" problem meeting this description, but I'm not aware of any.
Upvotes: 21
Views: 6447
Reputation: 2613
Two elementary results about converting regular expressions for complements of regular languages involve truly superexponential runtime. See this Computer Science question for more details.
There are various differing definitions of exponential time.

- B^x for some B > 1. This defines the E complexity class.
- B^g(x) where g is polynomial in x. Polynomials do not require any exponents to express, so this is still a single level of exponentiation; it defines the EXPTIME complexity class.
- n!, which asymptotically dominates any simple exponential function, is bounded above by n^n, which can be rewritten as e^(n ln n). Therefore, the expression n^n is still singly exponential, and factorial time still qualifies under EXPTIME.
- 2^(2^x) is a doubly exponential function; it cannot be written using only two font sizes (a base and a single level of exponent). This function is outside of EXPTIME, but is in 2-EXPTIME.
- The pattern continues with the k-EXPTIME time complexities.
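To get a feel for the gap between these classes, here is a quick, purely illustrative Python comparison (my own addition, assuming nothing beyond the definitions above):

```python
import math

# Illustrative only: n! and n^n stay within one level of exponentiation,
# while 2^(2^n) pulls away almost immediately.
for n in range(1, 7):
    print(f"n={n}  2^n={2**n}  n!={math.factorial(n)}  "
          f"n^n={n**n}  2^(2^n)={2**(2**n)}")
```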
If you are thinking about any algorithm, including those with little practical value, the Ackermann function and the Knuth up-arrow function come to mind. The Ackermann function was among the first functions shown to be non-primitive recursive, and it grows asymptotically faster than any primitive recursive function.
Computing the Knuth up-arrow function A(b, x, n) = b ↑^n x for a nontrivial base b and 'superexponent' x requires 'super-primitive-recursive' time, on the order of the (n + 2)-th hyperoperation.
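A rough sketch of the recursion behind the up-arrow operator (my own code; I reorder the arguments as up_arrow(b, n, x) for b ↑^n x). Even tiny arguments explode:

```python
def up_arrow(b, n, x):
    """Knuth's up-arrow b ^^...^ x with n arrows: one arrow is exponentiation,
    and each extra arrow iterates the previous operator (the next hyperoperation)."""
    if n == 1:
        return b ** x
    if x == 0:
        return 1
    return up_arrow(b, n - 1, up_arrow(b, n, x - 1))

print(up_arrow(2, 2, 4))  # 2 double-arrow 4 = 2^(2^(2^2)) = 65536
print(up_arrow(3, 2, 3))  # 3 double-arrow 3 = 3^(3^3) = 7625597484987
# 2 double-arrow 5 = 2^65536 already has 19,729 digits, and 3 triple-arrow 3
# (a power tower of 3s of height 7625597484987) is hopeless.
```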
A regular expression R (not a POSIX regex) is a template that specifies a match pattern in text. Let L be the set of all text strings of the form described by R. Regular expressions are used to determine, in constant space, whether an input string is valid; for example, the regular expression ^[\w-\.]+@([\w-]+\.)+[\w-]{2,4}$ describes a valid email address.
Kleene's theorem: It is well known that for every regular language L over some finite alphabet Σ, there exist a regular expression R and a deterministic finite-state machine M that produce exactly the language L.
By Kleene's theorem, the complement ∁L of a regular language L is another regular language. The DFA M̄ for ∁L is identical to the DFA M for L, except that the accepting and non-accepting states are all swapped. Derive the regular expression for ∁L without using the negation bar.
Answer: This problem is surprisingly nasty. The regular expression for the complement language can be doubly exponential in the length of the regular expression for the base language. A proof is given on page 8 of an arXiv paper by Wouter Gelade and Frank Neven.
Definition: A generalized regular expression is a regular expression that also allows the complementation operator, so complemented subexpressions may precede or follow ordinary regex subexpressions.
Writing ¬ for the complementation operator, an example is ^¬((0|1)*00(0|1)*)[\w-\.]+@([\w-]+\.)+[\w-]{2,4}$, which describes all strings that can be partitioned into a part that does not contain two zeroes in a row and a part that represents an email address.
Generalized regular expressions have exactly the same expressive power as standard regular expressions without complementation, but they can succinctly represent a regular language containing a part that is described as the complement of a language with a short regular expression.
Your task is to convert the generalized regular expression into a standard regular expression.
Answer: You can recursively convert each complemented subexpression into an NFA, convert that NFA into a DFA, and complement the DFA. However, the running time of this algorithm is nonelementary, 2 ↑↑ Θ(n), since each layer of recursive negation exponentially increases the number of states of the DFA. In fact, the problem remains nonelementary even if Kleene closure is disallowed. (source: Algorithms by Jeff Erickson)
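As a hedged illustration of where the blowup comes from (the toy representation and names below are my own, not Erickson's), here is the subset construction plus DFA complementation in Python; a k-state NFA can determinize into as many as 2^k states, and that cost recurs at every nested negation:

```python
from itertools import chain

def determinize(alphabet, delta, start, accepting):
    """Subset construction: NFA -> DFA.
    `delta` maps (state, symbol) -> set of successor states.
    The DFA may have up to 2^k states for a k-state NFA, which is the
    exponential cost paid at every layer of complementation."""
    start_set = frozenset([start])
    dfa_delta, seen, worklist = {}, {start_set}, [start_set]
    while worklist:
        S = worklist.pop()
        for a in alphabet:
            T = frozenset(chain.from_iterable(delta.get((q, a), ()) for q in S))
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                worklist.append(T)
    dfa_accepting = {S for S in seen if S & accepting}
    return seen, dfa_delta, start_set, dfa_accepting

def complement_dfa(dfa_states, dfa_accepting):
    """Complement a complete DFA by swapping accepting and non-accepting states."""
    return dfa_states - dfa_accepting

# Toy NFA over {0,1} for (0|1)*00(0|1)*: p loops, p -0-> q, q -0-> r, r accepts and loops.
delta = {("p", "0"): {"p", "q"}, ("p", "1"): {"p"},
         ("q", "0"): {"r"},
         ("r", "0"): {"r"}, ("r", "1"): {"r"}}
states, moves, start, acc = determinize("01", delta, "p", {"r"})
print(len(states), "DFA states; complement accepts:", complement_dfa(states, acc))
```

Each nested negation forces another determinization, and those exponentials stack into the 2 ↑↑ Θ(n) tower; converting the final DFA back into a regular expression (e.g. by state elimination) is itself another exponential step.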
Upvotes: 1
Reputation: 51296
Maximum Parsimony is the problem of finding an evolutionary tree connecting n DNA sequences (representing species) that requires the fewest single-nucleotide mutations. The n given sequences are constrained to appear at the leaves; the tree topology and the sequences at internal nodes are what we get to choose.
In more CS terms: We are given a bunch of length-k strings that must appear at the leaves of some tree, and we have to choose a tree, plus a length-k string for each internal node in the tree, so as to minimise the sum of Hamming distances across all edges.
When a fixed tree is also given, the optimal assignment of sequences to internal nodes can be determined very efficiently using the Fitch algorithm (sketched below). But in the usual case, a tree is not given (i.e. we are asked to find the optimal tree), and this makes the problem NP-hard; the straightforward exact approach is to try every tree. Even though an evolutionary tree has a root (representing the hypothetical ancestor), we only need to consider distinct unrooted trees, since the minimum number of mutations required is not affected by the position of the root. For n species there are 3 * 5 * 7 * ... * (2n-5) leaf-labelled unrooted binary trees. (There is just one such tree with 3 species, which has a single internal vertex and 3 edges; the 4th species can be inserted at any of the 3 edges to produce a distinct 5-edge tree; the 5th species can be inserted at any of these 5 edges, and so on -- this process generates all trees exactly once.) This is sometimes written (2n-5)!!, with !! meaning "double factorial".
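A minimal sketch of Fitch's algorithm for the fixed-tree case (the tree encoding and names are my own; the tree is rooted arbitrarily, which, as noted above, does not change the score):

```python
def fitch_site_cost(tree, leaf_seqs, node, site):
    """Fitch's rule at one site, post-order: take the intersection of the
    children's state sets if it is non-empty, otherwise take the union and
    count one mutation. Returns (state set, mutation count)."""
    if node in leaf_seqs:                      # leaf: a single observed nucleotide
        return {leaf_seqs[node][site]}, 0
    left, right = tree[node]
    a, ca = fitch_site_cost(tree, leaf_seqs, left, site)
    b, cb = fitch_site_cost(tree, leaf_seqs, right, site)
    if a & b:
        return a & b, ca + cb                  # children agree: no new mutation
    return a | b, ca + cb + 1                  # children disagree: one mutation

def fitch_cost(tree, leaf_seqs, root, k):
    """Minimum total mutations for a fixed rooted binary tree, summed over k sites."""
    return sum(fitch_site_cost(tree, leaf_seqs, root, site)[1] for site in range(k))

# Tiny example: 4 species with length-3 sequences on the rooted tree ((A,B),(C,D)).
leaf_seqs = {"A": "ACT", "B": "ACG", "C": "GCT", "D": "GTT"}
tree = {"u": ("A", "B"), "v": ("C", "D"), "r": ("u", "v")}
print(fitch_cost(tree, leaf_seqs, "r", 3))     # prints 3
```

Each site is scored with one post-order pass, so the fixed-tree case is fast; the expense lies entirely in the search over topologies.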
In practice, branch and bound is used, and on most real datasets this manages to avoid evaluating most trees. But highly "non-treelike" random data requires all, or almost all (2n-5)!! trees to be examined -- since in this case many trees have nearly equal minimum mutation counts.
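To see how quickly the number of candidate topologies grows, a small illustrative helper (my own) for (2n-5)!!:

```python
def num_unrooted_trees(n):
    """(2n-5)!! = 3 * 5 * ... * (2n-5): leaf-labelled unrooted binary trees on n >= 3 leaves."""
    count = 1
    for k in range(3, 2 * n - 4, 2):
        count *= k
    return count

for n in (5, 10, 20, 50):
    print(n, num_unrooted_trees(n))   # 15, ~2.0e6, ~2.2e20, ~2.8e74
```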
Upvotes: 9
Reputation: 252
This is not a practical everyday problem, but it's a way to construct relatively straightforward problems of increasing complexity.
This approach gives us a decidable super-F problem for every time-constructible function F: find the first string of length n (some large constant) that is incompressible under time-bound F.
Upvotes: 1
Reputation: 22565
Showing all permutations of a string of length n takes n! time; brute-force search for a Hamiltonian cycle is n!; minimum graph coloring, ....
Edit: even faster-growing are Ackermann-like functions. In fact, the Ackermann function is not bounded by any primitive recursive function.
A(x,y) = y+1 (if x = 0)
A(x,y) = A(x-1,1) (if y=0)
A(x,y) = A(x-1, A(x,y-1)) otherwise.
from wiki:
A(4,3) = 2^(2^65536) − 3
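A direct, deliberately naive Python transcription of the recurrence above (memoized so the small cases finish; only the very smallest arguments are feasible):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ackermann(x, y):
    """Ackermann function exactly as defined above; it outgrows every
    primitive recursive function, so even A(4, 2) is astronomically large."""
    if x == 0:
        return y + 1
    if y == 0:
        return ackermann(x - 1, 1)
    return ackermann(x - 1, ackermann(x, y - 1))

print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61
```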
Upvotes: 7
Reputation: 578
Do algorithms to compute real numbers to a certain precision count? The formula for the area of the Mandelbrot set converges extremely slowly; 10^118 terms for two digits, 10^1181 terms for three.
Upvotes: 5