SemperCallide

Reputation: 2080

How does iterative deepening affect time complexity?

I have a tree traversal algorithm which generally operates in O(bᵐ), where b is the branching factor and m is the max depth.

Using iterative deepening, this algorithm is run over and over again, m times at increasing depth:

O(bᵐ) = b⁰ + b¹ + b² + ... + bᵐ
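For concreteness, here is a minimal sketch of that pattern in Python (the names depth_limited_search and iterative_deepening, and the nested-list tree representation, are illustrative assumptions, not from the question):

# Minimal sketch: a tree node is a list of child nodes; a leaf is an empty list.

def depth_limited_search(node, limit):
    """Visit every node within `limit` edges of `node`; return the visit count."""
    if limit < 0:
        return 0
    visits = 1  # count this node
    for child in node:
        visits += depth_limited_search(child, limit - 1)
    return visits

def iterative_deepening(root, max_depth):
    """Re-run the depth-limited search at depths 0, 1, ..., max_depth."""
    return sum(depth_limited_search(root, depth) for depth in range(max_depth + 1))

Each run at depth d touches on the order of bᵈ nodes, so the total work across all runs is the sum written above.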

Based on my limited understanding of time complexity, we keep only the largest term because it dominates as the input grows, and that term would be bᵐ, where m is the max depth reached.

So, with that knowledge, I would conclude that the iterative deepening algorithm also runs in O(bᵐ).

However I have trouble understanding, from a logical standpoint, how the tree traversal could have the exact same time complexity whether the algorithm is run once at depth m, or m times up until depth m.

bᵐ is inherently less than the sum b⁰ + b¹ + ... + bᵐ. Therefore, shouldn't the time complexity of iterative deepening be higher?

Or is there something I don't understand?

Upvotes: 3

Views: 1708

Answers (2)

Simeon Visser

Reputation: 122516

Basically you're asking why the following two functions have the same time complexity in terms of big O (as they're both O(n^m)):

  • n^0 + n^1 + n^2 + ... + n^m
  • n^m

The reason is that, for large enough values of n and m, the term n^m dwarfs all the other terms of these functions. As the input grows, the runtime of the function as a whole will be determined by n^m.

Even if you come up with a function that takes n^m + 1000000000000 steps, at some point n^m will still be the deciding term in how long it takes to run. In other words, the runtime of both functions is bounded by n^m times some constant.

In your example, running tree traversal once at depth m or running it m times up until depth m have the same time complexity because, as the tree grows, the runtime of running at depth m dwarfs all the other runs. Looking at how long it takes to run at depth m is basically all that is needed to find a function that bounds the runtime of both tasks.
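To make the "dwarfs all the other terms" point concrete, here is a quick numeric check (a sketch only; n = 3 is an arbitrary base chosen for illustration): the full sum never exceeds a fixed multiple of the top term.

n = 3
for m in range(1, 11):
    top = n ** m                               # the single largest term
    total = sum(n ** k for k in range(m + 1))  # the whole sum
    print(m, total / top)                      # ratio stays below n / (n - 1) = 1.5

The ratio settles toward a constant, which is exactly why a single constant factor is enough to bound the whole sum by the top term.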

To give another example, we can think of two functions that are both O(n):

  • f1(n) = 1000000000n + 5
  • f2(n) = 3n

It may seem that f1 does more work as n grows, so it's not "fair" to say both are O(n). But their time complexity is bounded by a linear function: for some (large) constant c, the runtime of both functions stays below f(n) = cn, and thus the time complexity as a whole is O(n).
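As a quick sanity check (a sketch; c = 1000000005 is just one constant that happens to work here), the bound cn sits above both functions for every n ≥ 1:

c = 1_000_000_005
f1 = lambda n: 1_000_000_000 * n + 5
f2 = lambda n: 3 * n
assert all(f1(n) <= c * n and f2(n) <= c * n for n in range(1, 10_000))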

Upvotes: 3

anatolyg

Reputation: 28300

"Exact same" time complexity is not the same as "taking the exact time". Saying "Exact same time complexity" is like saying "growing with the same speed, up to a constant factor", which is a rough estimation.

For example, if b = 3, you are comparing these two sequences of numbers:

3^m,             or (1, 3,  9, 27,  81, ...)
3^0+3^1+...+3^m, or (1, 4, 13, 40, 121, ...)

Let's denote the first number D(m) (for "DFS") and the second one I(m) (for "iterative deepening"). By the geometric series formula, I(m) = (3^(m+1) - 1) / 2, which can be rewritten as

I(m) = 3/2 * D(m) - 1/2

It is certainly true that I(m) is greater than D(m), but they have the same "Order" of growth. This idea is denoted by I(m) = O(D(m)).

Mathematically, I(m) = O(D(m)) because there exists a constant k such that I(m) < k * D(m) for all m; here k = 3/2 works.
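Here is a small check of both the closed form and the bound, as a sketch (D and I are the names used above, with b = 3):

def D(m):
    # cost of a single depth-m search: 3^m
    return 3 ** m

def I(m):
    # iterative deepening: 3^0 + 3^1 + ... + 3^m
    return sum(3 ** k for k in range(m + 1))

for m in range(12):
    assert I(m) == (3 * D(m) - 1) // 2   # I(m) = 3/2 * D(m) - 1/2
    assert 2 * I(m) < 3 * D(m)           # so I(m) < k * D(m) with k = 3/2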

Upvotes: 2
