Reputation: 133
I want to calculate the worst-case time complexity of this recursive function.
list is a list of m*n pieces.
matrix is an m*n matrix to fill with these pieces.
Backtrack(list, matrix):
    if(matrix is complete) // O(1)
        return true
    from the list of m*n pieces, make a list of candidatePieces to put in the matrix. // O(m*n)
    for every candidatePiece // Worst case of n*m calls
        Put that piece in the matrix // O(1)
        if(Backtrack(list, matrix) is true)
            return true
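To make this concrete, here is roughly the same thing as runnable Python (simplified: the matrix is flattened into a list and every unused piece counts as a candidate for the next empty cell):

def backtrack(pieces, matrix):
    if all(cell is not None for cell in matrix):             # matrix is complete
        return True
    cell = matrix.index(None)                                # next empty position
    used = set(p for p in matrix if p is not None)
    candidate_pieces = [p for p in pieces if p not in used]  # O(m*n)
    for piece in candidate_pieces:                           # worst case m*n iterations
        matrix[cell] = piece                                 # put that piece, O(1)
        if backtrack(pieces, matrix):
            return True
    # note: no undo and no explicit "return false" here, same as the pseudocode

pieces = list(range(6))             # m*n = 6 pieces
matrix = [None] * 6                 # flattened 2x3 matrix
print(backtrack(pieces, matrix))    # True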
What I guess is that the formula is something like:
T(n*m) = T(n*m - 1) + O(n*m) + O(1) = T(n*m - 1) + O(n*m)
Is this correct?
I can't use the master theorem; which other method can I use to get a closed formula?
Upvotes: 1
Views: 456
Reputation: 60207
So we have:
Backtrack(list, matrix):
    if(matrix is complete) // O(1)
        return true
    from the list of m*n pieces, make a list of candidatePieces to put in the matrix. // O(m*n)
    for every candidatePiece // Worst case of n*m calls
        Put that piece in the matrix // O(1)
        if(Backtrack(list, matrix) is true)
            return true
We assume worst-case behaviour, so matrix is complete is false for as long as possible. I'm going to assume this is the case for up to O(n·m) insertions.
So consider the for every candidatePiece loop. In order to get to the second item, you'll need Backtrack(list, matrix) is true to be false at least once. This requires termination other than by return true. As the only way to do that requires exhausting the loop, and that requires exhausting the loop (and so on), this can only happen if the list of candidatePieces is empty.
I assume pieces can only be used once. In that case, by the time the candidate list is empty there are O(n·m) things in the matrix. This means matrix is complete has returned true, so this actually doesn't let the loop run a second time. If pieces don't run out, this is also obviously true.
We can then simplify to
Backtrack(list, matrix):
    if(matrix is complete)
        return true
    if(candidatePiece is not empty)
        Put the first piece in the matrix
        if(Backtrack(list, matrix) is true)
            return true
Then we have
T(n·m) = T(n·m - 1) + O(n·m)
as you said, which leads to Ashalynd's answer.
Assume instead that matrix is complete can return false even when the matrix is full. Assume also that pieces run out, so the matrix can never be over-full. I also assume that pieces are removed as appropriate.
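In Python terms, the worst case under those assumptions behaves like this hypothetical sketch: the completeness test never cuts the search short, so the recursion ends up trying every ordering of the pieces.

def backtrack(pieces, matrix):
    if not pieces:                      # matrix is full, but "complete" still says no
        return False
    candidate_pieces = list(pieces)     # O(x) work when x pieces remain
    for piece in candidate_pieces:      # x recursive calls at this level
        matrix.append(piece)            # put that piece
        pieces.remove(piece)
        if backtrack(pieces, matrix):   # recurse with x - 1 pieces left
            return True
        pieces.append(piece)            # restore the piece
        matrix.pop()                    # and take it back out of the matrix
    return False

print(backtrack(list(range(4)), []))    # False, after exploring all 4! orderings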
The outermost loop (loop 0) costs O(n·m) and runs the inner loop (loop 1) n·m times.
The inner loop (loop 1) costs O(n·m - 1) and runs the inner inner loop (loop 2) n·m - 1 times.
...
Loop n·m - 1 costs 1 and runs loop n·m once.
Loop n·m costs 0 (the call overhead moves into loop n·m - 1) and terminates.
So, let T(n·m) be the cost for loop 0, and T(0) be the cost for loop n·m. Then
T(x) = O(x) + x · T(x - 1)
and WolframAlpha solves this to be O(Γ(x+1) + x·Γ(x, 1)).
By some research, this is equal to
O(x! + x · [ (x-1)! · e⁻¹ · eₓ₋₁(1) ])
= O(x!) + O(x! · e⁻¹ · eₓ₋₁(1))
= O(x!) + O(x! · e⁻¹ · ∑ 1/k! from k=0 to x-1)
= O(x!)
since ∑ 1/k! from k=0 to x-1 is bounded by e. So for x = n·m we are talking O((n·m)!) time. That's pretty poor :/.
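You can sanity-check that Θ(x!) behaviour numerically by unrolling the recurrence with all the hidden constants set to 1 (a quick hypothetical check, not part of the argument):

import math

def T(x):
    # T(x) = x + x*T(x-1), T(0) = 0: the recurrence above with constants taken as 1
    return 0 if x == 0 else x + x * T(x - 1)

for x in range(1, 11):
    print(x, T(x), math.factorial(x), round(T(x) / math.factorial(x), 4))

The ratio T(x)/x! climbs towards e ≈ 2.718, so T(x) really is Θ(x!).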
Upvotes: 0
Reputation: 12573
If you unwind your formula, you'll get
T(n*m) = T(n*m-1) + O(n*m) = T(n*m-2) + O(n*m-1) + O(n*m) = ...
= O(n*m) + O(n*m-1) + O(n*m-2) + ... + O(1) => ~ O(n^2*m^2)
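That last line is an arithmetic series, which you can confirm numerically by taking each O(k) term as plain k (a rough hypothetical check):

def S(nm):
    # 1 + 2 + ... + nm, i.e. the unrolled sum with each O(k) term taken as k
    return sum(range(1, nm + 1))

for nm in (4, 16, 64, 256):
    print(nm, S(nm), nm * nm)   # S(nm) grows like (nm)^2 / 2, hence O(n^2*m^2)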
But I wonder: is that algorithm complete? It does not seem to return false at all.
Upvotes: 1