leontp587

Reputation: 791

permutation calculation runtime complexity with some changes

I have a question about the runtime complexity of the standard permutation finding algorithm. Consider a list A, find (and print) all permutations of its elements.

Here's my recursive implementation, where printperm() prints every permutation:

def printperm(A, p):
    if len(A) == len(p):
        print("".join(p))
        return

    for i in range(0, len(A)):
        if A[i] != 0:
            tmp = A[i] # remember ith position
            A[i] = 0 # mark character i as used
            p.append(tmp) # select character i for this permutation
            printperm(A, p) # solve the smaller subproblem (one more character is marked as used)
            p.pop() # done with selecting character i for this permutation
            A[i] = tmp # restore character i in preparation for selecting the next available character


printperm(['a', 'b', 'c', 'd'], [])

The runtime complexity appears to be O(n!) where n is the size of A. This is because at each recursion level, the number of available choices decreases by 1. So the top recursion level makes n choices, the next level makes n-1 choices, the next n-2, and so on. The total number of permutations generated is therefore n*(n-1)*(n-2)*...*1 = n!.
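To sanity-check that count, here is a hypothetical instrumented variant of the same recursion (names like count_perms and the counter list are mine) where the print is replaced by a counter, confirming that exactly n! complete permutations are produced:

```python
from math import factorial

def count_perms(A, p, counter):
    # Same recursion as printperm, but the base case counts instead of printing.
    if len(A) == len(p):
        counter[0] += 1
        return
    for i in range(len(A)):
        if A[i] != 0:
            tmp = A[i]
            A[i] = 0          # mark position i as used
            p.append(tmp)
            count_perms(A, p, counter)
            p.pop()
            A[i] = tmp        # restore position i

# 0 is the "used" marker, so use nonzero elements
for n in range(1, 7):
    counter = [0]
    count_perms(list(range(1, n + 1)), [], counter)
    assert counter[0] == factorial(n)
```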

Now the problem is the print("".join(p)) statement. Every time this line runs, it iterates through the entire list, which is O(n) work. There are n! permutations of a list of size n, so the total work done by the print("".join(p)) statement is n!*n.

Does the presence of the print("".join(p)) statement then increase the runtime complexity to O(n * n!)? But this doesn't seem right, because I'm not running the print statement on every recursive call. Where does my logic for getting O(n * n!) break down?
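One way to make the n * n! figure concrete is to count the characters that "".join(p) touches. A hypothetical instrumented version (the stats dictionary is my own addition):

```python
def count_join_work(A, p, stats):
    if len(A) == len(p):
        # "".join(p) walks all len(p) == n characters at each leaf.
        stats['join_chars'] += len(p)
        return
    for i in range(len(A)):
        if A[i] != 0:
            tmp = A[i]
            A[i] = 0
            p.append(tmp)
            count_join_work(A, p, stats)
            p.pop()
            A[i] = tmp

stats = {'join_chars': 0}
count_join_work(['a', 'b', 'c', 'd'], [], stats)
assert stats['join_chars'] == 4 * 24  # n characters per permutation, n! permutations
```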

Upvotes: 1

Views: 321

Answers (2)

user13668882

Reputation: 1

Permutations can be generated from combinations using a divide-and-conquer technique, which can be faster in practice. However, the generated output is not in order.

Suppose we want to generate permutations for n = 8, i.e. {0,1,2,3,4,5,6,7}. Below are some instances of permutations:

01234567
01234576
03215764
30126754

Note that the first 4 items of each arrangement come from the same set {0,1,2,3} and the last 4 items come from the same set {4,5,6,7}. This means that, given the sets A = {0,1,2,3} and B = {4,5,6,7}, there are 4! permutations of A and 4! permutations of B.

It follows that taking one permutation of A followed by one permutation of B gives a valid permutation of the universal set {0,1,2,3,4,5,6,7}.

Given sets A and B such that A union B is the universal set and A intersection B is empty, the total number of permutations obtainable this way is 2 * |A|! * |B|! (the factor of 2 comes from swapping the order of A and B). In this case we get 2 * 4! * 4! = 1152.

So to generate all permutations for n = 8, we simply need to generate all pairs of sets A and B that satisfy the conditions explained earlier. Generating combinations of 8 elements taken 4 at a time in lexicographical order is pretty fast, and we only need the first half of those combinations: each one becomes set A, and the corresponding set B is found as its complement using a set operation. For each pair A and B we apply the divide-and-conquer step.

A and B don't need to be of equal sizes, and the trick can be applied recursively for higher values of n.

Instead of using 8! = 40320 steps, this method uses about 8!/(2*4!*4!) * (4! + 4! + 1) = 35 * 49 = 1715 steps. I ignore the cross-product operation that joins the permutations of set A and set B because it is a straightforward operation.

In a real implementation this method runs approximately 20 times faster than a naive algorithm and can take advantage of parallelism. For example, different pairs of sets A and B can be grouped and processed on different threads.
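A minimal sketch of the split-and-combine idea using itertools (my own simplified version: it enumerates every half-sized subset directly rather than halving the combination list and swapping A and B, so the factor-of-2 trick is omitted):

```python
from itertools import combinations, permutations

def split_perms(universe):
    half = len(universe) // 2
    for A in combinations(universe, half):            # choose the first-half set
        B = tuple(x for x in universe if x not in A)  # complement set
        for pa in permutations(A):                    # |A|! permutations of A
            for pb in permutations(B):                # |B|! permutations of B
                yield pa + pb                         # cross product joins them

result = list(split_perms((0, 1, 2, 3)))
assert len(result) == 24          # 4! permutations in total
assert len(set(result)) == 24     # each one generated exactly once
```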

Upvotes: 0

Tim Peters

Reputation: 70602

You're basically right! The possible confusion comes in your "... and the next level is n-2, and so on". "And so on" is glossing over that at the very bottom level of the recursion, you're not doing O(1) work, but rather O(n) work to do the print. So the total complexity is proportional to

n * (n-1) * (n-2) ... * 2 * n

which equals n! * n. Note that the .join() doesn't really matter to this. It would also take O(n) work to simply print(p).

EDIT: But that's not really right, for a different reason. At all levels above the print level, you're doing

for i in range(0, len(A)):

and len(A) doesn't change. So every level is doing O(n) work. To be sure, the deeper the level the more zeroes there are in A, and so the less work the loop does, but it's nevertheless still O(n) merely to iterate over range(n) at all.
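Counting loop iterations, including the ones skipped because A[i] is already 0, makes this concrete. A hypothetical instrumented version (the stats dictionary is mine): for n = 4 there are 1 + 4 + 12 + 24 = 41 internal calls, each iterating over all 4 slots.

```python
def count_loop_iters(A, p, stats):
    if len(A) == len(p):
        return                    # leaf: no loop runs here
    for i in range(len(A)):
        stats['iters'] += 1       # counted even when A[i] == 0 and we skip
        if A[i] != 0:
            tmp = A[i]
            A[i] = 0
            p.append(tmp)
            count_loop_iters(A, p, stats)
            p.pop()
            A[i] = tmp

stats = {'iters': 0}
count_loop_iters(['a', 'b', 'c', 'd'], [], stats)
assert stats['iters'] == 41 * 4   # every internal call pays O(n) for the loop
```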

Upvotes: 1
