Reputation: 863
How can I find the number of ways a sequence of numbers (which may contain repeated items) can be rearranged so that no number ends up in a position that was occupied by an equal number?
For example, [0,0,0,1,1,1] can be rearranged in only one way, namely [1,1,1,0,0,0].
[0,0,0,1,1,1,1] cannot be rearranged in any way.
[1,2,2,14] can be rearranged in 2 ways, i.e. [2,1,14,2] and [2,14,1,2].
[1,1,2,2,14] can be rearranged in 4 ways, i.e. [14,2,1,1,2], [2,2,14,1,1], [2,2,1,14,1], [2,14,1,1,2].
A math solution is available, but I was hoping for an easier way using programming concepts. The formula looks like this (apologies, I could not post it in the correct format):
$$\int_0^\infty L_{n_1}(x)\cdots L_{n_r}(x)\,e^{-x}\,dx$$

where $r$ is the number of distinct items, $n_i$ is the number of occurrences of item $i$, and $L_k$ is the $k$-th Laguerre polynomial. For instance, for 1,1,2,2,14 we have $r=3$, $n_1=2$, $n_2=2$, $n_3=1$, so up to a sign the number of rearrangements is

$$\int_0^\infty L_2(x)\,L_2(x)\,L_1(x)\,e^{-x}\,dx
= \int_0^\infty \tfrac12(x^2-4x+2)\cdot\tfrac12(x^2-4x+2)\cdot(1-x)\,e^{-x}\,dx$$
$$= \int_0^\infty \left(-\tfrac14 x^5 + \tfrac94 x^4 - 7x^3 + 9x^2 - 5x + 1\right)e^{-x}\,dx = -4$$
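For what it's worth, the formula above can be evaluated exactly with only the standard library, using the Laguerre recurrence $(k+1)L_{k+1} = (2k+1-x)L_k - kL_{k-1}$ and the identity $\int_0^\infty x^k e^{-x}\,dx = k!$. This is a sketch; the function names are mine:

```python
from collections import Counter
from fractions import Fraction
from math import factorial

def laguerre_coeffs(n):
    """Coefficients (constant term first) of the n-th Laguerre polynomial,
    built from the recurrence (k+1) L_{k+1} = (2k+1-x) L_k - k L_{k-1}."""
    prev, cur = [Fraction(1)], [Fraction(1), Fraction(-1)]  # L0 = 1, L1 = 1 - x
    if n == 0:
        return prev
    for k in range(1, n):
        nxt = [(2 * k + 1) * c for c in cur] + [Fraction(0)]  # (2k+1) * L_k
        for i, c in enumerate(cur):   # subtract x * L_k (shift degree up by one)
            nxt[i + 1] -= c
        for i, c in enumerate(prev):  # subtract k * L_{k-1}
            nxt[i] -= k * c
        prev, cur = cur, [c / (k + 1) for c in nxt]
    return cur

def poly_mul(a, b):
    """Multiply two coefficient lists (constant term first)."""
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            out[i + j] += ca * cb
    return out

def count_rearrangements(seq):
    """|integral_0^inf L_{n_1}(x)...L_{n_r}(x) e^{-x} dx|, evaluated exactly
    term by term via integral_0^inf x^k e^{-x} dx = k!."""
    poly = [Fraction(1)]
    for n in Counter(seq).values():
        poly = poly_mul(poly, laguerre_coeffs(n))
    return abs(int(sum(c * factorial(k) for k, c in enumerate(poly))))

print(count_rearrangements([1, 1, 2, 2, 14]))     # 4
print(count_rearrangements([0, 0, 0, 1, 1, 1, 1]))  # 0
```

Exact `Fraction` arithmetic avoids the floating-point error a numeric quadrature would introduce.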
But I was wondering whether there are any Python libraries that could generate all the permutations we want.
Upvotes: 1
Views: 1351
Reputation: 56654
More involved, but should be quite a bit faster on long input sequences:
from collections import Counter

def _rearrange(orig, rem):
    # rem is a list of (value, remaining count) pairs still available to place
    if len(orig) == 1:
        v = rem[0][0]
        if v != orig[0]:
            yield [v]
    else:
        o, orig = orig[0], orig[1:]
        for i, (v, c) in enumerate(rem):
            if v != o:
                rest = rem[:i] + ([(v, c - 1)] if c > 1 else []) + rem[i + 1:]
                for r in _rearrange(orig, rest):
                    yield [v] + r

def rearrange(orig):
    if len(orig) > 1:
        return list(_rearrange(orig, list(Counter(orig).items())))
    else:
        return []

def main():
    print(rearrange([0, 0, 0, 1, 1, 1]))
    print(rearrange([1, 1, 2, 2, 14]))

if __name__ == "__main__":
    main()
results in
[[1, 1, 1, 0, 0, 0]]
[[2, 2, 1, 14, 1], [2, 2, 14, 1, 1], [2, 14, 1, 1, 2], [14, 2, 1, 1, 2]]
Edit: Comparing the functions' runtimes, I get:
(Plot omitted. Blue for Tim's, green for mine; dots are data, lines are least-squares best fits on the log of the data; x axis is sequence length, y axis is runtime in seconds, log scale. Data points are the best of ten runs on each of twenty random sequences for each sequence length.)
Conclusions:
Tim's function's runtime grows as 7.44**n; mine grows as 3.69**n.
On a ten-item sequence, his function averages 53.9 s vs. 0.93 s for mine; the difference slightly more than doubles for each extra item in the sequence.
His function's runtime is much more consistent; mine is highly variable, depending on the number of repeated items in the sequence.
A straight-line fit does not look like the best predictor; runtimes should be a function of n!, not k**n. I am not sure how to model this, though. Suggestions?
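To experiment with runtime models, one first needs repeatable measurements. A minimal harness along the lines of the methodology described above might look like this (a sketch; `benchmark`, `trials`, and `repeats` are names I've chosen):

```python
import random
import timeit

def benchmark(fn, length, trials=20, repeats=10):
    """Best-of-`repeats` wall time of fn on each of `trials` random
    sequences of the given length; returns the list of best times."""
    times = []
    for _ in range(trials):
        # random sequence with a realistic mix of repeated items
        seq = [random.randrange(length) for _ in range(length)]
        times.append(min(timeit.repeat(lambda: fn(seq), number=1, repeat=repeats)))
    return times

# e.g., best-of-ten times for twenty random length-8 sequences:
# times = benchmark(rearrange, 8)
```

Fitting log(time) against both n and log(n!) over these samples would show which model predicts better.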
Upvotes: 3
Reputation: 11173
Have you tried itertools.permutations?
http://docs.python.org/library/itertools.html#itertools.permutations
import itertools

def my_combos(val):
    results = []
    for c in itertools.permutations(val, len(val)):
        # keep only permutations with no position equal to the original
        if all(x != y for x, y in zip(c, val)):
            results.append(c)
    return list(set(results))

print(my_combos([0, 0, 0, 1, 1, 1]))
print(my_combos([1, 1, 2, 2, 14]))
Yields:
[(1, 1, 1, 0, 0, 0)]
[(2, 14, 1, 1, 2), (2, 2, 1, 14, 1), (14, 2, 1, 1, 2), (2, 2, 14, 1, 1)]
Upvotes: 3
Reputation: 336178
Brute-force using itertools:
import itertools

def arrangements(arr):
    p = itertools.permutations(arr)
    return set(item for item in p if all(x != y for x, y in zip(item, arr)))
Result:
>>> arrangements([0,0,0,1,1,1])
{(1, 1, 1, 0, 0, 0)}
>>> arrangements([0,0,0,1,1,1,1])
set()
>>> arrangements([1,2,2,14])
{(2, 14, 1, 2), (2, 1, 14, 2)}
>>> arrangements([1,1,2,2,14])
{(2, 14, 1, 1, 2), (2, 2, 1, 14, 1), (14, 2, 1, 1, 2), (2, 2, 14, 1, 1)}
Upvotes: 1