Reputation: 81
Given a starting position k and two jump sizes d1 and d2, our task is to find the minimum number of jumps needed to reach x, if it is possible.
Input : k = 10, d1 = 4, d2 = 6 and x = 8
Output : 2
1st step 10 + d1 = 14
2nd step 14 - d2 = 8
Input : k = 10, d1 = 4, d2 = 6 and x = 9
Output : -1
-1 indicates it is not possible to reach x.
My recursive code for the above problem is:
from itertools import product
def position(k, d1, d2, p, count):
    count += 1
    arr = [[k], ['+', '-'], [d1, d2]]
    x = list(product(*arr))
    for i in range(len(x)):
        if x[i][1] == '+':
            x[i] = x[i][0] + x[i][2]
        else:
            x[i] = x[i][0] - x[i][2]
        if x[i] == p:
            return count
    for i in range(2):
        position(x[i], d1, d2, p, count)

if __name__ == "__main__":
    y = [int(i) for i in input().split()]
    k = y[0]
    d1 = y[1]
    d2 = y[2]
    p = y[3]
    print(position(k, d1, d2, p, 0))
For input: 10 4 6 8
x = [(10,'+',4), (10,'+',6), (10,'-',4), (10,'-',6)]
After that, x = [14, 16, 6, 4] and count = 1. Each element of x is checked for equality with 8. Then position() is called recursively with the first two elements of x, 14 and 16, as its first argument.
For 14:
x = [(14,'+',4), (14,'+',6), (14,'-',4), (14,'-',6)]
then x = [18, 20, 10, 8]. Now count becomes 2 and 8 is found at the 4th index.
But the problem is that my code discards the result of the recursive call position(14, 4, 6, 8), and None is printed on the console. Also, I'm not able to figure out how to make position() return -1 if it is not possible to reach x.
Upvotes: 3
Views: 400
Reputation: 18645
You can formulate this as an integer linear program and then solve it using a tool like PuLP or Pyomo.
The idea is that you need to select nonnegative integers up1, down1, up2 and down2 such that k + up1*d1 - down1*d1 + up2*d2 - down2*d2 == p. Further, you want to select values for these variables that minimize the total number of steps, up1 + down1 + up2 + down2.
Here's an example using PuLP (based loosely on this tutorial):
from pulp import *

def position(k, d1, d2, p):
    prob = LpProblem("Minimum Steps", LpMinimize)
    # define decision variables (values to be chosen by solver)
    up1 = LpVariable('up1', lowBound=0, cat='Integer')
    down1 = LpVariable('down1', lowBound=0, cat='Integer')
    up2 = LpVariable('up2', lowBound=0, cat='Integer')
    down2 = LpVariable('down2', lowBound=0, cat='Integer')
    # define objective function (expression to minimize)
    num_steps = up1 + down1 + up2 + down2
    prob += num_steps
    # define main constraint (rule to enforce)
    prob += (k + up1*d1 - down1*d1 + up2*d2 - down2*d2 == p)
    # solve with a time limit, because otherwise CBC may search forever
    prob.solve(PULP_CBC_CMD(maxSeconds=2))
    # if you have CPLEX installed, it can detect infeasibility immediately
    # prob.solve(CPLEX_CMD())
    status = LpStatus[prob.status]
    solution = [str(k)]
    if status == 'Optimal':
        # report steps
        steps = [
            (1, up1, 'd1'), (-1, down1, 'd1'),
            (1, up2, 'd2'), (-1, down2, 'd2')
        ]
        for (sign, v, step) in steps:
            if v.varValue > 0:
                solution.append('-' if sign < 0 else '+')
                solution.append('{} * {}'.format(int(v.varValue), step))
        solution.extend(['=', str(p)])
        print(' '.join(solution))
        return int(num_steps.value())
    elif status in {'Infeasible', 'Not Solved', 'Undefined'}:
        # infeasible or timed out (probably infeasible)
        return -1
    else:
        raise RuntimeError("Problem status was {}".format(status))
print(position(10, 4, 6, 8))
# 10 + 1 * d1 - 1 * d2 = 8
# 2
print(position(10, 4, 6, 9))
# -1
print(position(10, 171, 53, 8))
# 10 - 9 * d1 + 29 * d2 = 8
# 38
print(position(10, 3, 4, 1000000001))
# 10 + 1 * d1 + 250000000 * d2 = 1000000001
# 250000001
Integer linear program solvers use a branch-and-bound technique. They first find the best solution with fractional values for the variables (number of steps), then successively divide the allowed region into smaller sections with integer edges, and eventually stop when they find the best solution at an edge.
For small values of k, d1, d2 and p, this should find a solution very quickly (as should a brute-force recursive solution). But for infeasible problems, the naive approach can go on forever (as does a brute-force recursive solution). The commercial solvers (e.g., CPLEX or Gurobi) can identify this infeasibility and quickly return, but the open-source solvers (CBC or GLPK) may run for a very long time or forever (I waited a few minutes and gave up).
I resolved this problem here by using a time limit on the solver and assuming the problem was infeasible if no solution was found before the limit. You could do something similar in your recursive approach, e.g., set a limit on the number of steps to take. Even if you need thousands or millions of steps, the linear programming approach tends to do OK (see last example), because it homes in close to the right solution quickly and then refines to get an exact match. I would guess that the recursive approach will take too long in this case. But you may be able to devise cases where the linear programming approach fails to find a solution within the time limit, even if a solution exists.
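For example, a breadth-first variant of your search handles both of your issues at once: it returns the minimum jump count the first time x is reached, and returns -1 when a step cap is exhausted. (The max_steps cap is my addition; without some limit, the search is unbounded for unreachable targets.)

```python
from collections import deque

def position(k, d1, d2, p, max_steps=1000):
    # Breadth-first search: the first time we reach p is guaranteed
    # to use the minimum number of jumps.
    seen = {k}
    queue = deque([(k, 0)])
    while queue:
        pos, count = queue.popleft()
        if pos == p:
            return count
        if count == max_steps:
            continue  # don't expand past the step cap
        for nxt in (pos + d1, pos - d1, pos + d2, pos - d2):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, count + 1))
    return -1  # not reachable within max_steps jumps

print(position(10, 4, 6, 8))  # 2
print(position(10, 4, 6, 9))  # -1
```

The `seen` set also fixes the issue in the recursive version where the same positions are revisited over and over.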
... After further checking, I've found that the CBC solver does not perform well when the problem is numerically ill-defined or requires a lot of branches. For example, this times out, even though an answer is possible: position(10, 100000, 100001, 10000000). And this reports an invalid answer: position(10, 10000000, 10000001, 10000000) (it's happy to use the almost-true answer 10 + 1 * d2 = 10000000). On the other hand, these cases may also be very hard to solve via a brute-force method. So maybe thinking deeper into the nature of the problem will help. (I'll try to do that later, but I've already been nerd-sniped for too long today!)
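One observation along those lines (my addition, a standard number-theoretic fact rather than part of the solver approach): every sequence of jumps changes the position by a*d1 + b*d2 for some integers a and b, and by Bezout's identity those combinations are exactly the multiples of gcd(d1, d2). So p is reachable from k if and only if gcd(d1, d2) divides p - k, and a constant-time pre-check could return -1 immediately instead of waiting for a solver time limit:

```python
from math import gcd

def reachable(k, d1, d2, p):
    # Achievable offsets from k are the integer combinations a*d1 + b*d2,
    # i.e. exactly the multiples of gcd(d1, d2) (Bezout's identity).
    return (p - k) % gcd(d1, d2) == 0

print(reachable(10, 4, 6, 8))                    # True
print(reachable(10, 4, 6, 9))                    # False (gcd 2 does not divide -1)
print(reachable(10, 100000, 100001, 10000000))   # True (gcd is 1)
```

This only decides feasibility; computing the minimum number of steps still needs a search or the ILP above.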
Upvotes: 1