I am struggling to find a DP recursion for the following problem:
Given N intervals of consecutive non-negative integers and a target sum M, find how many ways there are to pick one number from each interval so that the picked numbers sum to M.
For instance:
N = 2, M = 4
where the N intervals are:
[0, 1, 2]
[0, 1, 2]
so there is only one valid solution (2 + 2).
I'm looking for a bottom-up approach. This is what I've tried:
long getPossibilities(int N, int M, vector<vector<int>> &limits) {
    vector<vector<long>> dp(N, vector<long>(M + 1, 0));
    for(int i = 0; i < N; i++) {
        for(int k = limits[i][0]; k <= limits[i][1]; k++) {
            dp[0][k] = 1;
        }
    }
    for(int i = 1; i < N; i++) {
        for(int j = 1; j <= M; j++) {
            dp[i][j] = dp[i - 1][j];
            for(int k = limits[i][0]; k <= limits[i][1]; k++) {
                if(j - k >= 0) {
                    dp[i][j] = (dp[i][j] + dp[i][j - k]) % 1000000007;
                }
            }
        }
    }
    return dp[N - 1][M];
}
Any suggestions?
Upvotes: 0
Views: 2141
Reputation: 58409
If you have an array A[i][j] representing the number of ways to sum to i using the first j ranges, and the (j+1)st range runs from a to b, then you have the relation:
A[i][j+1] = sum(A[x][j] for x = i-b to i-a)
(treating out-of-bounds reads as 0's).
This update step risks taking O(M^2) time if b-a is large, and this is probably why your solution is timing out. You can avoid that by computing cumulative sums first: let B[i][j] = sum(A[i'][j] for i' = 0 to i). Then
A[i][j+1] = B[i-a][j] - B[i-b-1][j]   (*)
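As a quick sanity check of (*) on the question's example (two ranges from a = 0 to b = 2, M = 4): A[i][1] is 1 for i = 0, 1, 2 and 0 otherwise, so B[i][1] = 1, 2, 3, 3, 3 for i = 0..4, and A[4][2] = B[4][1] - B[1][1] = 3 - 2 = 1, which matches the single solution 2 + 2.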
The process would be:
1. Set A[i][0] = 1 if i = 0, else 0.
2. Set j = 0.
3. Compute B[i][j] for each i from 0 to M by summing up elements of A[_][j].
4. Compute A[i][j+1] for each i from 0 to M using the equation (*).
5. Increment j and repeat from step 3 until A[_][n] has been computed.
6. When you've finished, A[M][n] is the result.
If you're clever you can probably use a single array of size M rather than two arrays of size M by n.
Upvotes: 1
Reputation: 11602
I am assuming that all interval entries are non-negative. One dynamic programming approach is to represent the state as (n, t), where n is the index of the current interval and t is the remaining target sum. For example, the initial state in your example is n = 0 and t = 4.
Let f(n, t) denote the number of ways to choose one element from each of the intervals 0 through n such that the sum of the chosen elements is t. Then, in pseudocode,
f(n, t) = sum(f(n-1, t - x) for x in row[n])
with the base case f(0, t) = the number of elements of row[0] equal to t.
Here is one possible implementation in Python; the naive function is included to check for correctness (one_per_row_sums assumes that all intervals have the same length).
def naive(xs, target):
    # Brute force: enumerate every way of picking one element per interval.
    from itertools import product
    res = 0
    for tup in product(*xs):
        res += sum(tup) == target
    return res

def one_per_row_sums(xs, target):
    n = len(xs)
    if n == 0:
        return 0
    m = len(xs[0])
    assert all(len(x) == m for x in xs)
    def f(n, k):
        # Ways to pick one element from each of xs[0..n] summing to k.
        if k < 0:
            return 0
        if n == 0:
            return sum(1 if x == k else 0 for x in xs[n])
        else:
            return sum(f(n-1, k-j) for j in xs[n])
    return f(n-1, target)
xs = [[0, 1, 2], [0, 1, 2]]
assert naive(xs, 4) == one_per_row_sums(xs, 4)
# larger test
import numpy as np
n = 6
m = 6
xs = np.sort(np.random.randint(0, 10, (n, m)), axis=1)
assert one_per_row_sums(xs, 20) == naive(xs, 20)
Upvotes: 0