user3222184

Reputation: 1111

find all subsets that sum to a particular value. Recursion or DP?

Given a set of numbers {1, 3, 9, 12} and a target value of 12, find the distinct subsets that sum to the target value.

ANS: [3,9], [12].
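A minimal recursive sketch of the problem above (the function name `subsets_with_sum` and the pruning step are my own, and it assumes non-negative numbers): try each element, recurse on the rest with a reduced remaining sum, and record a subset whenever the remainder hits zero.

```python
def subsets_with_sum(nums, target):
    """Return every subset of nums whose elements sum to target."""
    results = []

    def backtrack(start, remaining, chosen):
        if remaining == 0:
            results.append(list(chosen))   # found a subset
            return
        for i in range(start, len(nums)):
            if nums[i] <= remaining:       # prune (assumes non-negative nums)
                chosen.append(nums[i])
                backtrack(i + 1, remaining - nums[i], chosen)
                chosen.pop()               # undo choice, try next element

    backtrack(0, target, [])
    return results

print(subsets_with_sum([1, 3, 9, 12], 12))  # → [[3, 9], [12]]
```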

The problem has been asked here.

There are obviously two approaches to this kind of problem: recursion or DP. Question: how do you trade off these two approaches?

My Solution: DP is harder but consumes less memory. With recursion, you have to call the function recursively many times, which introduces function-call overhead; it is "considerably" easier but consumes more memory.

What do you guys think?

Upvotes: 0

Views: 589

Answers (2)

arunk2

Reputation: 2426

I guess DP can be implemented recursively or iteratively depending on the approach (top-down or bottom-up). The main difference between a generic recursive solution and DP is whether you use extra memory, and that is the trade-off against the runtime of the algorithm. Fundamentally, you avoid the extra computation by storing each result and referencing it later.

For the question of generic recursion vs. DP, the trade-off is between the memory used by DP and the runtime of generic recursion.

Another thing to consider is that not every problem qualifies for a DP approach. The problem under consideration has the following properties:

  1. 'Optimal Substructure' - optimal solution of the given problem can be obtained by using optimal solutions of its subproblems.
  2. 'Overlapping Subproblems' - solutions of same subproblems are used multiple times.

The above 2 properties are necessary for a DP implementation. Otherwise DP won't give you any optimisation.

For example: most divide-and-conquer methods don't have the 'Overlapping Subproblems' property. Binary search doesn't.
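The two properties above can be seen in a counting variant of this problem. In the sketch below (names and structure are my own), `count(i, t)` is built from smaller subproblems (optimal substructure), and the same `(i, t)` pairs are reached along many different paths (overlapping subproblems), so caching them with `functools.lru_cache` avoids recomputation:

```python
from functools import lru_cache

def count_subsets(nums, target):
    """Count subsets of nums that sum to target, with memoization."""
    @lru_cache(maxsize=None)        # cache results per (i, t) subproblem
    def count(i, t):
        if t == 0:
            return 1                # one way: take nothing further
        if i == len(nums) or t < 0:
            return 0
        # either skip nums[i] or include it
        return count(i + 1, t) + count(i + 1, t - nums[i])

    return count(0, target)

print(count_subsets((1, 3, 9, 12), 12))  # → 2 (the subsets [3,9] and [12])
```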

Hope it helps!

Upvotes: 1

DixonD

Reputation: 6638

Generally, recursion without any kind of caching can lead to an exponential complexity solution to a problem which could be solved with polynomial or linear complexity. That's because you compute the same partial problems many times. A recursive or iterative solution with memoization can decrease the complexity by using more memory.

That being said, you need to factor in the constraints of your input. For bigger inputs, an exponential solution is often useless, so you don't have much of a choice. At the same time, using additional memory is in most cases not really a problem unless you are developing for systems with very limited memory (like embedded systems).

To summarize, in most cases you would want to trade memory for better algorithmic complexity, but you need to decide on a case-by-case basis.
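That trade is explicit in an iterative bottom-up formulation. This sketch (my own, for the counting variant of the question's example) spends O(target) extra memory on a table to get a pseudo-polynomial O(len(nums) × target) runtime instead of the exponential plain recursion:

```python
def count_subsets_bottom_up(nums, target):
    """Count subsets summing to target via a bottom-up DP table."""
    # ways[t] = number of subsets of the items seen so far that sum to t
    ways = [0] * (target + 1)
    ways[0] = 1                         # the empty subset sums to 0
    for x in nums:
        # iterate downward so each item is used at most once (0/1 choice)
        for t in range(target, x - 1, -1):
            ways[t] += ways[t - x]
    return ways[target]

print(count_subsets_bottom_up([1, 3, 9, 12], 12))  # → 2
```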

Upvotes: 1
