jcm

Reputation: 5659

Is this Dynamic Programming Solution to Text Justification just Brute Force?

I'm having trouble understanding the dynamic programming solution to the text justification problem as presented in the MIT OpenCourseWare lecture here. Some notes from that lecture are here; page 3 of the notes is what I am referring to.

I thought that dynamic programming meant memoizing some of the computations so that you don't need to recompute them, thus saving time. But in the algorithm given in the lecture, I don't see any memoization, just a whole bunch of deep recursive calls. The main recurrence is this:

DP[i] = min(badness (i, j) + DP[j] for j in range (i + 1, n + 1))
DP[n] = 0

where badness is a function that determines the amount of unused space left on a line after subtracting the lengths of the words from the line length. To me it looks like this algorithm computes every possible badness value and chooses the smallest one, which seems like brute force. Where is the advantage dynamic programming usually gives us by memoizing past calculations so we don't have to recompute them?
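
Here is a minimal sketch of how I am reading that recurrence as plain recursion (the badness here is a simplified placeholder of my own, not the lecture's exact definition):

def justify_naive(words, width):
    n = len(words)

    def badness(i, j):
        # Unused space if words[i:j] share one line; "infinite" if they overflow.
        # The lecture's badness may differ (e.g. it may cube the slack);
        # the exact formula isn't the point here.
        total = sum(len(w) for w in words[i:j]) + (j - i - 1)
        return float('inf') if total > width else width - total

    def DP(i):
        if i == n:
            return 0
        # Recomputes DP(j) from scratch on every call -- this is the part
        # that looks like brute force to me.
        return min(badness(i, j) + DP(j) for j in range(i + 1, n + 1))

    return DP(0)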

Upvotes: 4

Views: 869

Answers (1)

Juan Lopes

Reputation: 10565

If you memoize the results, you don't have to compute each DP[i] several times.

For example, DP[0] "calls" DP[2], but so does DP[1]. The second time DP[2] is needed, it doesn't have to be computed again; you can just return the memoized value.

This also makes it easy to verify a polynomial upper bound for this algorithm. Since each DP[i] will perform O(n) operations, and there are n of them, the overall algorithm is O(n^2), assuming, of course, that badness(i, j) is O(1).
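
Here is a sketch of the same recurrence with memoization added (the names and the simplified badness are illustrative, not the lecture's exact code):

def justify(words, width):
    n = len(words)
    memo = {n: 0}  # DP[n] = 0 is the base case

    def badness(i, j):
        # Unused space if words[i:j] share one line; "infinite" if they overflow.
        total = sum(len(w) for w in words[i:j]) + (j - i - 1)
        return float('inf') if total > width else width - total

    def DP(i):
        if i in memo:          # reuse a previously computed value
            return memo[i]
        memo[i] = min(badness(i, j) + DP(j) for j in range(i + 1, n + 1))
        return memo[i]

    return DP(0)

Note that badness as written above is O(j - i) because of the sum; precomputing prefix sums of the word lengths would make it O(1), which is what the O(n^2) bound assumes.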

Upvotes: 2
