Mike Sweeney

Reputation: 2046

Minimum-size vertex cover of tree: Dynamic Programming Formulation

I'm trying to understand how to formulate the problem of finding a minimum-size vertex cover of a tree as a dynamic programming problem, and I'm having some trouble. To me, the non-dynamic-programming formulation based on a depth-first search makes the most intuitive sense. Essentially, this means doing a DFS down to the leaf nodes, including their parent nodes in the minimum-size vertex cover, and repeating that up to the root. The pseudocode looks like this:

// DFS based solution
find_minimum_vertex_cover_dfs(root) {
    // leaf nodes aren't in minimum size vertex cover
    if (root == NULL || isLeafNode(root)) {
        return;
    }

    for (each child node of root) {
        find_minimum_vertex_cover_dfs(child node);
        // child isn't in minimum size vertex cover so need to cover edge
        // from current root to child by including current root
        if (!isInMinCover(child node)) {
            include root in minimum vertex cover;
        }
    }
}
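For concreteness, here is roughly how I would write that in Python (just a sketch with my own representation: the tree is a dict mapping each node to a list of its children):

# DFS-based greedy: leaves are never taken; a node is taken whenever one of
# its children was left out, because the edge to that child must be covered.
def find_minimum_vertex_cover_dfs(children, root):
    cover = set()

    def dfs(node):
        # Leaf nodes aren't in the minimum-size vertex cover.
        if not children.get(node):
            return
        for child in children[node]:
            dfs(child)
            # Child isn't in the cover, so cover the edge (node, child)
            # by including the current node.
            if child not in cover:
                cover.add(node)

    dfs(root)
    return cover

# Path 1 - 2 - 3 - 4 rooted at 1: prints a cover of size 2, e.g. {1, 3}.
print(find_minimum_vertex_cover_dfs({1: [2], 2: [3], 3: [4], 4: []}, 1))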

The dynamic programming formulation that I got from here is as follows:

DynamicVC(root):
    // Best[v][0] = size of a minimum cover of v's subtree with v excluded
    // Best[v][1] = size of a minimum cover of v's subtree with v included
    for each child c:
        Best[c][0], Best[c][1] = DynamicVC(c)

    // root excluded: every child must be in the cover
    withoutRoot = sum over all c of Best[c][1]
    // root included: each child takes whichever of its options is cheaper
    withRoot = 1 + sum over all c of min(Best[c][0], Best[c][1])

    return (withoutRoot, withRoot)
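A direct Python translation of that recurrence might look like this (same children-dict representation as the sketch above; the first value is "node excluded", the second is "node included"):

# Returns (best size with root excluded, best size with root included)
# for the subtree rooted at `root`.
def dynamic_vc(children, root):
    without_root = 0
    with_root = 1  # count the root itself
    for c in children.get(root, []):
        c_without, c_with = dynamic_vc(children, c)
        # Root excluded: the child must be in the cover to cover edge (root, c).
        without_root += c_with
        # Root included: the child takes whichever of its two options is cheaper.
        with_root += min(c_without, c_with)
    return without_root, with_root

print(min(dynamic_vc({1: [2], 2: [3], 3: [4], 4: []}, 1)))  # 2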

I think I understand the idea: the subproblems are computing, for the subtree rooted at each vertex, the minimum-size vertex cover both with that vertex included in the cover and with it excluded. I have two questions:

  1. It seems most natural to me to exploit the fact that the leaf nodes will never be in the minimum vertex cover and hence use the DFS-based solution. Why would one use the dynamic programming solution to this problem?
  2. I'm used to building the dynamic programming table from the bottom up iteratively, without actually using recursion. In this case, however, since I need to start by computing the solutions for the leaf nodes, traversing the tree recursively to build the table from the leaves up seems most intuitive. It just seems odd because all the dynamic programming problems I've worked on up to this point have avoided the need for this. Am I missing something here?

Edit: As I thought about it some more, maybe what was confusing me is that in this case the recursive DFS on the tree is what's familiar to me. I've done a bunch of dynamic programming problems, but this is the first one involving tree/graph traversal; in the others I could just use some loops to compute larger and larger subproblems. I suppose I could make the dynamic programming version feel more familiar by using an explicit stack and doing the tree traversal that way instead of via recursion, something like the sketch below. Does that make sense?
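Here is what I have in mind (same representation as above): compute a post-order with an explicit stack so children come before parents, then fill the table with a plain loop:

# Iterative version of the DP: no recursion, just an explicit stack to get an
# ordering in which every child is processed before its parent.
def dynamic_vc_iterative(children, root):
    order, stack = [], [root]
    while stack:
        node = stack.pop()
        order.append(node)
        stack.extend(children.get(node, []))
    order.reverse()  # reversed pre-order: children come before parents

    best = {}  # node -> (best with node excluded, best with node included)
    for node in order:
        without_node, with_node = 0, 1
        for c in children.get(node, []):
            c_without, c_with = best[c]
            without_node += c_with
            with_node += min(c_without, c_with)
        best[node] = (without_node, with_node)
    return min(best[root])

print(dynamic_vc_iterative({1: [2], 2: [3], 3: [4], 4: []}, 1))  # 2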

Upvotes: 2

Views: 2705

Answers (1)

piotrekg2

Reputation: 1257

1: There is no really compelling reason; it just works, so why not use it? To me, the DP formulation you've shown is actually more intuitive than the DFS-based one.

2: Dynamic programming is about memoizing the subproblems of a recursive solution. Coming up with a DP algorithm often involves defining a recursion first and then adding memoization to it. A recursive solution can be transformed into a DP almost mechanically: create a global hash table mapping subproblem id -> result, and at the beginning of each recursive call check whether the table already contains a result for the given subproblem. If so, return it right away; otherwise compute it and store it in the table. Such an approach is often about as fast as the bottom-up approach you mentioned.
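As a rough sketch of that pattern (using a dict as the hash table, and assuming the tree is given as a dict mapping each node to a list of its children, as in the question's sketches):

# Memoized ("top-down DP") version: cache the result per subproblem,
# here keyed by the root of the subtree.
def dynamic_vc_memo(children, root, memo=None):
    if memo is None:
        memo = {}
    if root in memo:          # result for this subproblem already computed?
        return memo[root]
    without_root, with_root = 0, 1
    for c in children.get(root, []):
        c_without, c_with = dynamic_vc_memo(children, c, memo)
        without_root += c_with
        with_root += min(c_without, c_with)
    memo[root] = (without_root, with_root)
    return memo[root]

(In a rooted tree each subtree is reached exactly once, so the cache never actually gets a hit here; the same pattern pays off whenever subproblems repeat, e.g. on DAGs.)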

Upvotes: 1
