Reputation: 4239
I have implemented a depth-first search (DFS) algorithm in both an iterative and a recursive manner. Both work fine on small inputs (less than 1 MB). However, when I run them on a 50 MB file, the recursive DFS finishes in about 9 seconds, while the iterative version takes at least several minutes; in fact, the iterative approach took ages to finish.
The only reason I chose to implement the iterative DFS is that I thought it might be faster than the recursive DFS, but that does not seem to be the case. Is this expected?
Note that I was already running it with java -Xmx1024m -Xms1024m -Xmn256m -Xss16m RunAlgo
to increase the available heap and stack space.
Below is the code for my iterative DFS.
import java.util.HashMap;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.Stack;

class IterativeDFS {
    long time;
    LinkedList<Vertex> topological_sort_list = new LinkedList<Vertex>();

    public IterativeDFS(Digraph G) {
        dfs(G);
    }

    public void dfs(Digraph G) {
        for (Vertex u : G.getAllVertices()) {
            u.set_color("WHITE");
            u.set_pi(-1);
        }
        time = 0;
        for (Vertex u : G.getAllVertices()) {
            if (u.get_color().equals("WHITE")) {
                dfs_stack(G, u);
            }
        }
    }

    public void dfs_stack(Digraph G, Vertex u) {
        int size = G.getAllVertices().size();
        /*
         * To be able to iterate over each adjacency list, keeping track of which
         * vertex in each adjacency list needs to be explored next.
         */
        HashMap<Vertex, Iterator<Vertex>> adj_map = new HashMap<Vertex, Iterator<Vertex>>();
        for (Vertex i : G.getAllVertices()) {
            adj_map.put(i, G.adjEdges(i).iterator());
        }
        Stack<Vertex> stack = new Stack<Vertex>();
        // time++; // white vertex u has just been discovered
        u.set_d(time);
        u.set_color("GRAY");
        stack.push(u);
        while (!stack.empty()) {
            Vertex k = stack.peek();
            Vertex v = null;
            if (adj_map.get(k).hasNext()) {
                v = adj_map.get(k).next(); // explore edge (k, v)
                if (v.get_color().equals("WHITE")) {
                    v.set_pi(k.get_node());
                    // time++;
                    v.set_d(time);
                    v.set_color("GRAY");
                    stack.push(v);
                }
            } else {
                // k's adjacency list is exhausted
                Vertex t = stack.pop();
                time++;
                t.set_f(time);
                t.set_color("BLACK");
                /*
                 * Topological Sort:
                 * 1. call DFS(G) to compute finishing times v.f for each vertex v
                 * 2. as each vertex is finished, insert it onto FRONT of linked list
                 * 3. return linked list of vertices
                 */
                topological_sort_list.addFirst(t);
            }
        }
    }

    public LinkedList<Vertex> topological_sort() {
        return topological_sort_list;
    }
}
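For reference, the recursive DFS I am comparing against is essentially the textbook DFS-VISIT. The sketch below is only an approximation of my actual code: it reuses the same Vertex/Digraph methods as the iterative version above, so treat the exact signatures as assumptions.
class RecursiveDFS {
    long time;
    java.util.LinkedList<Vertex> topological_sort_list = new java.util.LinkedList<Vertex>();

    public RecursiveDFS(Digraph G) {
        // paint every vertex white, then visit each undiscovered vertex
        for (Vertex u : G.getAllVertices()) {
            u.set_color("WHITE");
            u.set_pi(-1);
        }
        time = 0;
        for (Vertex u : G.getAllVertices()) {
            if (u.get_color().equals("WHITE")) {
                dfs_visit(G, u);
            }
        }
    }

    private void dfs_visit(Digraph G, Vertex u) {
        time++;
        u.set_d(time);
        u.set_color("GRAY");
        for (Vertex v : G.adjEdges(u)) {      // explore each edge (u, v)
            if (v.get_color().equals("WHITE")) {
                v.set_pi(u.get_node());
                dfs_visit(G, v);
            }
        }
        u.set_color("BLACK");
        time++;
        u.set_f(time);
        topological_sort_list.addFirst(u);    // same topological-sort trick as above
    }

    public java.util.LinkedList<Vertex> topological_sort() {
        return topological_sort_list;
    }
}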
Upvotes: 1
Views: 3331
Reputation: 11
If you're talking about time complexity, the answer is that both of them are the same. Regardless of implementation, the runtime of DFS on an adjacency-list graph is O(V + E), because each vertex is visited exactly once and each edge is examined exactly once. Moreover, the runtime is also Omega(V + E), because every vertex and every edge must be touched at least once. This leaves the runtime of DFS at Theta(V + E). The overall DFS algorithm remains the same regardless of implementation, so the runtime of recursive and iterative DFS should be the same, Theta(V + E).
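As a rough accounting (the standard adjacency-list argument, not tied to either implementation): each vertex contributes one unit of work for being discovered, plus one unit for each of its outgoing edges, so

T(G) = \sum_{v \in V} \bigl(1 + \deg^{+}(v)\bigr) = |V| + |E|

which is the Theta(V + E) bound above.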
Upvotes: 1