Reputation: 121
I have the following recursive implementation of depth-first search. Afterwards, the array pre holds the order in which the vertices were visited. I want to run it on really big graphs; in this example n = 2097152 and m = 6291408. G is represented as an adjacency list.
static int cnt, pre[2097152];

void dfsRAdjazenz(Graph G, Edge e)
{
    int w = e.w;
    link t;   /* must be local: a file-scope t would be clobbered by the recursive calls */
    pre[w] = cnt++;
    for (t = G->adj[w]; t != NULL; t = t->next) {
        if (pre[t->v] == -1) {
            Edge e = {w, t->v};
            dfsRAdjazenz(G, e);
        }
    }
}
int main(int argc, const char *argv[])
{
    cnt = 0;
    int v;
    for (v = 0; v < 2097152; v++) {
        pre[v] = -1;
    }
    Edge e = {1, 1};
    dfsRAdjazenz(G, e);
    return 0;
}
The problem is that I always run out of memory and get a segmentation fault, although there is plenty of memory available. It only works for up to roughly 1 million nodes.
Edit: the data structures:
typedef struct node *link;
struct node {
    int v;
    link next;
};

struct graph {
    int V;
    int E;
    link *adj;
};

typedef struct {
    int v;
    int w;
} Edge;

typedef struct graph *Graph;
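A minimal way to allocate and fill these structures (graphInit and addEdge are illustrative helper names, not from the original code) would be:

```c
#include <stdlib.h>

typedef struct node *link;
struct node { int v; link next; };
struct graph { int V; int E; link *adj; };
typedef struct graph *Graph;

/* Allocate a graph with V vertices and empty adjacency lists. */
Graph graphInit(int V)
{
    Graph G = malloc(sizeof *G);
    G->V = V;
    G->E = 0;
    G->adj = calloc(V, sizeof *G->adj);  /* all list heads start as NULL */
    return G;
}

/* Insert the undirected edge v-w at the head of both adjacency lists. */
void addEdge(Graph G, int v, int w)
{
    link a = malloc(sizeof *a);
    a->v = w; a->next = G->adj[v]; G->adj[v] = a;
    link b = malloc(sizeof *b);
    b->v = v; b->next = G->adj[w]; G->adj[w] = b;
    G->E++;
}
```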
Upvotes: 0
Views: 235
Reputation: 1492
It might be due to the recursion you are using: you are exhausting the call stack. If the graph is not sparse, the recursion can get very deep, which also explains why it works for smaller inputs.
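One way around this is to replace the recursion with an explicit, heap-allocated stack, so the depth is limited by available heap rather than the thread's stack. A sketch, reusing the structures from the question (graphInit/addEdge are helper names I made up for a self-contained example):

```c
#include <stdlib.h>

typedef struct node *link;
struct node { int v; link next; };
struct graph { int V; int E; link *adj; };
typedef struct graph *Graph;

/* Helpers to build a small test graph (names are mine, not from the question). */
Graph graphInit(int V)
{
    Graph G = malloc(sizeof *G);
    G->V = V; G->E = 0;
    G->adj = calloc(V, sizeof *G->adj);
    return G;
}

void addEdge(Graph G, int v, int w)  /* undirected; prepend to both lists */
{
    link a = malloc(sizeof *a); a->v = w; a->next = G->adj[v]; G->adj[v] = a;
    link b = malloc(sizeof *b); b->v = v; b->next = G->adj[w]; G->adj[w] = b;
    G->E++;
}

/* Iterative DFS: same pre[] numbering idea as the question's dfsRAdjazenz,
   but with no recursion, so it cannot overflow the call stack. */
void dfsIter(Graph G, int start, int pre[])
{
    int cnt = 0, top = 0;
    /* Worst case pushes: the start vertex plus one per adjacency entry
       (each undirected edge contributes two entries). */
    int *stack = malloc((1 + 2 * G->E) * sizeof *stack);
    stack[top++] = start;
    while (top > 0) {
        int w = stack[--top];
        if (pre[w] != -1)
            continue;            /* a vertex may be pushed more than once */
        pre[w] = cnt++;
        for (link t = G->adj[w]; t != NULL; t = t->next)
            if (pre[t->v] == -1)
                stack[top++] = t->v;
    }
    free(stack);
}
```

Note the visitation order can differ from the recursive version (neighbors come off the stack in reverse), but it is still a valid depth-first traversal. Alternatively, on Linux you can raise the stack limit with ulimit -s, but that only postpones the problem.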
Upvotes: 2
Reputation: 70523
This strikes me as odd:
t = G->adj[w]
You are using array indexing into what looks to be a linked list.
I'd expect that to seg fault for any value other than 1.
Of course, it is hard to tell without seeing the definition of G.
Upvotes: 1