Reputation: 23
For a school assignment I have to build a graph and run some algorithms on it. In the input, each vertex has an ID in the range 0–999,999,999. Since I can't allocate an array that large, I can't use the ID directly as an index into an adjacency matrix. My first idea was to assign each vertex a separate, arbitrary ID and store the mapping in some kind of dictionary/map, but since I get 10,000 vertex records, I'm worried the lookups will get slow. The algorithm has to run in under O(n^2), and I already have a BFS and a toposort in there. What would be the best solution in this case? As a side note: I can't use established libraries (so no graph, map, vector, string classes, etc.), but I can implement them myself if that's the best option.
Upvotes: 0
Views: 152
Reputation: 4189
What you want is a binary search tree, which does lookups in O(log n) time, or a hash map, which does lookups in ~O(1) time. Alternatively, you can go the array route, in which case the size of your array has to be the maximum value your ID can take (in your case, 10^9).
As @amit told you, check AVL/red-black trees and hash maps. There's no better way to do lookups in a graph below O(n), unless you can change the topology of the graph to turn it into a "search graph".
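Since you can't use the standard map/unordered_map classes, here is a minimal sketch of the hash-map route built from plain arrays: an open-addressing table that maps each sparse original ID to a compact index 0..n-1, which you can then use to index a normal-sized adjacency structure. The constants and names (`CAP`, `compactId`, the multiplier) are illustrative choices, not anything from your assignment.

```cpp
#include <cassert>

const int CAP = 1 << 15;   // power of two, comfortably > 2 * 10,000 vertices
long long keys[CAP];       // stored original IDs; -1 marks an empty slot
int values[CAP];           // compact index assigned to each stored ID
int nextIndex = 0;         // next unused compact index

void initTable() {
    for (int i = 0; i < CAP; ++i) keys[i] = -1;
    nextIndex = 0;
}

// Return the compact index for `id`, assigning a fresh one on first sight.
// Expected O(1) per call: multiplicative hash plus linear probing,
// and the load factor stays low because CAP >> vertex count.
int compactId(long long id) {
    int h = (int)((id * 2654435761LL) & (CAP - 1));
    while (keys[h] != -1 && keys[h] != id)
        h = (h + 1) & (CAP - 1);          // probe next slot on collision
    if (keys[h] == -1) {                  // first time we see this ID
        keys[h] = id;
        values[h] = nextIndex++;
    }
    return values[h];
}
```

With this, reading 10,000 vertex records costs expected O(1) per lookup, and your BFS/toposort work over the dense indices 0..9999 as usual.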
Upvotes: 1
Reputation: 4191
Why do you need to create an array of size 1 billion? You can simply create an adjacency matrix or adjacency list sized to the number of nodes. Whether the number of vertices is constant or not, I'd suggest you go with an adjacency list. For example, if you have 10 nodes, you create an array of size 10, and then for each node a list of its edges, as you can see in the link above.
Consider this graph: do you really think you need 10^10 entries in an adjacency matrix when the adjacency list needs only 4?
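To make the adjacency-list idea concrete without vector or list classes, here is a sketch using the common "head/next" array technique: edges are stored in flat arrays and each vertex keeps the index of its first edge. The limits `MAXN`/`MAXE` and the helper names are assumptions for illustration.

```cpp
#include <cassert>

const int MAXN = 10000;    // maximum number of vertices (compact indices)
const int MAXE = 200000;   // maximum number of directed edges
int head[MAXN];            // head[v] = index of v's first edge, -1 if none
int to[MAXE], nxt[MAXE];   // edge target, and link to v's next edge
int edgeCount = 0;

void initGraph(int n) {
    for (int v = 0; v < n; ++v) head[v] = -1;
    edgeCount = 0;
}

// O(1): push the new edge onto the front of u's list.
void addEdge(int u, int v) {
    to[edgeCount] = v;
    nxt[edgeCount] = head[u];
    head[u] = edgeCount++;
}

// O(deg(u)): walk u's edge list, e.g. as the inner loop of a BFS.
int outDegree(int u) {
    int d = 0;
    for (int e = head[u]; e != -1; e = nxt[e]) ++d;
    return d;
}
```

Total memory is O(n + m), so 10,000 vertices with a few edges each is tiny, versus the n^2 cells a matrix would need.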
Upvotes: 0