Reputation: 19
I have the following function, which takes two NetworkX graphs as input. For debugging I printed the adjacency matrix and the edge list returned by NetworkX, and they do not match. If I hardcode the adjacency matrices, everything works as expected. Please have a look at the code and output below.
import networkx as nx
import numpy as np

def prepareData(g1, g2):
    # Flatten the pair of graphs into a single list.
    graphs = [(g1, g2)]
    Graphs = []
    for graph in graphs:
        for inergraph in graph:
            Graphs.append(inergraph)
    graphs = Graphs

    for i, g in enumerate(graphs):
        n_nodes = g.number_of_nodes()
        n_edges = g.number_of_edges()
        edges = np.array(g.edges(), dtype=np.int32)

        print(nx.to_numpy_array(g))
        print()
        print(edges)
        print()
This produces the following output:
[[0. 1. 0. 0. 0. 0. 1. 0. 0.]
[1. 0. 0. 1. 0. 0. 0. 0. 0.]
[0. 0. 0. 1. 0. 0. 0. 0. 0.]
[0. 1. 1. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 1. 0. 0. 0.]
[0. 0. 0. 0. 1. 0. 0. 0. 0.]
[1. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0.]]
[[ 4 5]
[ 4 10]
[ 5 7]
[ 6 7]
[ 8 9]]
[[0. 1. 0. 0. 0. 0. 1. 0. 0.]
[1. 0. 0. 1. 0. 0. 0. 0. 0.]
[0. 0. 0. 1. 0. 0. 0. 0. 0.]
[0. 1. 1. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 1. 0. 0. 0.]
[0. 0. 0. 0. 1. 0. 0. 0. 0.]
[1. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0.]]
[[ 4 5]
[ 4 10]
[ 5 7]
[ 6 7]
[ 8 9]]
Hard-coding the adjacency matrices
a = np.array([[0, 1, 0, 0, 0, 0, 1, 0, 0],
              [1, 0, 0, 1, 0, 0, 0, 0, 0],
              [0, 0, 0, 1, 0, 0, 0, 0, 0],
              [0, 1, 1, 0, 0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0, 1, 0, 0, 0],
              [0, 0, 0, 0, 1, 0, 0, 0, 0],
              [1, 0, 0, 0, 0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0, 0, 0, 0, 0]])
b = np.array([[0, 1, 0, 0, 0, 0, 1, 0, 0],
              [1, 0, 0, 1, 0, 0, 0, 0, 0],
              [0, 0, 0, 1, 0, 0, 0, 0, 0],
              [0, 1, 1, 0, 0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0, 1, 0, 0, 0],
              [0, 0, 0, 0, 1, 0, 0, 0, 0],
              [1, 0, 0, 0, 0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0, 0, 0, 0, 0]])
g1 = nx.from_numpy_array(a)
g2 = nx.from_numpy_array(b)
yields the following output:
[[0. 1. 0. 0. 0. 0. 1. 0. 0.]
[1. 0. 0. 1. 0. 0. 0. 0. 0.]
[0. 0. 0. 1. 0. 0. 0. 0. 0.]
[0. 1. 1. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 1. 0. 0. 0.]
[0. 0. 0. 0. 1. 0. 0. 0. 0.]
[1. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0.]]
[[0 1]
[0 6]
[1 3]
[2 3]
[4 5]]
[[0. 1. 0. 0. 0. 0. 1. 0. 0.]
[1. 0. 0. 1. 0. 0. 0. 0. 0.]
[0. 0. 0. 1. 0. 0. 0. 0. 0.]
[0. 1. 1. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 1. 0. 0. 0.]
[0. 0. 0. 0. 1. 0. 0. 0. 0.]
[1. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0.]]
[[0 1]
[0 6]
[1 3]
[2 3]
[4 5]]
Can someone point out why I get such different outputs? The latter seems to be the correct one.
I really appreciate any help.
Upvotes: 0
Views: 54
Reputation: 19
Someone posted an answer but deleted it immediately afterwards. If you post your answer again, I can accept it as the solution. I'm not completely sure why this issue arises in the first place; it must be connected to how NetworkX manages node labels internally. Adding the following line before retrieving the edges did the trick:
g = nx.convert_node_labels_to_integers(g)
edges = np.array(g.edges(), dtype=np.int32)
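For anyone running into the same thing, here is a minimal sketch of what I think is going on (the graph and its node labels below are made up for illustration, not taken from my actual data): nx.to_numpy_array orders the matrix rows and columns by each node's position in g.nodes(), while g.edges() reports the original node labels, so the two only line up when the labels already are the integers 0..n-1.
import networkx as nx
import numpy as np

# Toy graph whose node labels are NOT 0..n-1 (purely illustrative labels).
g = nx.Graph()
g.add_edges_from([(4, 5), (5, 7), (6, 7)])

print(list(g.nodes()))                              # node labels in insertion order, e.g. 4, 5, 7, 6
print(nx.to_numpy_array(g))                         # 4x4 matrix; row i corresponds to the i-th node above
print(np.array(list(g.edges()), dtype=np.int32))    # original labels (4, 5, 6, 7) -- not valid row indices

# Relabel the nodes to 0..n-1 (in the order of g.nodes()), so the edge list
# and the adjacency matrix refer to the same indices again.
g = nx.convert_node_labels_to_integers(g)
print(np.array(list(g.edges()), dtype=np.int32))    # now consistent with the matrix rows/columns
This would also explain why the hardcoded version works: nx.from_numpy_array labels the nodes 0..n-1 to begin with, so no relabelling is needed there.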
Upvotes: 0