Reputation: 10033
I have 4 functions for some statistical calculations in complex network analysis.
import networkx as nx
import networkx.algorithms.community as nx_comm
from networkx.algorithms.community import greedy_modularity_communities
import numpy as np
import pandas as pd
import math
from astropy.io import fits
Degree distribution of graph:
def degree_distribution(G):
    vk = dict(G.degree())
    vk = list(vk.values())  # we get only the degree values
    maxk = np.max(vk)
    mink = np.min(vk)
    kvalues = np.arange(0, maxk + 1)  # possible values of k
    Pk = np.zeros(maxk + 1)  # P(k)
    for k in vk:
        Pk[k] = Pk[k] + 1
    Pk = Pk / sum(Pk)  # the sum of the elements of P(k) must be equal to one
    return kvalues, Pk
Community detection of graph:
def calculate_community_modularity(graph):
    communities = greedy_modularity_communities(graph)  # community detection algorithm
    modularity_dict = {}  # create a blank dictionary
    for i, c in enumerate(communities):  # loop through the list of communities, keeping track of the community number
        for name in c:  # loop through each neuron in a community
            modularity_dict[name] = i  # create an entry in the dictionary for the neuron, where the value is the group it belongs to
    nx.set_node_attributes(graph, modularity_dict, 'modularity')
    # print(graph_name)  # graph_name is defined outside this snippet

    for i, c in enumerate(communities):  # loop through the list of communities
        # if len(c) > 2:  # filter out modularity classes with 2 or fewer nodes
        print('Class ' + str(i) + ':', len(c))  # print out the classes and their member counts
    return modularity_dict
Modularity score of graph:
def modularity_score(graph):
    return nx_comm.modularity(graph, nx_comm.label_propagation_communities(graph))
and finally the graph entropy:
def shannon_entropy(G):
    k, Pk = degree_distribution(G)
    H = 0
    for p in Pk:
        if p > 0:
            H = H - p * math.log(p, 2)
    return H
What I would like to achieve now is to find the local entropy for each community (turned into a subgraph), with the edge information preserved. Is this possible? How so?
The matrix being used is in this link:
with fits.open('mind_dataset/matrix_CEREBELLUM_large.fits') as data:
    matrix = pd.DataFrame(data[0].data.byteswap().newbyteorder())
and then I turn the adjacency matrix into a graph, 'graph' or 'G', like so:
def matrix_to_graph(matrix):
    from_matrix = matrix.copy()
    to_numpy = from_matrix.to_numpy()
    G = nx.from_numpy_matrix(to_numpy)
    return G
Based on the proposed answer below I have created another function:
def community_entropy(modularity_dict):
    communities = {}
    # create communities as lists of nodes
    for node, community in modularity_dict.items():
        if community not in communities.keys():
            communities[community] = [node]
        else:
            communities[community].append(node)
    print(communities)

    # transform lists of nodes to actual subgraphs
    for subgraph, community in communities.items():
        communities[community] = nx.Graph.subgraph(subgraph)

    local_entropy = {}
    for subgraph, community in communities.items():
        local_entropy[community] = shannon_entropy(subgraph)
    return local_entropy
and:
cerebellum_graph = matrix_to_graph(matrix)
modularity_dict_cereb = calculate_community_modularity(cerebellum_graph)
community_entropy_cereb = community_entropy(modularity_dict_cereb)
But it throws the error:
TypeError: subgraph() missing 1 required positional argument: 'nodes'
Upvotes: 5
Views: 1489
Reputation: 4695
Using the code I provided as an answer to your question here, you can first create a separate graph for each of your communities (based on the community edge attribute of your graph). You can then compute the entropy of each community with your shannon_entropy and degree_distribution functions.
See the code below, based on the karate club example you provided in your other question referenced above:
import networkx as nx
import networkx.algorithms.community as nx_comm
import matplotlib.pyplot as plt
import numpy as np
import math
def degree_distribution(G):
    vk = dict(G.degree())
    vk = list(vk.values())  # we get only the degree values
    maxk = np.max(vk)
    mink = np.min(vk)
    kvalues = np.arange(0, maxk + 1)  # possible values of k
    Pk = np.zeros(maxk + 1)  # P(k)
    for k in vk:
        Pk[k] = Pk[k] + 1
    Pk = Pk / sum(Pk)  # the sum of the elements of P(k) must be equal to one
    return kvalues, Pk
def shannon_entropy(G):
    k, Pk = degree_distribution(G)
    H = 0
    for p in Pk:
        if p > 0:
            H = H - p * math.log(p, 2)
    return H
G = nx.karate_club_graph()
# Find the communities
communities = sorted(nx_comm.greedy_modularity_communities(G), key=len, reverse=True)
# Count the communities
print(f"The club has {len(communities)} communities.")
'''Add community to node attributes'''
for c, v_c in enumerate(communities):
    for v in v_c:
        # Add 1 to save 0 for external edges
        G.nodes[v]['community'] = c + 1

'''Find internal edges and add their community to their attributes'''
for v, w in G.edges:
    if G.nodes[v]['community'] == G.nodes[w]['community']:
        # Internal edge, mark with community
        G.edges[v, w]['community'] = G.nodes[v]['community']
    else:
        # External edge, mark as 0
        G.edges[v, w]['community'] = 0
N_coms = len(communities)
edges_coms = []  # edge list for each community
coms_G = [nx.Graph() for _ in range(N_coms)]  # community graphs
colors = ['tab:blue', 'tab:orange', 'tab:green']

fig = plt.figure(figsize=(12, 5))
for i in range(N_coms):
    edges_coms.append([(u, v, d) for u, v, d in G.edges(data=True) if d['community'] == i + 1])  # identify edges of interest using the edge attribute
    coms_G[i].add_edges_from(edges_coms[i])  # add edges

ent_coms = [shannon_entropy(coms_G[i]) for i in range(N_coms)]  # compute entropy

for i in range(N_coms):
    plt.subplot(1, 3, i + 1)  # plot communities
    plt.title('Community ' + str(i + 1) + ', entropy: ' + str(np.round(ent_coms[i], 1)))
    pos = nx.circular_layout(coms_G[i])
    nx.draw(coms_G[i], pos=pos, with_labels=True, node_color=colors[i])
And the output gives the three communities plotted side by side, each with its entropy value in the subplot title.
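If you want to run the same procedure on your cerebellum graph instead of the karate club example, a minimal sketch could look like the following (this assumes the FITS path and matrix_to_graph function from your question, and the imports and shannon_entropy defined above; the node/edge labelling is exactly the same as in the script above):

import pandas as pd
from astropy.io import fits

# Build the graph from the adjacency matrix (path and loader taken from the question)
with fits.open('mind_dataset/matrix_CEREBELLUM_large.fits') as data:
    matrix = pd.DataFrame(data[0].data.byteswap().newbyteorder())
G = matrix_to_graph(matrix)

# Detect communities and tag nodes and internal edges, as above
communities = sorted(nx_comm.greedy_modularity_communities(G), key=len, reverse=True)
for c, v_c in enumerate(communities):
    for v in v_c:
        G.nodes[v]['community'] = c + 1
for v, w in G.edges:
    if G.nodes[v]['community'] == G.nodes[w]['community']:
        G.edges[v, w]['community'] = G.nodes[v]['community']
    else:
        G.edges[v, w]['community'] = 0

# Build one graph per community from its internal edges and compute its entropy
local_entropy = {}
for i in range(len(communities)):
    com_G = nx.Graph()
    com_G.add_edges_from((u, v, d) for u, v, d in G.edges(data=True) if d['community'] == i + 1)
    local_entropy[i + 1] = shannon_entropy(com_G)
print(local_entropy)

Note that a community whose nodes have no internal edges would give an empty graph here, so you may want to guard against that before calling shannon_entropy.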
Upvotes: 3
Reputation: 1202
It looks like, in calculate_community_modularity, you use greedy_modularity_communities to create a dict, modularity_dict, which maps each node in your graph to a community. If I understand correctly, you can take each community's subgraph from modularity_dict and pass it into shannon_entropy to calculate the entropy for that community.
This is pseudo-code, so there may be some errors, but it should convey the principle.
After running calculate_community_modularity, you have a dict like this, where each key is a node and the value is the community that node belongs to:
modularity_dict = {node_1: community_1, node_2: community_1, node_3: community_2}
I've never used nx, but it looks like you can extract a subgraph based on a list of nodes. So you would iterate through your dict and create a list of nodes for each community. Then you would use that list of nodes to extract the actual nx subgraph for that community.
communities = {}
# create communities as lists of nodes
for node, community in modularity_dict.items():
    if community not in communities.keys():
        communities[community] = [node]
    else:
        communities[community].append(node)

# transform lists of nodes into actual subgraphs of the original graph
for community, nodes in communities.items():
    communities[community] = graph.subgraph(nodes)
Now that communities is a dict whose keys are the community ids and whose values are the nx subgraphs defining those communities, you should be able to run those subgraphs through shannon_entropy, since each subgraph has the same type as your original graph:
local_entropy = {}
for community, subgraph in communities.items():
    local_entropy[community] = shannon_entropy(subgraph)
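As a hypothetical end-to-end example, assuming matrix, matrix_to_graph, calculate_community_modularity and shannon_entropy are defined as in your question, the whole thing would read:

# Hypothetical usage: 'graph' is the original graph the communities were detected on
graph = matrix_to_graph(matrix)
modularity_dict = calculate_community_modularity(graph)

communities = {}
for node, community in modularity_dict.items():
    communities.setdefault(community, []).append(node)

local_entropy = {community: shannon_entropy(graph.subgraph(nodes))
                 for community, nodes in communities.items()}
print(local_entropy)

The key difference from your community_entropy function is that the original graph itself has to be available, because subgraph is a method of a graph instance and needs the list of nodes as its argument; that is what the TypeError about the missing 'nodes' argument is pointing at.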
Upvotes: 0