cupofcalculus

Reputation: 51

How to implement a PyTorch NN from a directed graph

I'm new to PyTorch and teaching myself, and I want to create ANNs that take in a directed graph. I also want to pass predefined weights & biases for each connection into the network, but I'm willing to ignore that for now.

My motivation for these conditions is that I'm trying to implement the NEAT algorithm, which essentially uses a genetic algorithm to evolve the network.

For example, let graph = {'1': [[], [4, 7]], '2': [[], [6]], '3': [[], [6]], '4': [[1, 7], []], '5': [[7], []], '6': [[2, 3], [7]], '7': [[1, 6], [4, 5]]} represent the directed graph, where each key maps to [incoming node IDs, outgoing node IDs].

[Figure: example graph]
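
For clarity, here is a minimal sketch (my own helper, separate from the network code) that walks this dict in dependency order, which is essentially what walk_graph below tries to do:

def topo_order(graph):
    # graph: {node_id: [incoming_ids, outgoing_ids]}; keys are strings, IDs are ints
    remaining = {k: set(v[0]) for k, v in graph.items()}
    order = []
    while remaining:
        # nodes whose inputs have all been emitted already
        ready = [n for n, incoming in remaining.items() if not incoming]
        if not ready:
            raise ValueError('graph contains a cycle')
        for n in ready:
            order.append(n)
            del remaining[n]
        for incoming in remaining.values():
            incoming.difference_update(int(n) for n in ready)
    return order

For the graph above this yields ['1', '2', '3', '6', '7', '4', '5'].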

My code for what I'm thinking is:

import copy
import torch


class Net(torch.nn.Module):
    def __init__(self, graph):
        super(Net, self).__init__()
        self.graph = graph
        self.walk_graph()

    def walk_graph(self):
        graph_remaining = copy.deepcopy(self.graph)
        done = False  # Has every node/connection been processed?
        while not done:
            processed = []  # list of tuples, of a node and the nodes it outputs to
            for node_id in graph_remaining.keys():
                if len(graph_remaining[node_id][0]) == 0:  # if current node has no incoming connections
                    try:
                        # if the current node was already created but had to wait for its inputs
                        layer = getattr(self, 'layer{}'.format(node_id))
                        D_in = layer.in_features   # in_features/out_features are ints, not sequences
                        D_out = layer.out_features
                        setattr(self, 'layer{}'.format(node_id), torch.nn.Linear(D_in, D_out))
                        cat_list = []  # list of input tensors
                        for i in self.graph[node_id][0]:  # collect the outputs of all incoming nodes
                            cat_list.append(globals()['out_{}'.format(i)])  # add incoming tensor to list
                        # concatenate incoming tensors along the feature dimension;
                        # dim=1 assumes (batch, features) shapes -- I'm not confident about this
                        globals()['in_{}'.format(node_id)] = torch.cat(cat_list, dim=1)
                    except AttributeError:  # the current node hasn't been created yet
                        try:
                            setattr(self, 'layer{}'.format(node_id),
                                    torch.nn.Linear(len(self.graph[node_id][0]), len(self.graph[node_id][1])))
                        except ZeroDivisionError:  # input/output nodes have zero inputs/outputs in the graph
                            setattr(self, 'layer{}'.format(node_id), torch.nn.Linear(1, 1))
                    # note: for input nodes, 'in_<id>' must be seeded externally before this runs
                    globals()['out_{}'.format(node_id)] = getattr(self, 'layer{}'.format(node_id))(globals()['in_{}'.format(node_id)])
                    processed.append((node_id, graph_remaining[node_id][1]))

            for node_id, out_list in processed:
                for out_id in out_list:
                    try:
                        graph_remaining[str(out_id)][0].remove(int(node_id))
                    except ValueError:
                        pass
                try:
                    del graph_remaining[node_id]
                except KeyError:
                    pass

            # iterate over graph_remaining, not self.graph: processed nodes have
            # already been deleted from it, so indexing them would raise a KeyError
            done = True
            for node_id in graph_remaining.keys():
                if len(graph_remaining[node_id][0]) != 0 or len(graph_remaining[node_id][1]) != 0:
                    done = False
        return None

I'm a little out of my comfort zone on this, but if you have a better idea or can point out how this is fatally flawed, I'm all ears. I know I'm missing a forward function and could use some advice on how to restructure.

Upvotes: 1

Views: 482

Answers (1)

luk_dev

Reputation: 154

Since you don't plan on doing any actual training of the network, PyTorch might not be your best option in this case.

NEAT is about recombining and mutating neural networks - both their structure and their weights and biases - to achieve better results. PyTorch, by contrast, is a deep learning framework: you define the structure (or architecture) of your network once and then use algorithms like stochastic gradient descent to update the weights and biases to improve performance. As a consequence, PyTorch is built around modules and submodules of neural networks, such as fully connected layers, convolutional layers, and so on.
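
For contrast, a typical PyTorch model fixes its architecture up front and lets the optimizer adjust only the weights; a minimal sketch:

import torch

class MLP(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # the structure is decided here, once; training never changes it
        self.fc1 = torch.nn.Linear(3, 5)
        self.fc2 = torch.nn.Linear(5, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))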

The problem with this discrepancy is that NEAT not only requires you to store much more information about individual nodes (such as their IDs for recombination) than PyTorch supports; it also doesn't fit well with the "layer-wise" approach of deep learning frameworks.
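
As a rough sketch of what that extra bookkeeping looks like (the field names are my own, not from any particular NEAT library), a genome is closer to this than to a stack of layers:

from dataclasses import dataclass

@dataclass
class NodeGene:
    node_id: int
    node_type: str  # 'input', 'hidden' or 'output'

@dataclass
class ConnectionGene:
    in_node: int
    out_node: int
    weight: float
    enabled: bool
    innovation: int  # historical marking used to align genomes during crossover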

In my opinion, you will be better off implementing the forward pass through the network yourself. If you're unsure how to do that, this video gives a very good explanation.
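
As a starting point, a plain-Python forward pass over your graph dict could look roughly like this (the weights/biases dicts and the tanh activation are assumptions on my part):

import math

def forward(graph, weights, biases, inputs):
    # graph:   {node_id: [incoming_ids, outgoing_ids]} as in the question
    # weights: {(src_id, dst_id): float}; biases: {node_id: float}
    # inputs:  {node_id: float} giving the values of the input nodes
    values = dict(inputs)
    pending = {n for n in graph if n not in values}
    while pending:
        for node_id in sorted(pending):
            incoming = graph[node_id][0]
            if all(str(i) in values for i in incoming):
                total = biases.get(node_id, 0.0)
                for i in incoming:
                    total += values[str(i)] * weights[(i, int(node_id))]
                values[node_id] = math.tanh(total)  # any activation would do
                pending.remove(node_id)
                break  # restart the scan after each processed node
        else:
            raise ValueError('cycle in graph or missing input value')
    return values

Called with the example graph, inputs like {'1': 0.5, '2': -1.0, '3': 0.25}, and weight/bias dicts taken from your genome, this returns the activation of every node, including the output nodes 4 and 5.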

Upvotes: 1
