riskiem

Reputation: 307

Graph Neural Network Regression

I am trying to implement regression with a Graph Neural Network. Most of the examples that I see in this area are for classification; so far I have found none for regression. I saw one for classification as follows:

import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GCN(torch.nn.Module):
    def __init__(self, hidden_channels):
        super(GCN, self).__init__()
        torch.manual_seed(12345)
        self.conv1 = GCNConv(dataset.num_features, hidden_channels)
        self.conv2 = GCNConv(hidden_channels, dataset.num_classes)

    def forward(self, x, edge_index):
        x = self.conv1(x, edge_index)
        x = x.relu()
        x = F.dropout(x, p=0.5, training=self.training)
        x = self.conv2(x, edge_index)
        return x

model = GCN(hidden_channels=16)
print(model)

I am trying to modify it for my task, which is a regression on a graph with 30 nodes, where each node has 3 features and each edge has one feature.

If anyone could point me to examples to do the same, that would be very helpful.

Upvotes: 4

Views: 5824

Answers (2)

Fahad Rahman Amik

Reputation: 1

If you are working on a graph-level regression task, you have to pool the node embeddings into a single graph embedding and add a linear layer at the end without any activation function. This is how you can modify the example that you gave. I am currently working on a graph-level regression task as well, and here is my code. I hope it helps.

import torch
import torch.nn.functional as F
from torch.nn import Linear, ReLU
from torch_geometric.nn import GCNConv, GATConv, GINConv, global_mean_pool

class GNN(torch.nn.Module):
    def __init__(self, hidden_channels):
        super(GNN, self).__init__()

        # Multiply hidden_channels to scale up the network size
        hidden_channels_gcn = hidden_channels * 2
        hidden_channels_gat = hidden_channels_gcn * 2
        hidden_channels_gin = hidden_channels_gat * 2

        # in_channels=-1 lets PyG infer the input feature size lazily on the
        # first forward pass; run one forward before building the optimizer
        self.conv1 = GCNConv(-1, hidden_channels_gcn)
        self.conv2 = GCNConv(hidden_channels_gcn, hidden_channels_gcn)
        self.gat_conv1 = GATConv(hidden_channels_gcn, hidden_channels_gat)
        self.gat_conv2 = GATConv(hidden_channels_gat, hidden_channels_gat)

        mlp = torch.nn.Sequential(
            Linear(hidden_channels_gat, hidden_channels_gin),
            ReLU(),
            Linear(hidden_channels_gin, hidden_channels_gin)
        )
        self.gin_conv1 = GINConv(mlp)

        # Final linear layer maps the pooled graph embedding to a single
        # regression value; no activation, so the output range is unbounded
        self.out = Linear(hidden_channels_gin, 1)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        x = F.relu(self.gat_conv1(x, edge_index))
        x = F.relu(self.gat_conv2(x, edge_index))
        x = F.relu(self.gin_conv1(x, edge_index))
        x = global_mean_pool(x, batch)  # pool node embeddings per graph
        x = self.out(x)

        return x
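
To train this for regression, pair it with a regression loss such as mean squared error. A minimal training-loop sketch, assuming a PyG DataLoader named loader whose batches carry one scalar target per graph in data.y (the loader and target layout are assumptions, not part of the code above):

model = GNN(hidden_channels=16)

# conv1 is lazily initialized, so run one dummy forward pass first;
# this creates its parameters before the optimizer collects them
batch = next(iter(loader))  # `loader` is an assumed PyG DataLoader
model(batch.x, batch.edge_index, batch.batch)

optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
criterion = torch.nn.MSELoss()  # regression loss, not cross-entropy

model.train()
for epoch in range(100):
    for data in loader:
        optimizer.zero_grad()
        pred = model(data.x, data.edge_index, data.batch)
        # squeeze the [batch_size, 1] output to match data.y's shape
        loss = criterion(pred.squeeze(-1), data.y.float())
        loss.backward()
        optimizer.step()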

Upvotes: 0

crowntail lin

Reputation: 31

Add a linear layer, and don't forget to use a regression loss function (e.g. MSE) instead of a classification loss.

import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GCN(torch.nn.Module):
    def __init__(self, hidden_channels):
        super(GCN, self).__init__()
        torch.manual_seed(12345)
        self.conv1 = GCNConv(dataset.num_features, hidden_channels)
        # conv2 outputs hidden_channels; num_classes has no meaning for regression
        self.conv2 = GCNConv(hidden_channels, hidden_channels)
        # linear head maps each node embedding to a single regression value
        # (the original Linear(100, 1) only works if conv2 outputs 100 channels)
        self.linear1 = torch.nn.Linear(hidden_channels, 1)

    def forward(self, x, edge_index):
        x = self.conv1(x, edge_index)
        x = x.relu()
        x = F.dropout(x, p=0.5, training=self.training)
        x = self.conv2(x, edge_index)
        x = self.linear1(x)
        return x
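
This gives one prediction per node; for a single prediction per graph, pool the node embeddings (e.g. with global_mean_pool) before the linear layer, as in the other answer. Since the question mentions one feature per edge, you can also feed it to GCNConv through its optional edge_weight argument. A sketch of the modified forward, assuming the edge features live in data.edge_attr with shape [num_edges, 1]:

    def forward(self, x, edge_index, edge_weight):
        # GCNConv accepts an optional 1-D edge_weight tensor, e.g.
        # data.edge_attr.squeeze(-1) when each edge has a single feature
        x = self.conv1(x, edge_index, edge_weight)
        x = x.relu()
        x = F.dropout(x, p=0.5, training=self.training)
        x = self.conv2(x, edge_index, edge_weight)
        x = self.linear1(x)
        return x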

Upvotes: 3
