Reputation: 4353
I have one graph, defined by 4 matrices: x (node features), y (node labels), edge_index (edge list) and edge_attr (edge features). I want to create a dataset in PyTorch Geometric with this single graph and perform node-level classification. It seems that just wrapping these 4 matrices into a Data object fails, for some reason.
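For reference, the object is built roughly like this (a sketch with random placeholder tensors matching the shapes in the question; the real loading code is omitted, and two label classes are assumed):

import torch
from torch_geometric.data import Data

x = torch.rand(6911, 50000)                         # node features
y = torch.randint(0, 2, (6911, 1))                  # node labels (placeholder, 2 classes assumed)
edge_index = torch.randint(0, 6911, (2, 3339730))   # edge list
edge_attr = torch.rand(3339730, 1)                  # edge features

data = Data(x=x, y=y, edge_index=edge_index, edge_attr=edge_attr)
print(data)  # Data(edge_attr=[3339730, 1], edge_index=[2, 3339730], x=[6911, 50000], y=[6911, 1])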
I have created a dataset containing the attributes:
Data(edge_attr=[3339730, 1], edge_index=[2, 3339730], x=[6911, 50000], y=[6911, 1])
representing a graph. If I try to slice this graph, like:
train_dataset, test_dataset = dataset[:5000], dataset[5000:]
I get the error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-11-feb278180c99> in <module>
3 # train_dataset, test_dataset = torch.utils.data.random_split(dataset, [train_size, test_size])
4
----> 5 train_dataset, test_dataset = dataset[:5000], dataset[5000:]
6
7 # Create dataloader for training and test dataset.
~/anaconda3/envs/py38/lib/python3.8/site-packages/torch_geometric/data/data.py in __getitem__(self, key)
92 def __getitem__(self, key):
93 r"""Gets the data of the attribute :obj:`key`."""
---> 94 return getattr(self, key, None)
95
96 def __setitem__(self, key, value):
TypeError: getattr(): attribute name must be string
What am I doing wrong in the data construction?
Upvotes: 2
Views: 3947
Reputation: 2896
For node classification, create a custom dataset:
import random
from math import floor

import pandas as pd
import torch
from torch_geometric.data import Data, InMemoryDataset


class CustomDataset(InMemoryDataset):
    def __init__(self, root, transform=None, pre_transform=None):
        super(CustomDataset, self).__init__(root, transform, pre_transform)
        self.data, self.slices = torch.load(self.processed_paths[0])

    @property
    def raw_file_names(self):
        return ['edge_list.csv', 'x.pt', 'y.pt', 'edge_attributes.csv']

    @property
    def processed_file_names(self):
        return ['graph.pt']

    def process(self):
        # Build the edge index from the edge list CSV (source -> target).
        edge_list = pd.read_csv(self.raw_paths[0], dtype=int)
        target_nodes = edge_list.iloc[:, 0].values
        source_nodes = edge_list.iloc[:, 1].values
        edge_index = torch.tensor([source_nodes, target_nodes], dtype=torch.int64)

        # Node features and labels are stored as tensors on disk.
        x = torch.load(self.raw_paths[1], map_location=torch.device('cpu'))
        y = torch.load(self.raw_paths[2], map_location=torch.device('cpu'))

        # Make boolean train/test masks over the nodes (10% of nodes for training).
        n = x.shape[0]
        randomassort = list(range(n))
        random.shuffle(randomassort)
        max_train = floor(len(randomassort) * .1)
        train_mask_idx = torch.tensor(randomassort[:max_train])
        test_mask_idx = torch.tensor(randomassort[max_train:])
        train_mask = torch.zeros(n)
        test_mask = torch.zeros(n)
        train_mask.scatter_(0, train_mask_idx, 1)
        test_mask.scatter_(0, test_mask_idx, 1)
        train_mask = train_mask.type(torch.bool)
        test_mask = test_mask.type(torch.bool)

        # Edge attributes live in a separate CSV; attach them to the graph.
        edge_attributes = pd.read_csv(self.raw_paths[3])
        edge_attr = torch.tensor(edge_attributes.values, dtype=torch.float)

        data = Data(edge_index=edge_index, x=x, y=y, edge_attr=edge_attr,
                    train_mask=train_mask, test_mask=test_mask)
        print(data.__dict__)

        data, slices = self.collate([data])
        torch.save((data, slices), self.processed_paths[0])
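Once the class is defined, the dataset can be instantiated and the single graph pulled out of it (a sketch; the root directory is an assumption, and its raw/ subfolder must contain the four files listed in raw_file_names):

# Assumed layout: data/raw/ holds edge_list.csv, x.pt, y.pt, edge_attributes.csv.
dataset = CustomDataset(root='data/')
data = dataset[0]   # the single graph, with train_mask / test_mask attached
print(data)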
Then, in the training loop, use the masks when updating the model:
def train():
    ...
    model.train()
    optimizer.zero_grad()
    # Compute the loss only on the nodes selected by the training mask.
    F.nll_loss(model()[data.train_mask], data.y[data.train_mask]).backward()
    optimizer.step()
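Evaluation can use the test mask in the same way (a sketch; as in the snippet above, model() is assumed to return per-node log-probabilities):

@torch.no_grad()
def test():
    model.eval()
    pred = model().argmax(dim=1)                 # predicted class per node
    correct = (pred[data.test_mask] == data.y[data.test_mask].view(-1)).sum()
    return int(correct) / int(data.test_mask.sum())   # accuracy on test nodes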
Upvotes: 5
Reputation: 40728
You cannot slice a torch_geometric.data.Data object, as its __getitem__ is defined as:
def __getitem__(self, key):
    r"""Gets the data of the attribute :obj:`key`."""
    return getattr(self, key, None)
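String keys work fine; the error only appears because a slice object ends up being passed to getattr (a quick check, where data is the Data object from the question):

data['x']      # works: equivalent to data.x, returns the node feature matrix
data[0:5000]   # TypeError: getattr(): attribute name must be string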
So it seems you can't access edges with __getitem__. However, since what you are trying to do is split your dataset, you could use torch_geometric.utils.train_test_split_edges. Something like:

torch_geometric.utils.train_test_split_edges(dataset, val_ratio=0.1, test_ratio=0)
It will:

split the edges of your Data object into positive and negative train/val/test edges, and add the following attributes: train_pos_edge_index, train_neg_adj_mask, val_pos_edge_index, val_neg_edge_index, test_pos_edge_index, and test_neg_edge_index to the returned Data object.
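A minimal sketch of that call applied to the single-graph Data object from the question:

from torch_geometric.utils import train_test_split_edges

# `data` is the Data(edge_attr=..., edge_index=..., x=..., y=...) object from the question.
data = train_test_split_edges(data, val_ratio=0.1, test_ratio=0)
print(data.train_pos_edge_index.size())
print(data.val_pos_edge_index.size())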
Upvotes: 1