I cannot figure out how to compute the test/validation loss, so that I can plot the evolution of the training and validation/test loss over epochs.
I have tried the pipeline approach, e.g. for TransE:
from pykeen.datasets import PharmKG
from pykeen.pipeline import pipeline

result = pipeline(
    dataset=PharmKG(
        random_state=0,
        cache_root=f'{project_path}/data/raw/pykeen',
    ),
    model='TransE',
    training_loop='sLCWA',
    negative_sampler='basic',
    epochs=100,
    training_kwargs=dict(batch_size=8192),
    stopper='EarlyStopper',  # stoppers['TransE']
    evaluator='RankBasedEvaluator',
    device='cuda',
    random_seed=0,
)
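From that I can at least plot the training side, since the pipeline result seems to store the per-epoch training losses (result.losses, if I understand the docs correctly); a minimal sketch of what I have so far, with the validation/test curve still missing:

import matplotlib.pyplot as plt

# Assumption: result.losses holds one training loss value per epoch
plt.plot(result.losses, label='training loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()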
as well as a more direct approach:
from pykeen.datasets import PharmKG
from pykeen.models import ComplEx, ConvKB, TransE
from pykeen.training import SLCWATrainingLoop
from torch.optim import Adam

dataset = PharmKG(
    random_state=0,
    cache_root=f'{project_path}/data/raw/pykeen',
)
models_dict = {
    'TransE': TransE,
    'ConvKB': ConvKB,
    'ComplEx': ComplEx,
}
models = {}
optimizers = {}
training_loops = {}
loss = {}
for model_name, model_class in models_dict.items():
    models[model_name] = model_class(triples_factory=dataset.training, random_seed=0).to('cuda')  # must be explicitly moved to the GPU
    optimizers[model_name] = Adam(params=models[model_name].get_grad_params())
    training_loops[model_name] = SLCWATrainingLoop(
        triples_factory=dataset.training,
        model=models[model_name],
        optimizer=optimizers[model_name],
    )
    loss[model_name] = training_loops[model_name].train(
        triples_factory=dataset.training,
        num_epochs=30,
        batch_size=8192,
    )
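As far as I can tell, train() returns a list with one training loss value per epoch, so plotting the training curves is straightforward (sketch below); what I still cannot find is how to obtain the corresponding per-epoch loss on the validation/test triples:

import matplotlib.pyplot as plt

for model_name, epoch_losses in loss.items():
    # epoch_losses: the list returned by train(), one training loss per epoch
    plt.plot(epoch_losses, label=f'{model_name} training loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()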