Han

Reputation: 21

About tqdm in deep learning

I was using tqdm to show progress while running my code. However, it did not work: the output on the console stayed the same instead of updating.

here is my code:

    for epoch in range(epoch_num):
        print("Training epoch{}".format(epoch + 1))
        pbar = tqdm(train_dataloader)
        for step, batch in enumerate(pbar):
            if step == 5:
                torch.save(model.state_dict(), os.path.join(save_path, 'best_param.bin'))
                print(" Model Saved")
                print("Stopped Early")
                break
            model.train()
            inputs = {
                'input_ids': batch[0],
                'attention_mask': batch[1],
                'token_type_ids': batch[2],
                'labels': batch[3]
            }
            outputs = model(**inputs)
            loss, results = outputs
            optimizer.zero_grad()  # ??
            loss.backward()
            optimizer.step()
            loss_list.append(loss.item())
        pbar.set_description('Batch loss{:.3f}'.format(loss.item()))

Upvotes: 1

Views: 3554

Answers (1)

Mina Abd El-Massih

Reputation: 656

You have the pbar variable, which is responsible for the loop's progress bar, defined within the loop, and you are not updating it. So what you are doing is, on each iteration, essentially recreating a progress bar that starts at 0%.

What you should do is have tqdm track the progress of the epochs by wrapping the range in the for loop line, like this:

for epoch in tqdm(range(epoch_num)):

This way tqdm takes the iterable, iterates over it, and builds the progress bar from its length.

Also make sure you are importing tqdm like this:

from tqdm import tqdm
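To illustrate, here is a minimal runnable sketch of the pattern. The `train_dataloader` here is a hypothetical stand-in (a plain list of batches) rather than a real PyTorch DataLoader, and the "loss" is a placeholder computation; the point is that `set_description` is called inside the batch loop, so the bar's text refreshes on every step instead of once at the end:

```python
from tqdm import tqdm

# Hypothetical stand-in for a real DataLoader: a list of (input, label) pairs.
train_dataloader = [(i, i % 2) for i in range(10)]
epoch_num = 2
losses = []

for epoch in range(epoch_num):
    # One bar per epoch; tqdm advances it automatically as we iterate.
    pbar = tqdm(train_dataloader, desc="Epoch {}".format(epoch + 1))
    for step, (x, y) in enumerate(pbar):
        loss = abs(x - y) / 10.0  # placeholder for a real training step
        losses.append(loss)
        # Update the bar's label inside the loop so it refreshes per batch.
        pbar.set_description("Epoch {} loss {:.3f}".format(epoch + 1, loss))
```

In the original question, by contrast, `pbar.set_description(...)` sits outside the inner loop, so the description is only set after the last batch of the epoch.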

Upvotes: 2
