Reputation: 57
I am trying out a Hugging Face example with the SWAG dataset: https://github.com/huggingface/transformers/tree/master/examples/pytorch/multiple-choice
I would like to use Intel Extension for PyTorch (IPEX) in my code to improve performance.
Here I am using the variant without the Trainer class (run_swag_no_trainer.py).
In run_swag_no_trainer.py, I made some changes to use IPEX.

Code before the change:
device = accelerator.device
model.to(device)
After adding IPEX:
import intel_pytorch_extension as ipex
device = ipex.DEVICE
model.to(device)
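
(Worth noting: intel_pytorch_extension is the legacy package name, and ipex.DEVICE was removed in later releases, which ship as intel_extension_for_pytorch and optimize the model in place on the CPU instead of exposing a separate device. A minimal sketch of the equivalent setup under that assumption, with a hypothetical stand-in model and optimizer rather than the actual SWAG model:)

import torch
import torch.nn as nn
import intel_extension_for_pytorch as ipex  # current package name (IPEX 1.10+)

# Hypothetical stand-ins for the model/optimizer built in run_swag_no_trainer.py.
model = nn.Linear(128, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# The model stays on the CPU; ipex.optimize returns an optimized
# (model, optimizer) pair for training.
model.train()
model, optimizer = ipex.optimize(model, optimizer=optimizer)

With this API, device can remain accelerator.device (the CPU), so the rest of the script should not need changes.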
When I run the command below, it takes too much time.
export DATASET_NAME=swag
accelerate launch run_swag_no_trainer.py \
--model_name_or_path bert-base-cased \
--dataset_name $DATASET_NAME \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3 \
--output_dir /tmp/$DATASET_NAME/
Is there any other method to test the same with Intel IPEX?
Upvotes: 2
Views: 543
Reputation: 862
First, you have to understand which factors actually increase the running time. Then, for faster runs, make sure to address those factors.
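
As one concrete way to check where the time goes, timing a few training steps before and after a change gives a like-for-like number. A minimal self-contained sketch (the tiny stand-in model, batch shape, and thread count are placeholders, not the actual SWAG setup):

import time
import torch
import torch.nn as nn

# Placeholder model and batch; bert-base-cased behaves the same way, just heavier.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 2))
batch = torch.randn(32, 128)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# torch.set_num_threads(8)  # optional: match your physical core count (8 is a placeholder)

start = time.perf_counter()
for _ in range(10):
    optimizer.zero_grad()
    loss = model(batch).sum()
    loss.backward()
    optimizer.step()
print(f"avg step: {(time.perf_counter() - start) / 10:.4f} s")

Running the same loop with and without the IPEX changes shows whether they actually reduce the per-step cost.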
Upvotes: 2