exelents

Reputation: 93

Running on CPU a PyTorch model saved from a TPU run

I found an interesting model - a question generator - but I can't run it. I get this error:

Traceback (most recent call last):
  File "qg.py", line 5, in <module>
    model = AutoModelWithLMHead.from_pretrained("/home/user/ml-experiments/gamesgen/t5-base-finetuned-question-generation-ap/")
  File "/home/user/.virtualenvs/hugging/lib/python3.7/site-packages/transformers/modeling_auto.py", line 806, in from_pretrained
    return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
  File "/home/user/.virtualenvs/hugging/lib/python3.7/site-packages/transformers/modeling_utils.py", line 798, in from_pretrained
    import torch_xla.core.xla_model as xm
ModuleNotFoundError: No module named 'torch_xla'

I googled briefly and found that "torch_xla" is a package used to train PyTorch models on TPU. However, I would like to run the model locally on CPU (for inference, of course), and I get this error when PyTorch tries to load the TPU-bound tensors. How can I fix it?

This is the model I tried: https://huggingface.co/mrm8488/t5-base-finetuned-question-generation-ap
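(For context, the usual PyTorch-level trick for a checkpoint saved on another device is the `map_location` argument of `torch.load`, which remaps the saved storages onto the CPU. A minimal sketch with a hypothetical `checkpoint.pt` file is below; note that it may not help in this exact case, since the traceback shows the `torch_xla` import happening inside transformers' own `from_pretrained`.)

```python
import torch

# Save a small tensor to simulate a checkpoint written elsewhere.
torch.save(torch.ones(3), "checkpoint.pt")

# map_location remaps all storages in the checkpoint onto the CPU,
# regardless of the device they were saved from.
t = torch.load("checkpoint.pt", map_location=torch.device("cpu"))
print(t.device)  # cpu
```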

Upvotes: 1

Views: 439

Answers (1)

exelents

Reputation: 93

As @cronoik suggested, I installed the transformers library from GitHub. I cloned the latest version and ran `python3 setup.py install` in its directory. The bug is fixed there, but the fix has not yet been released on PyPI.
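For reference, the steps described above can be sketched as follows (assuming the official huggingface/transformers repository):

```shell
# Install transformers from source to pick up the fix
# that has not yet reached a PyPI release.
git clone https://github.com/huggingface/transformers.git
cd transformers
python3 setup.py install   # or: pip install .
```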

Upvotes: 1
