DevKnight2001

Reputation: 35

How do you specify bfloat16 mixed precision with the Intel Extension for PyTorch?

I would like to know how to use mixed precision with PyTorch and Intel Extension for PyTorch.

I have tried looking at the documentation on their GitHub, but I can't find anything that explains how to go from fp32 to bfloat16.

Upvotes: 1

Views: 658

Answers (1)

Eduardo Alvarez

Reputation: 166

The IPEX GitHub repo might not be the best place to look for API documentation. I would try the PyTorch IPEX documentation page instead, which includes examples of how to apply the API.

This would be an example of how to keep the model in fp32:

import torch
import intel_extension_for_pytorch as ipex

model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.float32)

And this would be an example of how to use bfloat16 (note that optimizer should be passed as a keyword argument, since dtype is the second positional parameter of ipex.optimize):

model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)
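For bfloat16 training on CPU you typically also wrap the forward pass in an autocast context so the compute actually runs in bf16. Here is a minimal sketch of one training step, assuming a toy model, optimizer, and random data (those are placeholders I've made up for illustration, not from your setup):

import torch
import torch.nn as nn
import intel_extension_for_pytorch as ipex

# Tiny stand-in model and data so the sketch runs end to end
model = nn.Linear(8, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
inputs = torch.randn(4, 8)
labels = torch.randint(0, 2, (4,))

model.train()
# Apply IPEX optimizations with bf16 as the target dtype
model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)

optimizer.zero_grad()
# Run the forward pass and loss computation in bfloat16 via CPU autocast
with torch.cpu.amp.autocast(dtype=torch.bfloat16):
    outputs = model(inputs)
    loss = loss_fn(outputs, labels)
loss.backward()
optimizer.step()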

Upvotes: 0
