Mohamed AbdElrahman

Reputation: 39

Speed of Floating Point Arithmetic in Julia

I found that matrix-vector products with Float32 matrices are about twice as fast as with Float64 matrices. I tried reducing the precision further to Float16, hoping for even more speed, but the performance was far worse than with Float64.
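A minimal sketch of the kind of benchmark I mean (the 2000×2000 size and the use of BenchmarkTools are illustrative choices):

    using BenchmarkTools

    n = 2000
    for T in (Float64, Float32, Float16)
        A = rand(T, n, n)
        x = rand(T, n)
        print(T, ": ")
        @btime $A * $x   # matrix-vector product at element type T
    end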

Upvotes: 1

Views: 574

Answers (1)

Oscar Smith

Reputation: 6398

Currently Julia does most Float16 operations by converting to Float32 and then converting back. This also means that BLAS can't be used for matrix operations, so generic fallbacks are used instead. That said, I think Float16 can produce efficient code on GPUs with Julia.
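Both effects are easy to check from the REPL; a quick sketch (the helper function f is just for illustration):

    using InteractiveUtils, LinearAlgebra

    # Scalar Float16 math: depending on your Julia version and CPU, the
    # generated code either widens to Float32 explicitly (fpext/fptrunc
    # in the IR) or emits half-precision ops that LLVM legalizes the
    # same way.
    f(a::Float16, b::Float16) = a + b
    @code_llvm f(Float16(1), Float16(2))

    # BLAS only supports these element types, so Float16 matrix products
    # fall back to LinearAlgebra's generic Julia implementation:
    LinearAlgebra.BlasFloat              # Union of Float32, Float64, ComplexF32, ComplexF64
    Float16 <: LinearAlgebra.BlasFloat   # false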

Upvotes: 2
