Reputation: 41
GLM uses float internally, which can hold values of very large magnitude. The problem is that ordinary arithmetic on such values can return incorrect results. Consider this gdb output:
49 return glm::normalize(coord - center);
(gdb) p coord
$1 = {{x = -1.30530175e+38, r = -1.30530175e+38, s = -1.30530175e+38}, {y = -2.65845583e+36, g = -2.65845583e+36,
t = -2.65845583e+36}, {z = 3.40282347e+38, b = 3.40282347e+38, p = 3.40282347e+38}}
(gdb) p center
$2 = {{x = -2, r = -2, s = -2}, {y = 0, g = 0, t = 0}, {z = 8, b = 8, p = 8}}
(gdb)
We have the coord vector, whose components are enormous (on the order of 1e36 to 1e38), and the center vector, whose components are small integers. The result of glm::normalize over these vectors is as follows:
Value returned is $14 = {{x = -0, r = -0, s = -0}, {y = -0, g = -0, t = -0}, {z = 0, b = 0, p = 0}}
It's clear that the normal vector can't be a zero vector. Here is the place inside the glm library where everything goes wrong:
710 v1.y * v2.y,
(gdb) p v1.y * v2.y
$6 = inf
(gdb) p v1.y
$7 = -2.65845583e+36
(gdb) p v2.y
$8 = -2.65845583e+36
(gdb)
This seems to be an overflow, but I'm not sure. If it is, is there a way to force glm to use double instead of float, or is there another way to fix this?
Upvotes: -1
Views: 124
Reputation: 41
As Evg supports Moderator Strike mentioned, in a case like this we need to scale the large values down before normalizing to avoid overflow.
Upvotes: 0