Adam Hughes

Reputation: 16309

Realistic float value for "about zero"

I'm working on a program with fairly complex numerics, mostly in numpy with complex datatypes. Some of the calculations are returning nearly empty arrays with a complex component that is almost zero. For example:

(2+0j, 3+0j, 4+3.9320340202e-16j)

Clearly the third component is basically 0, but for whatever reason, this is the output of my calculation, and it turns out that for some of these nearly zero values, np.iscomplex() returns True. Rather than dig through all that code, I think it's sensible to just apply a cutoff. My question is, what is a sensible cutoff below which a value should be considered zero? 0.00? 0.000000? etc...

I understand that these values are due to rounding errors in floating point math, and I just want to handle them sensibly. What tolerance/range does one allow for such precision error? I'd like to set it as a parameter:

ABOUTZERO=0.000001

Upvotes: 0

Views: 736

Answers (1)

ali_m

Reputation: 74232

As others have commented, what constitutes 'almost zero' really does depend on your particular application, and how large you expect the rounding errors to be.

If you must use a hard threshold, a sensible value might be the machine epsilon, which is defined as the upper bound on the relative error due to rounding for floating point operations. Intuitively, it is the smallest positive number that, when added to 1.0, gives a result >1.0 using a given floating point representation and rounding method.

In numpy, you can get the machine epsilon for a particular float type using np.finfo:

import numpy as np

print(np.finfo(float).eps)
# 2.22044604925e-16
print(np.finfo(np.float32).eps)
# 1.19209e-07
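To apply this as a cutoff in practice, one option is numpy's real_if_close, which drops the imaginary part of an array when every imaginary component is smaller than a tolerance given in multiples of the machine epsilon (a sketch; the tol value of 1000 is an arbitrary choice for illustration, the default is 100):

```python
import numpy as np

arr = np.array([2 + 0j, 3 + 0j, 4 + 3.9320340202e-16j])

# Discard the imaginary part if all imaginary components are
# below tol * machine epsilon for the array's float type
cleaned = np.real_if_close(arr, tol=1000)

print(cleaned)        # [2. 3. 4.]
print(cleaned.dtype)  # float64
```

If any imaginary component exceeds the tolerance, the array is returned unchanged as complex, so this only strips imaginary parts that are plausibly rounding noise.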

Upvotes: 1

Related Questions