Juliusz Tarnowski

Reputation: 23

Limited accuracy of numpy.fft below a certain value

I'm trying to calculate the Fourier transform of some signals in Python. I want the result computed by the Fast Fourier Transform to coincide with the result computed from the definition. However, the result calculated using numpy.fft deviates from the expected value.

The computed spectrum never drops below a certain value. In the graph below it is about 10^-16; for other signals the floor is comparable (from 10^-9 to 10^-30). In my application I need higher accuracy.

results plotted on a graph

Just to be sure I also tested scipy.fftpack. The same error appears there, although the incorrectly calculated values are slightly different. The problem does not depend on the signal parameters (length, sampling frequency, etc.).

What is the reason for this limitation? If it is a Python/NumPy accuracy issue, how can I improve it?

# Fourier Transform

import numpy as np
import scipy.fftpack as fp

def gaussian_distribution(x,mean=0,variance=1):
    return (1 / (np.sqrt(2*np.pi)*variance) ) * np.exp( -((x-mean)**2) / (2 * variance**2) )

def gaussian_fourier_transform(omega, mean=0, variance=1):
    # http://mathworld.wolfram.com/FourierTransformGaussian.html
    return np.exp(-2 * np.pi**2 * variance**2 * omega**2) * np.exp(-(2j)*np.pi*mean*omega)

## signal generation
signal_range = [-2**4, 2**4]
N = 2**8
# endpoint=False keeps the sample spacing equal to 1/framerate
x = np.linspace(signal_range[0], signal_range[1], N, endpoint=False, dtype='float64')
y = gaussian_distribution(x)

## calculating result
framerate = N / (signal_range[1] - signal_range[0])
# bin k corresponds to frequency k * framerate / N
frequency_axis = np.linspace(0, framerate, N, endpoint=False)

numpy_v = np.abs( np.fft.fft(y) )
numpy_v = numpy_v / numpy_v[0] # normalization

scipy_v = np.abs( fp.fft(y) )
scipy_v = scipy_v / scipy_v[0]

symbolical_v = gaussian_fourier_transform(frequency_axis)

# plotting

import matplotlib.lines as mlines
import matplotlib.pyplot as plt

fig = plt.figure()
ax1 = fig.add_subplot()

ax1.plot(frequency_axis[0: N//2], scipy_v[0: N//2], '.r')
ax1.plot(frequency_axis[0: N//2], numpy_v[0: N//2], '.b')
ax1.plot(frequency_axis[0: N//2], symbolical_v[0: N//2], 'g')
ax1.set_yscale('log')
ax1.grid(True)

blue_line = mlines.Line2D([], [], color='blue', marker='.', markersize=15, label='result calculated by numpy.fft')
red_line = mlines.Line2D([], [], color='red', marker='.', markersize=15, label='result calculated by scipy.fftpack')
green_line = mlines.Line2D([], [], color='green', marker='', markersize=15, label='result calculated by definition')
ax1.legend(handles=[blue_line, red_line, green_line])

fig.show()

Upvotes: 1

Views: 1712

Answers (3)

oli

Reputation: 768

(Similar to what others already said) Every point of the discrete Fourier transform of a function depends on every point of the initial function, so the absolute error is global, in a sense, and scales with the largest-magnitude value of the input data. Most commonly, and as you observed, Fourier transforms with long, small tails therefore have large relative errors in their tails. There is no way around this if your precision is fixed and you can't do (partly) analytic calculations, but you can use arbitrary-precision libraries to improve your precision.
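To illustrate the global scaling (a small NumPy sketch, not from the original post): adding a large DC offset to a unit Gaussian changes, analytically, only bin 0 of its DFT, but numerically it raises the noise floor in every bin roughly in proportion to the new largest input value:

```python
import numpy as np

x = np.linspace(-16, 16, 256, endpoint=False)
y = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # peak value ~0.4

def tail_noise(signal):
    # Magnitude of spectral bins around N/2, where the true Gaussian
    # spectrum is ~1e-137, so anything visible there is rounding noise.
    Y = np.abs(np.fft.fft(signal))
    return Y[120:137].max()

small = tail_noise(y)        # floor set by values of order one
big = tail_noise(y + 1e6)    # same tail, plus a large DC offset
# big exceeds small by roughly the ratio of the largest input values
```

Here the input quantization alone (rounding `1e6 + y` to doubles) injects perturbations of about 1e-10 per sample, which spread across all bins.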

But if doing convolutions is your final goal, you might achieve local relative error up to machine precision with a Fast Multipole Method. If and how this works depends on your kernel function.

Upvotes: 0

hotpaw2

Reputation: 70733

IEEE double precision floating point numbers (what your computer's CPU likely supports in hardware) have roughly 15 decimal digits of precision. This is due to having only 53 bits of mantissa (or significand). The FFT algorithm grows this error bound (or quantization noise) by O(N*log(N)), where N is the FFT length.

So, to get more precision (a lower noise floor), you may have to find or write your own FFT that internally uses quad-precision or arbitrary precision arithmetic, and acquire and input your data in that format as well.

For instance, you could try coding your FFT using python's mpmath package, and choose your precision.
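For example (a minimal sketch, assuming the mpmath package is installed; `dft` is a hypothetical helper written here, not part of mpmath): a naive O(N^2) DFT evaluated at 50 decimal digits:

```python
# A naive arbitrary-precision DFT; O(N^2), so only practical for
# modest N, but the working precision is set freely via mp.dps.
from mpmath import mp, exp, pi, mpf

mp.dps = 50  # 50 significant decimal digits

def dft(samples):
    N = len(samples)
    w = -2j * pi / N  # exponent of the primitive root of unity
    return [sum(samples[n] * exp(w * k * n) for n in range(N))
            for k in range(N)]
```

Note that the input data must also be created as mpf/mpc values; converting from float64 after the fact limits you to the original 53 bits.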

Upvotes: 2

norok2

Reputation: 26896

It is a numerical issue. Both NumPy and SciPy use variants of pocketfft, which is based on FFTPACK and belongs to the category of exact FFTs, whose error depends on the machine epsilon ε and the number of samples N. I am not sure what the exact dependence is for these libraries, but you could try pyFFTW, the Python bindings to FFTW, which might have a slightly better dependence on the machine error.
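Whatever the library, you can observe the ε dependence directly (a small sketch using scipy.fft, the newer SciPy interface, which unlike numpy.fft computes single-precision input in single precision):

```python
import numpy as np
from scipy import fft

x = np.linspace(-16, 16, 256, endpoint=False)
y = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

def noise_floor(signal):
    # Largest magnitude among tail bins whose true value is far below
    # machine precision, so they contain only rounding noise.
    return np.abs(fft.fft(signal))[120:137].max()

floor64 = noise_floor(y)                     # eps ~ 2.2e-16
floor32 = noise_floor(y.astype(np.float32))  # eps ~ 1.2e-7
# the floor tracks the machine epsilon of the working precision
```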

Upvotes: 0
