Reputation:
I am attempting to filter a list of 16-bit two's-complement integers using a Butterworth filter generated with scipy.signal.butter and applied with scipy.signal.lfilter (the list was extracted from a PCM-encoded .wav file). My primary language is C; however, most of this project already exists in Python, so I need to implement this feature within the existing Python framework.
The Butterworth filters produced by scipy.signal are (as far as I'm aware) unity gain, and the plot of the filter's frequency response supports that. I designed the filter using butter() and filtered my dataset using:
data.l = lfilter(b, a, data.l)
where data is an object whose attribute l is the list of PCM data, and b and a are the coefficients produced by butter().
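For context, here is a stripped-down sketch of the kind of pipeline I'm describing (the order, cutoff, and stand-in data below are placeholders, not my actual values):

import numpy as np
from scipy.signal import butter, lfilter

# Placeholder design -- my actual order and cutoff differ
order = 4
cutoff = 0.25                          # normalised to the Nyquist frequency
b, a = butter(order, cutoff, btype='low')

# Stand-in for the int16 PCM list pulled out of the .wav file
pcm = np.random.randint(-32768, 32768, size=44100).astype(np.int16)

# lfilter promotes the input to float64, so the output is no longer int16
filtered = lfilter(b, a, pcm)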
However, regardless of the PCM data that I send in, the filter appears to be applying non-unity gain.
For example, filtering full-bandwidth pink noise with max(data.l) = 32,767 and min(data.l) = -32,768 before filtering (as expected for a 16-bit PCM signal) returns a signal with approximately 5% increased gain in the passband, i.e. max(data.l) = 34,319.0057 and min(data.l) = -37,593.
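The effect is easy to reproduce with any signal containing sharp transients; here is a rough stand-in using a full-scale square wave instead of my pink noise source:

import numpy as np
from scipy.signal import butter, lfilter

b, a = butter(4, 0.25)                 # placeholder design, not my actual filter

# Full-scale square wave: plenty of broadband transients for the filter to ring on
x = 32767.0 * np.sign(np.sin(2 * np.pi * np.arange(44100) / 200.0))

y = lfilter(b, a, x)
print("input  peaks:", x.min(), x.max())
print("output peaks:", y.min(), y.max())   # ringing pushes the peaks past +/-32767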
The filter appears to be filtering the signal correctly apart from the gain: if I save this PCM data back into a .wav file and compare a spectrogram of the result to the original signal, the frequency response is exactly as expected from my test filters. It seems to be functioning perfectly, except for the odd increase in gain.
Obviously I can just rescale this output down to fit my 16-bit PCM dataset; however, I am writing this as part of a wider set of signal-processing modules that are designed to be flexible and eventually include non-unity-gain filters. For this reason I want to work out why this gain is being applied, so I can potentially fix the issue rather than arbitrarily rescaling the output of my butter() filter.
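For reference, the kind of interim rescaling I mean is something like this (scale_to_int16 is just an illustrative helper, not part of my modules):

import numpy as np

def scale_to_int16(x):
    # Illustrative workaround: scale down so the peak fits int16, then requantise.
    x = np.asarray(x, dtype=np.float64)
    peak = np.abs(x).max()
    if peak > 32767.0:
        x = x * (32767.0 / peak)
    return np.round(x).astype(np.int16)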
Does anyone with experience with scipy.signal have an idea as to why this may be the case?
Upvotes: 0
Views: 2448
Reputation: 697
Concerning your second question: if you remove the overshoots, you'll either cause distortion (clipping) or end up with a different frequency response, since the impulse/step response and the frequency response are tied together by the Laplace transform.
By filtering you change your samples, so the concept of "preserving signal level" is questionable in my opinion. The level of a stationary (e.g. sinusoidal) signal in the passband should remain more or less the same, since a Butterworth filter has no passband ripple, but components in the stop band (transients, i.e. high frequencies) are of course changed.
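A quick check of that passband statement (the design and tone frequency below are arbitrary):

import numpy as np
from scipy.signal import butter, lfilter

b, a = butter(4, 0.3)                      # arbitrary low-pass design
n = np.arange(20000)
tone = np.sin(2 * np.pi * 0.02 * n)        # sinusoid well inside the passband

y = lfilter(b, a, tone)
print(np.abs(y[5000:]).max())              # roughly 1.0 once the start-up transient has decayed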
If you want to avoid the rescaling, you could try a filter with Bessel characteristics, which has essentially no overshoot, at the cost of a gentler slope between the pass band and the stop band.
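A small sketch of that comparison (order and cutoff chosen arbitrarily):

import numpy as np
from scipy.signal import butter, bessel, lfilter

b_but, a_but = butter(4, 0.2)
b_bes, a_bes = bessel(4, 0.2)

step = np.ones(100)
print(lfilter(b_but, a_but, step).max())   # noticeably above 1: the Butterworth step overshoots
print(lfilter(b_bes, a_bes, step).max())   # very close to 1: the Bessel step barely overshoots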
Upvotes: 0
Reputation: 114946
It is normal for a discrete-time Butterworth filter to have ringing transients, and these can overshoot the bounds of the input. Take a look at the step response of your filter (as opposed to the frequency response, which is a steady-state calculation).
For example, the following code (an IPython session with pylab enabled, so np and plot are already available)
In [638]: from scipy.signal import butter, lfilter, freqz
In [639]: b, a = butter(3, 0.2)
In [640]: step_response = lfilter(b, a, np.ones(50))
In [641]: plot(step_response)
Out[641]: [<matplotlib.lines.Line2D at 0x10ecb2850>]
generates the plot
Note the overshoot.
The frequency response shows the expected gains for a (discrete time) Butterworth filter.
In [642]: w, h = freqz(b, a, worN=1000)
In [643]: plot(w/np.pi, np.abs(h))
Out[643]: [<matplotlib.lines.Line2D at 0x10f7a90d0>]
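If you want a numeric check rather than a plot, you can confirm there is no gain above unity anywhere in the band (as a standalone snippet rather than continuing the session above):

import numpy as np
from scipy.signal import butter, freqz

b, a = butter(3, 0.2)
w, h = freqz(b, a, worN=1000)
print(np.abs(h).max())   # 1.0 to within floating point: unity gain at DC, less everywhere else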
See also: http://www.dspguide.com/ch20/3.htm
Upvotes: 2