lollercoaster

Reputation: 16503

computing spectrograms of wav files & recorded sound (normalizing for volume)

I want to compare recorded audio with audio read from disk in a consistent way, but I'm running into problems with volume normalization (without it, the amplitudes of the spectrograms differ).

I have also never worked with signals, FFTs, or the WAV format before, so this is new, uncharted territory for me. I retrieve channels as lists of signed 16-bit ints sampled at 44100 Hz from both

  1. on disk .wav files
  2. recorded music playing from my laptop

and then I step through each channel with a window of size 2^k and a certain amount of overlap, processing each window like so:

# calculate window variables
window_step_size = int(self.window_size * (1.0 - self.window_overlap_ratio)) + 1
last_frame = nframes - window_step_size # nframes is total number of frames from audio source
num_windows, i = 0, 0 # calculate number of windows
while i <= last_frame: 
    num_windows += 1
    i += window_step_size

# allocate memory and initialize counter
wi = 0 # index
nfft = 2 ** self.nextpowof2(self.window_size) # size of FFT in 2^k
fft2D = np.zeros((nfft // 2 + 1, num_windows), dtype='c16') # 2d array for storing results (integer division so the shape is an int)

# for each window
count = 0
times = np.zeros((1, num_windows)) # num_windows was calculated

while wi <= last_frame:

    # channel_samples is simply list of signed ints
    window_samples = channel_samples[ wi : (wi + self.window_size)]
    window_samples = np.hamming(len(window_samples)) * window_samples 

    # calculate and reformat [[[[ THIS IS WHERE I'M UNSURE ]]]]
    fft = 2 * np.fft.rfft(window_samples, n=nfft) / nfft
    fft[0] = 0 # the DC and Nyquist bins are purely real and apparently should not be used
    fft[nfft // 2] = 0
    fft = np.sqrt(np.square(fft) / np.mean(fft)) # use RMS of data
    fft2D[:, count] = 10 * np.log10(np.absolute(fft))

    # seconds per frame * frame index = seconds (start time of this window)
    times[0, count] = self.dt * wi

    wi += window_step_size
    count += 1

# remove NaNs, infs
whereAreNaNs = np.isnan(fft2D)
fft2D[whereAreNaNs] = 0
whereAreInfs = np.isinf(fft2D)
fft2D[whereAreInfs] = 0

# find the spectrogram peaks
fft2D = fft2D.astype(np.float32)

# the get_2D_peaks() method discretizes the fft2D periodogram array and then
# finds peaks and filters out those peaks below the threshold supplied
# 
# the `amp_xxxx` variables are used for discretizing amplitude and the 
# times array above is used to discretize the time into buckets
local_maxima = self.get_2D_peaks(fft2D, self.amp_threshold, self.amp_max, self.amp_min, self.amp_step_size, times, self.dt)

In particular, the crazy stuff (to me at least) happens on the line marked with my comment [[[[ THIS IS WHERE I'M UNSURE ]]]].
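
For what it's worth, I believe something roughly equivalent can be computed with scipy.signal.spectrogram (assuming scipy is available; its scaling conventions likely differ from my hand-rolled loop, so I treat it only as a sanity check rather than a drop-in replacement):

# sanity-check sketch against scipy's built-in spectrogram
from scipy.signal import spectrogram

freqs, seg_times, Sxx = spectrogram(
    np.asarray(channel_samples, dtype=np.float64),
    fs=44100,                                            # my sample rate
    window='hamming',
    nperseg=self.window_size,
    noverlap=int(self.window_size * self.window_overlap_ratio),
    nfft=nfft,
)
Sxx_db = 10 * np.log10(Sxx + 1e-12)                      # power in dB; epsilon avoids log(0)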

Can anyone point me in the right direction or help me to generate this audio spectrogram while normalizing for volume correctly?

Upvotes: 2

Views: 1708

Answers (1)

ederwander

Reputation: 3478

A quick look tells me that you forgot to apply a window; one is necessary to calculate your spectrogram.

You need to apply a window (Hamming, Hann) to your window_samples:

np.hamming(len(window_samples)) * window_samples

Then you can calculate the rfft.
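
Putting the two steps together, a minimal sketch (reusing the variable names from your question):

# sketch: window the samples, then take the one-sided FFT
windowed = np.hamming(len(window_samples)) * window_samples
spectrum = np.fft.rfft(windowed, n=nfft)
mag_db = 20 * np.log10(np.abs(spectrum) + 1e-12) # epsilon avoids log(0)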

Edit:

import matplotlib.pyplot as plt

# calc magnitude from FFT
fftData = np.fft.fft(windowed)
# get magnitude (linear scale) of the first half of the values
Mag = np.abs(fftData[:Chunk // 2])
# if you want log scale: R = 20 * np.log10(Mag)
plt.plot(Mag)
plt.show()

# calc RMS from FFT
RMS = np.sqrt(np.sum(np.abs(np.fft.fft(data)) ** 2 / len(data)) / (len(data) / 2))

RMStoDb = 20 * np.log10(RMS)
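
As a sanity check, Parseval's theorem ties the two domains together: with the full two-sided FFT, the time-domain RMS can be recovered like this (a quick sketch on a random test signal):

import numpy as np

data = np.random.randn(1024)                            # any real-valued test signal
rms_time = np.sqrt(np.mean(data ** 2))                  # RMS in the time domain
rms_freq = np.sqrt(np.sum(np.abs(np.fft.fft(data)) ** 2)) / len(data) # via Parseval
# rms_time and rms_freq agree to floating-point precision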

PS: If you want to calculate the RMS from the FFT, you can't use a window (Hann, Hamming); this line makes no sense:

fft = np.sqrt(np.square(fft) / np.mean(fft)) # use RMS of data

A simple data normalization can be done for each window:

window_samples = np.asarray(channel_samples[wi : wi + self.window_size], dtype=np.float64)

# framMax = np.max(window_samples)
framMean = np.mean(window_samples)

Normalized = window_samples / framMean
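
Note that the mean of a zero-centered audio frame can be very close to zero, so a common variant is to normalize each window by its RMS instead (a sketch, same names as above):

# sketch: normalize each window to unit RMS instead of dividing by its mean
rms = np.sqrt(np.mean(np.square(window_samples)))
if rms > 0:
    Normalized = window_samples / rms # each window now has unit RMS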

Upvotes: 1
