Reputation: 44
I am trying to detect the chord played by a single instrument. How can I find it?
private AnalyzedSound getFrequency() {
    // audioData holds the recorded audio samples.
    elementsRead = audioData.getElements(audioDataAnalyzis, 0, audioDataSize);
    // Mean absolute amplitude, used as a loudness estimate.
    double loudness = 0.0;
    for (int i = 0; i < elementsRead; ++i)
        loudness += Math.abs(audioDataAnalyzis[i]);
    loudness /= elementsRead;
    // Check loudness first - it's the root of all evil. loudnessThreshold = 30.0
    if (loudness < loudnessThreshold)
        return new AnalyzedSound(loudness, ReadingType.TOO_QUIET);
    //FFT computation analyzed data is in audioDataAnalyzis
    computeAutocorrelation();
    //chopOffEdges(0.2);
    double maximum = 0;
    for (int i = 1; i < elementsRead; ++i)
        maximum = Math.max(audioDataAnalyzis[i], maximum);
    int lastStart = -1;
    wavelengths = 0;
    boolean passedZero = true;
    // Stop at elementsRead-1 because audioDataAnalyzis[i+1] is read below.
    for (int i = 0; i < elementsRead - 1; ++i) {
        if (audioDataAnalyzis[i] * audioDataAnalyzis[i+1] <= 0) passedZero = true;
        if (passedZero && audioDataAnalyzis[i] > MPM * maximum &&
                audioDataAnalyzis[i] > audioDataAnalyzis[i+1]) {
            if (lastStart != -1)
                wavelength[wavelengths++] = i - lastStart;
            lastStart = i;
            passedZero = false;
            maximum = audioDataAnalyzis[i];
        }
    }
    if (wavelengths < 2)
        return new AnalyzedSound(loudness, ReadingType.ZERO_SAMPLES);
    removeFalseSamples();
    double mean = getMeanWavelength(), stdv = getStDevOnWavelength();
    double calculatedFrequency = (double) AUDIO_SAMPLING_RATE / mean;
    //Log.d(TAG, "MEAN: " + mean + " STDV: " + stdv);
    //Log.d(TAG, "Frequency:" + calculatedFrequency);
    if (stdv >= maxStDevOfMeanFrequency)
        return new AnalyzedSound(loudness, ReadingType.BIG_VARIANCE);
    else if (calculatedFrequency > MaxPossibleFrequency)
        return new AnalyzedSound(loudness, ReadingType.BIG_FREQUENCY);
    else
        return new AnalyzedSound(loudness, calculatedFrequency);
}
This code works fine for real-time recording, but when I try to analyse a saved WAV file I am not able to get the mean value correctly. How can I split the audio data and feed it in as input?
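One way to run the same analysis on a saved file is to decode the WAV to raw PCM samples and chop those into fixed-size frames, then hand each frame to the analysis in turn. Here is a rough sketch using javax.sound.sampled; the class and method names (WavFrames, readWav, splitIntoFrames) and the 4096-sample frame size are my own illustrative choices, not anything from your code:

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import java.io.File;

public class WavFrames {
    // Decode a 16-bit PCM WAV file into a short[] of samples. Mono audio is
    // assumed here; a real implementation should inspect the AudioFormat
    // for channel count and sample size as well.
    static short[] readWav(File file) throws Exception {
        AudioInputStream in = AudioSystem.getAudioInputStream(file);
        AudioFormat fmt = in.getFormat();
        byte[] bytes = in.readAllBytes();
        boolean bigEndian = fmt.isBigEndian();
        short[] samples = new short[bytes.length / 2];
        for (int i = 0; i < samples.length; i++) {
            int a = bytes[2 * i] & 0xff, b = bytes[2 * i + 1] & 0xff;
            samples[i] = (short) (bigEndian ? (a << 8) | b : (b << 8) | a);
        }
        return samples;
    }

    // Split the samples into consecutive frames of frameSize samples,
    // dropping any final partial frame. Each frame can then be fed to the
    // same analysis used for the real-time path.
    static short[][] splitIntoFrames(short[] samples, int frameSize) {
        int count = samples.length / frameSize;
        short[][] frames = new short[count][frameSize];
        for (int f = 0; f < count; f++)
            System.arraycopy(samples, f * frameSize, frames[f], 0, frameSize);
        return frames;
    }

    public static void main(String[] args) {
        // Demonstrate the splitting on synthetic data; with a real file you
        // would call readWav(new File("recording.wav")) first.
        short[] samples = new short[10000];
        short[][] frames = splitIntoFrames(samples, 4096);
        System.out.println(frames.length + " frames of " + frames[0].length);
    }
}
```

The frame size should match whatever block length your real-time recorder delivers, so that the loudness and autocorrelation behave the same on file input as on live input.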
Upvotes: 0
Views: 124
Reputation: 9159
What you appear to be trying to achieve is polyphonic pitch detection. It is a hard and at best an approximate process, and difficult to make work in the general case.
The fundamental problem you will face is superposition of the harmonics of any real instrument sound. At a fundamental level, consonance works because harmonics from the constituent notes of a chord align. This can produce spectral peaks at frequencies which are not the fundamental of any of the notes in the chord.
Additional problems you need to deal with include instruments whose strongest spectral peak is not their fundamental, and percussive sounds at the start of notes, whose spectra look more like noise.
It is likely that a statistical approach is required to unpick superposition of notes, possibly with some a priori knowledge of the spectral characteristics of the instrument in question.
//FFT computation analyzed data is in audioDataAnalyzis
computeAutocorrelation();
I was a bit confused by this comment. Autocorrelation is not the FFT. In general, autocorrelation is considered a poor choice of algorithm for frequency detection on real world audio signals.
For polyphonic pitch detection you may be better off using an FFT approach - however, this is not particularly straightforward either.
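To make the FFT idea concrete: compute a magnitude spectrum of a frame and look for peaks. The sketch below uses a naive O(N²) DFT so it stays self-contained; in practice you would use an FFT library (e.g. JTransforms on Android). The class and method names and the windowless single-peak picking are simplifications of mine - real chord detection would need windowing and multiple-peak/harmonic grouping:

```java
public class SpectrumPeak {
    // Magnitude spectrum via a naive DFT (O(N^2)). Replace with a real FFT
    // for any serious use; the output is the same up to numeric precision.
    static double[] magnitudeSpectrum(double[] x) {
        int n = x.length;
        double[] mag = new double[n / 2];
        for (int k = 0; k < n / 2; k++) {
            double re = 0, im = 0;
            for (int t = 0; t < n; t++) {
                double angle = 2 * Math.PI * k * t / n;
                re += x[t] * Math.cos(angle);
                im -= x[t] * Math.sin(angle);
            }
            mag[k] = Math.hypot(re, im);
        }
        return mag;
    }

    // Frequency (Hz) of the strongest bin above DC. Resolution is limited
    // to sampleRate / n, which is why real detectors interpolate between
    // bins and consider several peaks, not just the biggest one.
    static double peakFrequency(double[] mag, double sampleRate, int n) {
        int best = 1;
        for (int k = 2; k < mag.length; k++)
            if (mag[k] > mag[best]) best = k;
        return best * sampleRate / n;
    }

    public static void main(String[] args) {
        double sampleRate = 8000;
        int n = 1024;
        double[] x = new double[n];
        for (int t = 0; t < n; t++)
            x[t] = Math.sin(2 * Math.PI * 440 * t / sampleRate);
        double[] mag = magnitudeSpectrum(x);
        System.out.println(peakFrequency(mag, sampleRate, n));
    }
}
```

With a 1024-point frame at 8 kHz the bin spacing is about 7.8 Hz, so a 440 Hz sine lands near, not exactly on, a bin centre - one illustration of why the FFT route is "not particularly straightforward either".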
Finally, there is a large body of research into this problem space - usually in the context of audio feature extraction. I suggest having a look at Sonic Visualiser - for which open-source pitch analysis plug-ins exist.
Upvotes: 2