Florian

Reputation: 55

Implementing a high-pass audio filter in Android

I'm trying to implement a high-pass audio filter on the microphone data that I get from the AudioRecord.

The data I get from the microphone is a 16-bit PCM audio byte array. I was trying to use TarsosDSP, which provides an API for high-pass filtering. However, it requires a float array as input, so I converted the byte array into a float array and ran the high-pass filter. To confirm the results I saved the filtered data to a WAV file, but it sounds totally distorted.

public static byte[] highPassFilter(byte[] buffer, WaveHeader waveHeader, float frequency) {

    HighPass highPass = new HighPass(frequency, waveHeader.getSampleRate());

    // 16-bit, signed, little-endian format matching the recording.
    TarsosDSPAudioFormat format = new TarsosDSPAudioFormat(waveHeader.getSampleRate(), waveHeader.getBitsPerSample(), waveHeader.getChannels(), true, false);
    AudioEvent audioEvent = new AudioEvent(format);

    // Convert the raw PCM bytes to floats and run the filter.
    float[] f_buffer = bytesToFloats(buffer);
    audioEvent.setFloatBuffer(f_buffer);

    highPass.process(audioEvent);
    buffer = audioEvent.getByteBuffer();

    // Wrap the filtered PCM data in a WAV container to listen to the result.
    byte[] data = PCMtoWav(buffer, waveHeader.getSampleRate(), waveHeader.getChannels(), waveHeader.getBitsPerSample());
    writeWavFile(data);

    return buffer;
}

public static float[] bytesToFloats(byte[] bytes) {
    float[] floats = new float[bytes.length / 2];
    for(int i=0; i < bytes.length; i+=2) {
        floats[i/2] = bytes[i] | (bytes[i+1] < 128 ? (bytes[i+1] << 8) : ((bytes[i+1] - 256) << 8));
    }
    return floats;
}

The data in the waveHeader is: sample rate = 11025, bits per sample = 16, channels = 1.

My best guess is that the bytesToFloats conversion is wrong. To verify this, I just set the float buffer of the audioEvent with audioEvent.setFloatBuffer and then retrieved it with audioEvent.getByteBuffer, which also resulted in a totally distorted audio file.
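For reference, that round-trip check inside highPassFilter would look roughly like this (a sketch reusing the format, bytesToFloats, PCMtoWav and writeWavFile from the code above, with the filtering step left out):

// Round-trip sanity check (sketch): convert bytes -> floats -> bytes
// with no filtering in between. If the conversion were correct, the
// result should sound identical to the original recording.
AudioEvent testEvent = new AudioEvent(format);
testEvent.setFloatBuffer(bytesToFloats(buffer));
byte[] roundTripped = testEvent.getByteBuffer();
writeWavFile(PCMtoWav(roundTripped, waveHeader.getSampleRate(),
        waveHeader.getChannels(), waveHeader.getBitsPerSample()));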

The byte buffer is read from the audioRecord:

audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, 11025, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, 220500);
....
buffer = new byte[frameByteSize];
audioRecord.read(buffer, 0, frameByteSize);

Does anybody have any idea how to fix this, or suggestions for different high-pass filters that I could use on a byte array in Android?

Update: I figured it out. This is my updated function to convert from bytes to floats:

public static float[] bytesToFloats(byte[] bytes) {
    float[] floats = new float[bytes.length / 2];
    short[] shorts = new short[bytes.length / 2];
    // Interpret the byte array as little-endian 16-bit samples.
    ByteBuffer.wrap(bytes).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(shorts);
    for (int i = 0; i < shorts.length; i++) {
        // Scale from the signed 16-bit range to -1.0..1.0.
        floats[i] = shorts[i] / 32768f;
    }
    return floats;
}
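If you also need to go the other way (for example, to feed the filtered floats into your own WAV writer instead of relying on audioEvent.getByteBuffer()), a hypothetical inverse helper could look like this. floatsToBytes is not part of the question or of TarsosDSP, just a sketch:

// Hypothetical inverse of bytesToFloats: packs floats in [-1.0, 1.0]
// back into little-endian 16-bit PCM bytes.
public static byte[] floatsToBytes(float[] floats) {
    ByteBuffer bb = ByteBuffer.allocate(floats.length * 2).order(ByteOrder.LITTLE_ENDIAN);
    for (float f : floats) {
        float clamped = Math.max(-1f, Math.min(1f, f));  // clamp so clipping does not wrap around
        bb.putShort((short) (clamped * 32767f));
    }
    return bb.array();
}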

Upvotes: 2

Views: 1338

Answers (2)

jaket

Reputation: 9341

You need to convert each pair of bytes into a signed short and then scale it to a float in the range of -1.0 to 1.0.

One of the following lines, depending on the endianness of the data, will convert to a signed 16-bit value.

short shortSample = (short) ((bytes[i] & 0xFF) | (bytes[i+1] << 8));  // little-endian
short shortSample = (short) ((bytes[i] << 8) | (bytes[i+1] & 0xFF));  // big-endian

And then scale to float:

float sample = shortSample / 32768f;
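Putting both steps together over a whole little-endian buffer might look something like this (a sketch; the & 0xFF mask keeps the low byte from being sign-extended):

// Sketch: convert a little-endian 16-bit PCM byte buffer to floats in [-1.0, 1.0].
float[] floats = new float[bytes.length / 2];
for (int i = 0; i < bytes.length; i += 2) {
    short shortSample = (short) ((bytes[i] & 0xFF) | (bytes[i + 1] << 8));
    floats[i / 2] = shortSample / 32768f;
}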

Upvotes: 1

Ehz

Reputation: 2067

Do the two-byte samples represent float values? They could be signed shorts within the range of -32,768 to 32,767. Also, for floating-point representation of samples, values within the range of -1.0 to 1.0 are common.

I would try:

short sample = (short) (bytes[i] | (bytes[i+1] < 128 ? (bytes[i+1] << 8) : ((bytes[i+1] - 256) << 8)));
floats[i/2] = (float) sample / 32768f;

Upvotes: 1
