ignatius

Reputation: 199

How to parse interleaved buffer into distinct multiple channel buffers with PortAudio

hope you can help me :)

I'm trying to get audio data from a multichannel ASIO device with the PortAudio library. Everything is OK: I managed to set the default host API to ASIO and I also managed to select 4 specific channels as inputs. I then get an interleaved audio stream which sounds correct, but I would like to get each channel's data separately.

PortAudio allows non-interleaved recording, but I don't know how to write or modify my RecordCallBack and the multibuffer pointer (one buffer per channel). Sure, I've tried... :(
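Just to show what I mean by the multibuffer pointer: my understanding from the docs (a rough sketch of the idea, not working code) is that if the stream is opened with sampleFormat = paInt16 | paNonInterleaved, the callback's inputBuffer turns into an array of per-channel pointers, something like:

static int recordCallbackNonInterleaved( const void *inputBuffer, void *outputBuffer,
                       unsigned long framesPerBuffer,
                       const PaStreamCallbackTimeInfo* timeInfo,
                       PaStreamCallbackFlags statusFlags,
                       void *userData )
{
    /* with paNonInterleaved, inputBuffer is an array of NUM_CHANNELS_I pointers,
       each pointing to framesPerBuffer samples of a single channel */
    const short * const *channels = (const short * const *) inputBuffer;

    (void) outputBuffer; (void) timeInfo; (void) statusFlags; (void) userData;
    (void) channels; /* ...process channels[0] .. channels[NUM_CHANNELS_I-1] here... */
    return paContinue;
}

but I don't know how to fit this together with the paTestData / frameIndex bookkeeping in the callback below.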

It would be a massive help if someone knows how to deal with this issue.

The original RecordCallBack function is taken from a well-known stereo example (slightly modified to manage 4 channels instead of 2), but it manages a single interleaved buffer:

static int recordCallback( const void *inputBuffer, void *outputBuffer,
                       unsigned long framesPerBuffer,
                       const PaStreamCallbackTimeInfo* timeInfo,
                       PaStreamCallbackFlags statusFlags,
                       void *userData )
{
paTestData *data = (paTestData*)userData;
const short *rptr = (const short*)inputBuffer;
short *wptr = &data->recordedSamples[data->frameIndex * NUM_CHANNELS_I];
long framesToCalc;
long i;
int finished;
unsigned long framesLeft = data->maxFrameIndex - data->frameIndex;

(void) outputBuffer; /* Prevent unused variable warnings. */
(void) timeInfo;
(void) statusFlags;
(void) userData;

if( framesLeft < framesPerBuffer )
{
    framesToCalc = framesLeft;
    finished = paComplete;
}
else
{
    framesToCalc = framesPerBuffer;
    finished = paContinue;
}

if( inputBuffer == NULL )
{
    for( i=0; i<framesToCalc; i++ )
    {
        *wptr++ = SAMPLE_SILENCE;  /* ch1*/
        if( NUM_CHANNELS_I == 4 ){
            *wptr++ = SAMPLE_SILENCE;/* ch2*/
            *wptr++ = SAMPLE_SILENCE;/* ch3*/
            *wptr++ = SAMPLE_SILENCE;}  /* ch4*/
    }
}
else
{
    for( i=0; i<framesToCalc; i++ )
    {
        *wptr++ = *rptr++;  /* ch1*/
        if( NUM_CHANNELS_I == 4 ){ 
            *wptr++ = *rptr++;/* ch2*/
            *wptr++ = *rptr++;/* ch3*/
            *wptr++ = *rptr++;}  /* ch4*/
    }
}
data->frameIndex += framesToCalc;

return finished;
}

The stream is declared as:

PaStream* stream;

And Pa_OpenStream is called:

err = Pa_OpenStream(
          &stream,
          &inputParameters,
          NULL,           /* no output */
          SAMPLE_RATE,
          FRAMES_PER_BUFFER,
          paClipOff,      /* we won't output out of range samples so don't bother clipping them */
          recordCallback,
          &data );

Upvotes: 3

Views: 2310

Answers (2)

ignatius

Reputation: 199

Thanks, Scott, for your help. The solution was right in front of my eyes, and in the end I didn't have to work with sample offsets. I didn't give you enough information about the code, so your approach was excellent, but the code itself provides an easier way to do it:

The data is stored in a structure:

typedef struct
{
int          frameIndex;  /* Index into sample array. */
int          maxFrameIndex;
short       *recordedSamples;
}
paTestData;

I modified it to:

typedef struct
{
int          frameIndex;  /* Index into sample array. */
int          maxFrameIndex;
short       *recordedSamples;  //ch1
short       *recordedSamples2; //ch2
short       *recordedSamples3; //ch3
short       *recordedSamples4; //ch4
}
paTestData;

Then I just had to allocate these variables in memory and modify the recordCallback function as follows:

static int recordCallback( const void *inputBuffer, void *outputBuffer,
                       unsigned long framesPerBuffer,
                       const PaStreamCallbackTimeInfo* timeInfo,
                       PaStreamCallbackFlags statusFlags,
                       void *userData )
{
paTestData *data = (paTestData*)userData;
const short *rptr = (const short*)inputBuffer;

short *wptr  = &data->recordedSamples[data->frameIndex];
short *wptr2 = &data->recordedSamples2[data->frameIndex];
short *wptr3 = &data->recordedSamples3[data->frameIndex];
short *wptr4 = &data->recordedSamples4[data->frameIndex];
long framesToCalc;
long i;
int finished;
unsigned long framesLeft = data->maxFrameIndex - data->frameIndex;

(void) outputBuffer; /* Prevent unused variable warnings. */
(void) timeInfo;
(void) statusFlags;
(void) userData;

if( framesLeft < framesPerBuffer )
{
    framesToCalc = framesLeft;
    finished = paComplete;
}
else
{
    framesToCalc = framesPerBuffer;
    finished = paContinue;
}

if( inputBuffer == NULL )
{
    for( i=0; i<framesToCalc; i++ )
    {
        *wptr++ = SAMPLE_SILENCE;  //ch1
        if( NUM_CHANNELS_I == 4 ){
            *wptr2++ = SAMPLE_SILENCE;//ch2
            *wptr3++ = SAMPLE_SILENCE;//ch3
            *wptr4++ = SAMPLE_SILENCE;}  //ch4
    }
}
else
{
    for( i=0; i<framesToCalc; i++ )
    {
        *wptr++ = *rptr++;  //ch1
        if( NUM_CHANNELS_I == 4 ){ 
            *wptr2++ = *rptr++;//ch2
            *wptr3++ = *rptr++;//ch3
            *wptr4++ = *rptr++;}  //ch4
    }
}
data->frameIndex += framesToCalc;

return finished;
}
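For completeness, the extra buffers are allocated the same way as the original recordedSamples array, except each one only needs maxFrameIndex samples (one channel's worth). Roughly like this (just a sketch; NUM_SECONDS and the exact variable names may differ from your own code, and malloc comes from <stdlib.h>):

int totalFrames = NUM_SECONDS * SAMPLE_RATE;   /* frames per channel */

data.maxFrameIndex = totalFrames;
data.frameIndex = 0;
data.recordedSamples  = (short *) malloc( totalFrames * sizeof(short) ); //ch1
data.recordedSamples2 = (short *) malloc( totalFrames * sizeof(short) ); //ch2
data.recordedSamples3 = (short *) malloc( totalFrames * sizeof(short) ); //ch3
data.recordedSamples4 = (short *) malloc( totalFrames * sizeof(short) ); //ch4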

Hope this can help other people. And thanks again, Scott.

Upvotes: 1

Scott Stensland

Reputation: 28285

Interleaved just means the bytes for each channel follow one after another, as in:

aabbccddeeaabbccddeeaabbccddee (each character represents one byte)

where this input buffer contains two bytes (16 bits) for each of the 5 channels a, b, c, d & e, and it makes 3 repeats across the set of channels, which equates to 3 samples per channel ... so, knowing the input is interleaved, it could be extracted into separate output channel buffers, one per channel. In your code, however, you have just a single output buffer, which as you say is due to the necessary callback signature ... one approach would be to write each output channel into that single output buffer separated by a distinct offset per channel, so the output would be

aaaaaabbbbbbccccccddddddeeeeee 

then, outside the callback, extract each channel using the same per-channel offset.

First you need to obtain the size of the given output buffer in bytes, say X, the number of channels, Y, and the number of bytes per channel per sample, Z. The global channel offset would then be

size_offset = X / (Y * Z) # assure this is an integer
                          # if it's a fraction then there is an error in the assumptions

so when addressing the output buffer, both inside the callback and outside, we use this offset together with knowledge of which channel we are on, W (values 0, 1, 2, 3, ...), and which sample, K :

index_output_buffer = (K + (W * size_offset)) * Z     # 1st byte of sample pair

now use index_output_buffer ... then calculate the follow-on index :

index_output_buffer = (K + (W * size_offset)) * Z + 1 # 2nd byte of sample pair

and use it ... you could put the above two statements for a given sample into a loop, using Z to control the number of iterations, if Z were to vary, but the above assumes samples are two bytes
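Here is a rough C sketch of that idea with illustrative names (not tied to your code); it indexes the buffers as short, i.e. two-byte samples, which folds the Z factor into the type:

/* copy an interleaved 16-bit buffer (framesPerChannel frames x numChannels channels)
   into one planar buffer laid out as [ch0 samples..][ch1 samples..]... so that
   channel W starts at offset W * framesPerChannel (the size_offset above) */
static void deinterleave_to_offsets( const short *interleaved, short *planar,
                                     unsigned long framesPerChannel, int numChannels )
{
    unsigned long k;   /* sample K  */
    int w;             /* channel W */
    for( k = 0; k < framesPerChannel; k++ )
        for( w = 0; w < numChannels; w++ )
            planar[ k + (unsigned long) w * framesPerChannel ] = *interleaved++;
}

outside the callback, channel W / sample K is then read back as planar[ K + W * framesPerChannel ]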

Upvotes: 1
