mondaugen

Reputation: 454

Version 3 AudioUnits: minimum frameCount in internalRenderBlock

The example code for creating a version 3 AudioUnit demonstrates how the implementation needs to return a block that performs the render processing. The block both pulls samples from the previous AudioUnit in the chain via pullInputBlock and fills the output buffers with the processed samples. It must also supply its own output buffers if the unit further down the chain did not provide any. Here is an excerpt of code from an AudioUnit subclass:

- (AUInternalRenderBlock)internalRenderBlock {
    /*
        Capture in locals to avoid ObjC member lookups.
    */
    // Specify captured objects are mutable.
    __block FilterDSPKernel *state = &_kernel;
    __block BufferedInputBus *input = &_inputBus;

    return Block_copy(^AUAudioUnitStatus(
             AudioUnitRenderActionFlags *actionFlags,
             const AudioTimeStamp       *timestamp,
             AVAudioFrameCount           frameCount,
             NSInteger                   outputBusNumber,
             AudioBufferList            *outputData,
             const AURenderEvent        *realtimeEventListHead,
             AURenderPullInputBlock      pullInputBlock) {
         ...
    });
}

This is fine if the processing does not require knowing the frameCount before the block is called, but many applications do need the frameCount ahead of time in order to allocate memory, prepare processing parameters, etc.

One way around this would be to accumulate past buffers of output, emitting only frameCount samples on each call to the block, but this only works if there is a known minimum frameCount: the processing must be initialized with a block size at least as large as that minimum frame count. Is there a way to specify or obtain a minimum value for frameCount, or to force it to be a specific value?

The example code is taken from: https://github.com/WildDylan/appleSample/blob/master/AudioUnitV3ExampleABasicAudioUnitExtensionandHostImplementation/FilterDemoFramework/FilterDemo.mm
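To illustrate the accumulation workaround described in the question, here is a minimal sketch, assuming a hypothetical fixed block size kBlockSize and a stand-in processFixedBlock() routine (neither comes from the Apple sample). Output lags input by up to kBlockSize frames:

#include <algorithm>
#include <cstddef>
#include <vector>

static const size_t kBlockSize = 512; // fixed DSP block size (assumption)

// Trivial frame FIFO; capacity must exceed kBlockSize plus the largest frameCount.
struct FrameFifo {
    std::vector<float> data;
    size_t head = 0, count = 0;
    explicit FrameFifo(size_t capacity) : data(capacity) {}
    void write(const float *src, size_t n) {
        for (size_t i = 0; i < n; ++i)
            data[(head + count + i) % data.size()] = src[i];
        count += n;
    }
    void read(float *dst, size_t n) {
        for (size_t i = 0; i < n; ++i)
            dst[i] = data[(head + i) % data.size()];
        head = (head + n) % data.size();
        count -= n;
    }
};

// Stand-in for the real fixed-size DSP routine.
static void processFixedBlock(float *samples, size_t n) {
    for (size_t i = 0; i < n; ++i) samples[i] *= 0.5f; // e.g. attenuate
}

// Called once per render cycle with whatever frameCount the host chose.
static void renderChunked(FrameFifo &inFifo, FrameFifo &outFifo,
                          const float *in, float *out, size_t frameCount) {
    inFifo.write(in, frameCount);
    float scratch[kBlockSize];
    while (inFifo.count >= kBlockSize) {
        inFifo.read(scratch, kBlockSize);
        processFixedBlock(scratch, kBlockSize);
        outFifo.write(scratch, kBlockSize);
    }
    if (outFifo.count >= frameCount)
        outFifo.read(out, frameCount);
    else
        std::fill(out, out + frameCount, 0.0f); // prime with silence at start
}

renderChunked() would be called from inside the render block with whatever frameCount the host chose, after pullInputBlock has filled the input buffer.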

Upvotes: 0

Views: 301

Answers (1)

hotpaw2

Reputation: 70703

Under iOS, an audio unit callback must be able to handle variable frameCounts. You can't force it to be a constant.

Thus, any processing that requires a fixed-size buffer should be done outside the audio unit callback. You can pass data to a processing thread by using a lock-free circular buffer/FIFO or a similar structure that does not require memory management in the callback.
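For example, here is a minimal sketch of a lock-free single-producer/single-consumer FIFO (the class and its names are illustrative, not from any Apple API). The render block pushes and never blocks or allocates; a separate processing thread pops fixed-size chunks. The capacity must be a power of two here:

#include <atomic>
#include <cstddef>
#include <vector>

class SpscFifo {
public:
    explicit SpscFifo(size_t capacityPow2)
        : buf(capacityPow2), mask(capacityPow2 - 1) {}

    // Called from the audio render block (producer). Never blocks.
    bool push(const float *src, size_t n) {
        size_t w = writeIndex.load(std::memory_order_relaxed);
        size_t r = readIndex.load(std::memory_order_acquire);
        if (buf.size() - (w - r) < n) return false; // full: drop, don't wait
        for (size_t i = 0; i < n; ++i) buf[(w + i) & mask] = src[i];
        writeIndex.store(w + n, std::memory_order_release);
        return true;
    }

    // Called from the processing thread (consumer). Never blocks.
    bool pop(float *dst, size_t n) {
        size_t r = readIndex.load(std::memory_order_relaxed);
        size_t w = writeIndex.load(std::memory_order_acquire);
        if (w - r < n) return false; // not enough data yet
        for (size_t i = 0; i < n; ++i) dst[i] = buf[(r + i) & mask];
        readIndex.store(r + n, std::memory_order_release);
        return true;
    }

private:
    std::vector<float> buf;
    size_t mask;
    std::atomic<size_t> writeIndex{0};
    std::atomic<size_t> readIndex{0};
};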

You can suggest that the frameCount be a certain size by setting a preferred buffer duration using the AVAudioSession APIs. But the OS is free to ignore this, depending on other audio needs in the system (power-saving modes, system sounds, etc.). In my experience, the audio driver will only increase your suggested size, never decrease it by more than a couple of samples (which can happen when resampling to a rate that is not a power of two).
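For example, a sketch of requesting a buffer duration via AVAudioSession (the 256-frame target at 44.1 kHz is an arbitrary illustration; the OS treats the request as a hint and may grant a larger buffer):

#import <AVFoundation/AVFoundation.h>

static void suggestBufferSize(double sampleRate, AVAudioFrameCount frames) {
    AVAudioSession *session = [AVAudioSession sharedInstance];
    NSError *error = nil;
    // Request an I/O buffer of `frames` frames, expressed in seconds.
    [session setPreferredIOBufferDuration:(frames / sampleRate) error:&error];
    [session setActive:YES error:&error];
    // Read back what was actually granted; it may differ from the request.
    NSTimeInterval granted = session.IOBufferDuration;
    NSLog(@"requested %u frames, granted ~%.0f frames",
          (unsigned)frames, granted * sampleRate);
}

// Example: suggestBufferSize(44100.0, 256);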

Upvotes: 1
