Chuck D

Reputation: 43

AudioUnitRender error kAudioUnitErr_CannotDoInCurrentContext on iPhone 14 only

We have a communications application that has been on the iOS platform for over 8 years now, and recently we have run into an issue on the new iPhone 14.

We are using the audio session category AVAudioSessionCategoryPlayAndRecord with AVAudioSessionModeVoiceChat. We are also using the kAudioUnitSubType_VoiceProcessingIO component.

Part of the CoreAudio setup sets the callback size as follows:

NSTimeInterval desiredBufferDuration = .02;

BOOL prefResult = [session setPreferredIOBufferDuration:desiredBufferDuration error:&nsError];

Then when asking for the buffer duration back with

NSTimeInterval actualBufferDuration = [session IOBufferDuration];

we get the expected 0.0213333, which is 1024 samples at 48 kHz.
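As a quick sanity check on that number (a sketch, not from the original post; `bufferDuration` is just an illustrative helper, not a CoreAudio call):

```c
/* Frames per callback divided by the sample rate gives the buffer
 * duration that AVAudioSession reports back. */
static double bufferDuration(unsigned frames, double sampleRateHz)
{
    return frames / sampleRateHz;
}

/* bufferDuration(1024, 48000.0) ~= 0.0213333 -- the value IOBufferDuration
 * returns after a request of 0.02 s, since the session snaps the request
 * to a supported hardware buffer size. */
```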

In the audio callback, we have ALWAYS received 1024 samples. In our callback, we simply log the number of samples supplied as follows:

static OSStatus inputAudioCallback (void *inRefCon,
                                    AudioUnitRenderActionFlags *ioActionFlags,
                                    const AudioTimeStamp *inTimeStamp,
                                    UInt32 inBusNumber,
                                    UInt32 inNumberFrames,
                                    AudioBufferList *ioData)
{
    NSLog (@"A %u", (unsigned int)inNumberFrames);
    return noErr;
}

The code is simplified, but you get the idea.

On any hardware device other than iPhone 14, we get 1024 every time from this log statement.

On the iPhone 14, we get the following sequence:

A 960 (6 times)
A 1440
A 960 (7 times)
A 1440

And it repeats (at 48 kHz, 960 frames is 20 ms and 1440 frames is 30 ms). This alone isn't really causing any issues for playback; HOWEVER, we also pull in the microphone during the callback. Here's the simplified code in the audio callback:

renderErr = AudioUnitRender(ioUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, myIOData);

Simple call, but quite often at the 960/1440 transition AudioUnitRender returns kAudioUnitErr_CannotDoInCurrentContext for both the last 960-frame callback and the 1440-frame callback.

This results in lost microphone data, causing popping/glitching in the audio.
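One way to at least mask the dropout while the underlying cause is unresolved is to substitute silence for the frames whose render call failed, so the downstream consumer sees a continuous (if briefly silent) stream instead of a discontinuity. A minimal sketch, assuming a mono float capture buffer; `captureWithSilenceFallback` and the `OSStatus` stand-ins are illustrative, not CoreAudio API:

```c
#include <stdint.h>
#include <string.h>

typedef int32_t OSStatus;   /* stand-in for the CoreAudio type */
enum { noErr = 0 };

/* If AudioUnitRender failed, zero-fill the capture buffer so the
 * consumer gets silence instead of a dropped chunk of timeline. */
static OSStatus captureWithSilenceFallback(OSStatus renderStatus,
                                           float *buf, uint32_t numFrames)
{
    if (renderStatus != noErr)
        memset(buf, 0, (size_t)numFrames * sizeof(float));
    return renderStatus;
}
```

This doesn't recover the lost microphone data, but inserting silence preserves timing, which tends to sound like a brief mute rather than a pop.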

If we switch to using the kAudioUnitSubType_RemoteIO subtype, we reliably get 1024 samples per callback and the AudioUnitRender function works correctly every time. Problem is we don't get echo cancellation, so using the device hand-held is worthless in this mode.

So the question is: has something severely changed on iPhone 14 in how AudioUnitRender behaves when called from the audio callback of a kAudioUnitSubType_VoiceProcessingIO unit? This is most definitely not an iOS 16 bug, since there are no issues on iPhone 13 or earlier models that support iOS 16.

The fact that we're not getting 1024 samples at a time tells us something is really wrong, but this code has worked correctly for years and only misbehaves on iPhone 14.

Upvotes: 3

Views: 758

Answers (1)

vgauth

Reputation: 21

We were experiencing the same issue on our end. We believe it is an Apple issue since, as you said, nothing has changed in this API that we know of. It should be a valid way of interacting with the kAudioUnitSubType_VoiceProcessingIO Audio Unit.

As a workaround, we decided to try registering another callback for capture using the kAudioOutputUnitProperty_SetInputCallback property alongside the one we set with kAudioUnitProperty_SetRenderCallback (which would be handling both render and capture before).

AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = captureCallback;
callbackStruct.inputProcRefCon = this;
OSStatus result = AudioUnitSetProperty(audio_unit,
                                       kAudioOutputUnitProperty_SetInputCallback,
                                       kAudioUnitScope_Global,
                                       1, // input bus
                                       &callbackStruct,
                                       sizeof(callbackStruct));

Then, captureCallback calls AudioUnitRender and copies the data into our buffers:

OSStatus IosAudioUnit::captureCallback(AudioUnitRenderActionFlags* ioActionFlags, const AudioTimeStamp* inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList* /*ioData*/)
{
    // 2 interleaved float channels, matching the bufferList set up in the constructor
    const UInt32 currBuffSize = inNumberFrames * 2 * sizeof(float);
    if (currBuffSize > MAX_BUF_SIZE) {
        printf("ERROR: currBuffSize %u > MAX_BUF_SIZE %d\n",
               (unsigned)currBuffSize, MAX_BUF_SIZE);
        return kAudio_ParamError;
    }
    bufferList.mBuffers[0].mDataByteSize = currBuffSize;
    OSStatus result = AudioUnitRender(audio_unit, ioActionFlags, inTimeStamp,
                                      inBusNumber, inNumberFrames, &bufferList);
    // Copy the data in bufferList into our buffers
    // ...
    return result;
}

Here's how we set up the bufferList in our IosAudioUnit constructor:

#define MAX_BUF_SIZE 23040 // 60 ms of stereo float frames, in bytes, at 48000 Hz (2880 frames x 2 ch x 4 bytes)

IosAudioUnit::IosAudioUnit()
{
    // bufferList is a member variable, reused on every capture callback
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mData = malloc(MAX_BUF_SIZE);
    bufferList.mBuffers[0].mDataByteSize = MAX_BUF_SIZE;
    bufferList.mBuffers[0].mNumberChannels = 2;
}

Both the render callback and the new capture callback are still called with frame counts of 960 and 1440 on iPhone 14, but the AudioUnit seems to handle it better, and there are no more pops/glitches in the audio!

Hope this helps!

Upvotes: 2
