I'm building an AUGraph, and trying to get audio from the input device via an AVCaptureAudioDataOutput delegate method.
The use of an AVCaptureSession is a consequence of the problem explained here. I successfully managed to build an audio playthrough this way via a CARingBuffer, as explained in the book Learning Core Audio. But getting data out of the CARingBuffer requires providing a valid sample time, and when I stop the AVCaptureSession, the sample times from the AVCaptureOutput and the unit's input callback are no longer in sync. So I'm now trying to use Michael Tyson's TPCircularBuffer, which by all accounts is excellent. But even with the examples I've found, I can't get any audio out of it (or only crackles).
My graph looks like this:
AVCaptureSession -> callback -> AUConverter -> ... -> HALOutput
And here is the code of my AVCaptureOutput delegate method:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);
    const AudioStreamBasicDescription *sampleBufferASBD = CMAudioFormatDescriptionGetStreamBasicDescription(formatDescription);

    if (kAudioFormatLinearPCM != sampleBufferASBD->mFormatID) {
        NSLog(@"Bad format or bogus ASBD!");
        return;
    }

    if ((sampleBufferASBD->mChannelsPerFrame != _audioStreamDescription.mChannelsPerFrame) || (sampleBufferASBD->mSampleRate != _audioStreamDescription.mSampleRate)) {
        _audioStreamDescription = *sampleBufferASBD;
        NSLog(@"sample input format changed");
    }

    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer,
                                                            NULL,
                                                            _currentInputAudioBufferList,
                                                            CAAudioBufferList::CalculateByteSize(_audioStreamDescription.mChannelsPerFrame),
                                                            kCFAllocatorSystemDefault,
                                                            kCFAllocatorSystemDefault,
                                                            kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
                                                            &_blockBufferOut);

    TPCircularBufferProduceBytes(&_circularBuffer, _currentInputAudioBufferList->mBuffers[0].mData, _currentInputAudioBufferList->mBuffers[0].mDataByteSize);
}
And the render callback:
OSStatus PushCurrentInputBufferIntoAudioUnit(void                       *inRefCon,
                                             AudioUnitRenderActionFlags *ioActionFlags,
                                             const AudioTimeStamp       *inTimeStamp,
                                             UInt32                      inBusNumber,
                                             UInt32                      inNumberFrames,
                                             AudioBufferList            *ioData)
{
    ozAVHardwareInput *hardWareInput = (ozAVHardwareInput *)inRefCon;
    TPCircularBuffer circularBuffer = [hardWareInput circularBuffer];

    Float32 *targetBuffer = (Float32 *)ioData->mBuffers[0].mData;

    int32_t availableBytes;
    TPCircularBufferTail(&circularBuffer, &availableBytes);

    UInt32 dataSize = ioData->mBuffers[0].mDataByteSize;

    if (availableBytes > ozAudioDataSizeForSeconds(3.)) {
        // There is too much audio data to play -> clear buffer & mute output
        TPCircularBufferClear(&circularBuffer);
        for (UInt32 i = 0; i < ioData->mNumberBuffers; i++)
            memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
    } else if (availableBytes > ozAudioDataSizeForSeconds(0.5)) {
        // SHOULD PLAY
        Float32 *cbuffer = (Float32 *)TPCircularBufferTail(&circularBuffer, &availableBytes);
        int32_t min = MIN(dataSize, availableBytes);
        memcpy(targetBuffer, cbuffer, min);
        TPCircularBufferConsume(&circularBuffer, min);
        ioData->mBuffers[0].mDataByteSize = min;
    } else {
        // No data to play -> mute output
        for (UInt32 i = 0; i < ioData->mNumberBuffers; i++)
            memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
    }
    return noErr;
}
The TPCircularBuffer is fed from the AudioBufferList, but nothing comes out, or only crackles.
What am I doing wrong?
Answer:
An audio unit render callback should always return inNumberFrames of samples. Check how much data your callback is actually returning: in the "SHOULD PLAY" branch you copy only min bytes and shrink mDataByteSize, so whenever the ring buffer holds less than a full render quantum the output unit receives a partial buffer.
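The contract can be sketched in plain C. The `Buffer` struct and `render` function below are hypothetical stand-ins for one entry of an `AudioBufferList` and for the copy step of a render callback; the point is simply that the output is always filled to the full size the unit asked for, zero-padding when the source runs short, and `mDataByteSize` is never reduced:

```c
#include <string.h>
#include <stdint.h>

/* Hypothetical stand-in for one buffer of an AudioBufferList. */
typedef struct {
    uint32_t mDataByteSize;  /* inNumberFrames * sizeof(float), set by the caller */
    float   *mData;
} Buffer;

/* Copy up to availableBytes of source data into the output, then
 * zero-fill the tail so the caller always gets a complete buffer. */
static void render(Buffer *out, const float *src, uint32_t availableBytes)
{
    uint32_t want = out->mDataByteSize;
    uint32_t have = availableBytes < want ? availableBytes : want;
    memcpy(out->mData, src, have);
    memset((uint8_t *)out->mData + have, 0, want - have);
    /* Crucially, mDataByteSize stays equal to what the unit asked for. */
}
```

Applied to the callback above, that means memset-ing the unfilled tail of `ioData->mBuffers[0]` after the `memcpy` instead of assigning `min` to `mDataByteSize`.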