Reputation: 702
This is the function we are using to set up the AudioStreamBasicDescription (ASBD). The entire class is heavily influenced by Apple's SpeakHere example application.
This is for recording on an iOS device (specifically an iPad Air, also tested on an iPhone 6 Plus).
When starting an AudioQueue buffer for recording I get an 'AudioConverterNew returned -50' error in the device logs. Apparently -50 just means one of the parameters is invalid, without saying which one? I'm guessing there is something wrong with my ASBD.
- (void)setupAudioFormat
{
    // set up the AudioStreamBasicDescription
    memset(&mDataFormat, 0, sizeof(mDataFormat));

    int sampleSize = sizeof(float);

    AVAudioSession *session = [AVAudioSession sharedInstance];
    mDataFormat.mSampleRate = [session sampleRate]; // 44100
    mDataFormat.mChannelsPerFrame = [session inputNumberOfChannels]; // why is this returning 0?
    if (mDataFormat.mChannelsPerFrame <= 0)
    {
        mDataFormat.mChannelsPerFrame = 1; // mono
    }

    mDataFormat.mFormatID = kAudioFormatLinearPCM;
    mDataFormat.mFormatFlags = kLinearPCMFormatFlagIsFloat | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsNonInterleaved;
    mDataFormat.mBitsPerChannel = 8 * sampleSize;
    mDataFormat.mBytesPerFrame = sampleSize * mDataFormat.mChannelsPerFrame;
    mDataFormat.mBytesPerPacket = mDataFormat.mBytesPerFrame;
    mDataFormat.mFramesPerPacket = 1;
    mDataFormat.mReserved = 0;

    NSLog(@"FORMAT sampleRate=%f, channels=%u, flags=%u, BPC=%u, BPF=%u",
          mDataFormat.mSampleRate,
          (unsigned)mDataFormat.mChannelsPerFrame,
          (unsigned)mDataFormat.mFormatFlags,
          (unsigned)mDataFormat.mBitsPerChannel,
          (unsigned)mDataFormat.mBytesPerFrame);
}
The SpeakHere example application also uses linear PCM, but it is set up as signed 16-bit little-endian rather than float.
Ultimately I need to pass the audio buffer data on to something else as a float array. Am I going to have to change this to use signed 16-bit little-endian and convert the results into float values?
Upvotes: 0
Views: 1628
Reputation: 2211
Your line below does not make sense.
mDataFormat.mChannelsPerFrame = [session inputNumberOfChannels]; // why is this returning 0?
The code [session inputNumberOfChannels] returns the number of input channels currently available to the system. With only the built-in mic it could return 1 or 2 channels; connect a USB preamp and you could have 4, 8, or more input channels available. So it does not make sense to base your format on the number of available input channels. You typically set the recording format to mono or stereo and route the inputs as you wish.
At some point you want to activate your session using:
NSError *error = nil;
BOOL activated = [[AVAudioSession sharedInstance] setActive:YES error:&error];
NSLog(@"AVAudioSession activated %i", activated);
The ASBD you set up above is OK. I tested it in an app of mine and the format itself does not raise errors when used with AudioUnitSetProperty. Post more of the code where you start your AudioQueue.
Upvotes: 1