Reputation: 577
I am using Core Audio (with Swift wrappers) to play back some audio samples (a short stimulus from which to record an impulse response). I am sticking with Core Audio rather than the newer AVFoundation because I require strict timing and also multi-device input, which the newer framework does not cover as of yet (I went through the Apple code-level support request process, and they told me I had to use Core Audio).
I have for now created a very simple sine wave using:
func createSine() -> [Float] {
    // makeArray is my own helper that builds a time vector from `from` to `to` in steps of `increment`
    let timeArray = makeArray(from: ((1.0 / Float(sampleRate)) * ((-0.5) * Float(kernelLength))),
                              to: ((1.0 / Float(sampleRate)) * ((0.5) * Float(kernelLength))),
                              increment: 1.0 / Float(sampleRate))
    var sineArray = [Float](repeating: 0, count: timeArray.count)
    for i in 0..<timeArray.count {
        let x = 2 * Float.pi * 1000 * timeArray[i]
        sineArray[i] = cos(x)
    }
    return sineArray
}
This creates a Float (which I believe to be 32-bit) array for a sine wave at 1000 Hz when played back at the sample rate (in my case 44,100 Hz).
If I write this to a WAV file and play it back, the tone comes out as expected.
However, I actually want to trigger this sound within the app. I have set up my AUGraph and populated it with audio units. I have created an AURenderCallback that is attached to an input of a mixer; every time that input needs samples, this callback function is called.
let genCallback: AURenderCallback = { (
    inRefCon,
    ioActionFlags,
    inTimeStamp,
    inBusNumber,
    frameCount,
    ioData) -> OSStatus in
    let audioObject = unsafeBitCast(inRefCon, to: AudioEngine.self)
    for buffer in UnsafeMutableAudioBufferListPointer(ioData!) {
        let frames = buffer.mData!.assumingMemoryBound(to: Float.self)
        var j = 0
        for i in stride(from: 0, to: Int(frameCount), by: 2) {
            frames[i] = Float(audioObject.Stimulus[j + audioObject.stimulusReadIndex])
            j += 1
        }
        audioObject.stimulusReadIndex += Int(frameCount / 2)
    }
    return noErr
}
where audioObject.Stimulus is my sine array, and audioObject.stimulusReadIndex is simply a counter tracking how much of the array has already been read.
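Stripped of Core Audio, the read-index bookkeeping amounts to something like this (a minimal plain-Swift sketch with hypothetical names, simulating successive callbacks):

```swift
// Simulate successive render callbacks consuming a stimulus buffer in chunks.
let stimulus: [Float] = (0..<16).map { Float($0) }  // stand-in for the sine array
var stimulusReadIndex = 0
let framesPerCallback = 4

var rendered: [Float] = []
while stimulusReadIndex + framesPerCallback <= stimulus.count {
    // Copy the next chunk, as the render callback would into ioData
    rendered.append(contentsOf: stimulus[stimulusReadIndex..<(stimulusReadIndex + framesPerCallback)])
    stimulusReadIndex += framesPerCallback
}
// After all "callbacks", the rendered output equals the original stimulus
print(rendered == stimulus)  // true
```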
Now, this is where I run into trouble. If I start the AUGraph, I hear my sine wave, but with a lot of harmonics (noise) on top. It appears the data is not in the right format.
If I copy each set of frames into another array to verify what is being written, the output matches the input stimulus, so no samples are being dropped.
If I look at my AudioStreamBasicDescription for the mixer unit (as this is the unit invoking the render callback), I have the following:
var audioFormat = AudioStreamBasicDescription()
audioFormat.mSampleRate = 44100.00;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 2;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 4;
audioFormat.mBytesPerFrame = 4;
audioFormat.mReserved = 0;
status = AudioUnitSetProperty(mixerUnit!,
                              kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Input,
                              1,
                              &audioFormat,
                              UInt32(MemoryLayout<AudioStreamBasicDescription>.size));
checkStatus(status: status!);
So this tells me a few things: it expects two channels, and the data is interleaved (since the non-interleaved flag is not set). In my callback function, I stride through the frames by 2 to populate only the first channel with samples. If I start at index 1 instead, the audio is written to, and plays back from, the right-hand channel.
The sample rate is correct; however, the bit depth is 16 (which Float is not), and the kAudioFormatFlagIsSignedInteger flag is set, so the unit is expecting a different sample format.
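For reference, the interleaved layout means a stereo buffer holds frameCount * 2 samples in L R L R … order, so duplicating a mono stimulus into both channels would look something like this (plain-Swift sketch, hypothetical names):

```swift
// Interleaved stereo: samples alternate Left, Right, Left, Right, ...
let frameCount = 4
let mono: [Float] = [0.1, 0.2, 0.3, 0.4]          // one sample per frame
var interleaved = [Float](repeating: 0, count: frameCount * 2)

for frame in 0..<frameCount {
    interleaved[frame * 2]     = mono[frame]      // left channel
    interleaved[frame * 2 + 1] = mono[frame]      // right channel
}
print(interleaved)  // [0.1, 0.1, 0.2, 0.2, 0.3, 0.3, 0.4, 0.4]
```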
So now, I tried converting my Float array to Int16 using:
var intArray = [Int16](repeating: 0, count: sineArray.count)
for i in 0..<sineArray.count {
    intArray[i] = Int16(32767 * sineArray[i])
}
However, this still results in noise, albeit a different noise. If I inspect the array, I can confirm that the results are signed Int16 values falling within the expected bounds.
I cannot see how to present this data in the format Core Audio is expecting. I tried changing the format flag to kAudioFormatFlagIsFloat, but still have no luck.
Upvotes: 1
Views: 965
Reputation: 70693
Given your [Float] data, instead of kAudioFormatFlagIsSignedInteger and 16 bits per channel, you probably want to use kAudioFormatFlagIsFloat and 32 bits per channel (8 bytes per packet and per frame for interleaved stereo).
Note that for all recent iOS devices the native audio format is 32-bit float, not 16-bit int, and the native (hardware?) sample rate is 48000, not 44100.
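For a stereo interleaved 32-bit float stream, the ASBD would look something like the following (a sketch only, not tested against your particular graph; note that the bits/bytes fields must change along with the flag):

```swift
var floatFormat = AudioStreamBasicDescription()
floatFormat.mSampleRate       = 44100.0
floatFormat.mFormatID         = kAudioFormatLinearPCM
floatFormat.mFormatFlags      = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked
floatFormat.mFramesPerPacket  = 1
floatFormat.mChannelsPerFrame = 2
floatFormat.mBitsPerChannel   = 32      // Float is 32-bit
floatFormat.mBytesPerPacket   = 8       // 2 channels * 4 bytes, interleaved
floatFormat.mBytesPerFrame    = 8
floatFormat.mReserved         = 0
```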
Also, note that Apple recommends not using Swift inside the audio callback context (see 2017 or 2018 WWDC sessions on audio), so your Audio Unit render callback should probably call a C function to do all the work (anything touching ioData or inRefCon).
You might also want to check to make sure your array index does not exceed your array bounds.
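A simple guard for that bounds check could look like this (plain-Swift sketch, hypothetical names), clamping the copy size so the read index never runs past the stimulus:

```swift
// Clamp the number of samples copied so the read index never exceeds the array.
let stimulus = [Float](repeating: 1.0, count: 10)
var readIndex = 8
let requested = 4

let available = min(requested, stimulus.count - readIndex)
let chunk = Array(stimulus[readIndex..<(readIndex + available)])  // only 2 samples left
readIndex += available

print(chunk.count)   // 2
print(readIndex)     // 10
```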
Upvotes: 0