Reputation: 400
I would like to modify the incoming audio signal in real time and send it to the iOS device's speakers. I've read that AVAudioEngine can be used for such tasks, but I can't find documentation or examples for what I'd like to achieve.
For testing purposes I've done:
audioEngine = AVAudioEngine()

// Route the hardware input through a reverb and out to the speaker.
let unitEffect = AVAudioUnitReverb()
unitEffect.wetDryMix = 50

audioEngine.attach(unitEffect)
audioEngine.connect(audioEngine.inputNode, to: unitEffect, format: nil)
audioEngine.connect(unitEffect, to: audioEngine.outputNode, format: nil)

audioEngine.prepare()
and when a button is pressed, I just call:
do {
    try audioEngine.start()
} catch {
    print(error)
}
or audioEngine.stop().
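For completeness: before starting the engine on a device, I also activate the shared AVAudioSession for simultaneous recording and playback, roughly like this (the exact category and options may differ per app):

import AVFoundation

// Configure the shared audio session so the input node delivers
// microphone audio while output goes to the device speaker.
do {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker])
    try session.setActive(true)
} catch {
    print(error)
}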
The reverb effect gets applied to the signal, and I can hear that it works. So now I would like to get rid of the reverb and instead process the raw input samples myself before they reach the speaker.
Upvotes: 3
Views: 1256
Reputation: 936
This GitHub repository does exactly that: https://github.com/dave234/AppleAudioUnit.
Just add BufferedAudioUnit to your project from there and subclass it with your implementation, like this:
AudioProcessingUnit.h:
#import "BufferedAudioUnit.h"
@interface AudioProcessingUnit : BufferedAudioUnit
@end
AudioProcessingUnit.m:
#import "AudioProcessingUnit.h"
@implementation AudioProcessingUnit
-(ProcessEventsBlock)processEventsBlock:(AVAudioFormat *)format {
    return ^(AudioBufferList       *inBuffer,
             AudioBufferList       *outBuffer,
             const AudioTimeStamp  *timestamp,
             AVAudioFrameCount     frameCount,
             const AURenderEvent   *realtimeEventListHead) {

        for (int i = 0; i < inBuffer->mNumberBuffers; i++) {
            float *buffer = inBuffer->mBuffers[i].mData;
            // mDataByteSize is in bytes; divide by the sample size to get
            // the number of float samples instead of iterating over bytes.
            int sampleCount = (int)(inBuffer->mBuffers[i].mDataByteSize / sizeof(float));
            for (int j = 0; j < sampleCount; j++) {
                buffer[j] = buffer[j]; /* process it here */
            }
            // Copy the processed samples to the output buffer.
            memcpy(outBuffer->mBuffers[i].mData,
                   inBuffer->mBuffers[i].mData,
                   inBuffer->mBuffers[i].mDataByteSize);
        }
    };
}
@end
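Note that this block runs on the realtime audio render thread, so keep the per-sample work free of memory allocation, locks, and Objective-C messaging.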
And, in your AVAudioEngine setup:
let audioComponentDescription = AudioComponentDescription(
    componentType: kAudioUnitType_Effect,
    componentSubType: kAudioUnitSubType_VoiceProcessingIO,
    componentManufacturer: 0x0,
    componentFlags: 0,
    componentFlagsMask: 0
)
AUAudioUnit.registerSubclass(
    AudioProcessingUnit.self,
    as: audioComponentDescription,
    name: "AudioProcessingUnit",
    version: 1
)
AVAudioUnit.instantiate(
    with: audioComponentDescription,
    options: []
) { audioUnit, error in
    guard let audioUnit = audioUnit else {
        NSLog("Failed to instantiate audio unit: %@", String(describing: error))
        return
    }

    // Use the hardware input format end to end.
    let formatHardwareInput = self.engine.inputNode.inputFormat(forBus: 0)

    self.engine.attach(audioUnit)
    self.engine.connect(
        self.engine.inputNode,
        to: audioUnit,
        format: formatHardwareInput
    )
    self.engine.connect(
        audioUnit,
        to: self.engine.outputNode,
        format: formatHardwareInput
    )
}
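Once the nodes are connected, start the engine just as the question does:

// Final step (as in the question): start the engine once the
// processing unit is wired between input and output.
engine.prepare()
do {
    try engine.start()
} catch {
    print(error)
}

One caveat from my side, not from the original answer: the component description above reuses Apple's kAudioUnitSubType_VoiceProcessingIO constant as its subtype. Since registerSubclass only uses the description as a key for your subclass, a custom effect can use its own four-character codes instead, for example:

import AudioToolbox

// Hypothetical helper and identifiers; any unique four-char codes
// work for a locally registered AUAudioUnit subclass.
func fourCC(_ string: String) -> FourCharCode {
    var code: FourCharCode = 0
    for byte in string.utf8 { code = (code << 8) | FourCharCode(byte) }
    return code
}

let customDescription = AudioComponentDescription(
    componentType: kAudioUnitType_Effect,
    componentSubType: fourCC("dem0"),      // hypothetical subtype code
    componentManufacturer: fourCC("Myco"), // hypothetical manufacturer code
    componentFlags: 0,
    componentFlagsMask: 0
)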
Upvotes: 3