Michael Dorner

Reputation: 20145

Realtime Audio with AVAudioEngine

Hi. I want to implement a realtime audio application with the new AVAudioEngine in Swift. Does anyone have experience with the new framework? How do realtime applications work with it?

My first idea was to store the (processed) input data into an AVAudioPCMBuffer object and then have an AVAudioPlayerNode play it, as you can see in my demo class:

import AVFoundation

class AudioIO {
    var audioEngine: AVAudioEngine
    var audioInputNode: AVAudioInputNode
    var audioPlayerNode: AVAudioPlayerNode
    var audioMixerNode: AVAudioMixerNode
    var audioBuffer: AVAudioPCMBuffer

    init() {
        audioEngine = AVAudioEngine()
        audioPlayerNode = AVAudioPlayerNode()
        audioMixerNode = audioEngine.mainMixerNode
        audioInputNode = audioEngine.inputNode

        // A small buffer that the player node loops over.
        let frameLength = AVAudioFrameCount(256)
        audioBuffer = AVAudioPCMBuffer(pcmFormat: audioPlayerNode.outputFormat(forBus: 0),
                                       frameCapacity: frameLength)!
        audioBuffer.frameLength = frameLength

        // Tap the input node and copy its samples into the looping buffer.
        audioInputNode.installTap(onBus: 0,
                                  bufferSize: frameLength,
                                  format: audioInputNode.outputFormat(forBus: 0)) { buffer, _ in
            let channels = UnsafeBufferPointer(start: buffer.floatChannelData,
                                               count: Int(buffer.format.channelCount))
            let floats = UnsafeBufferPointer(start: channels[0],
                                             count: Int(buffer.frameLength))

            let step = Int(self.audioMixerNode.outputFormat(forBus: 0).channelCount)
            for i in stride(from: 0, to: Int(self.audioBuffer.frameLength), by: step) {
                // doing my real time stuff
                self.audioBuffer.floatChannelData![0][i] = floats[i]
            }
        }

        // setup audio engine
        audioEngine.attach(audioPlayerNode)
        audioEngine.connect(audioPlayerNode, to: audioMixerNode,
                            format: audioPlayerNode.outputFormat(forBus: 0))
        try! audioEngine.start()

        // play player and buffer
        audioPlayerNode.play()
        audioPlayerNode.scheduleBuffer(audioBuffer, at: nil, options: .loops, completionHandler: nil)
    }
}

But this is far from real time and not very efficient. Any ideas or experiences? It does not matter whether you prefer Objective-C or Swift; I am grateful for all notes, remarks, comments, solutions, etc.

Upvotes: 18

Views: 23254

Answers (4)

hotpaw2

Reputation: 70693

Apple has taken an official position on real-time coding in Swift. In the 2017 WWDC session on Core Audio, an Apple engineer said not to use either Swift or Objective-C methods inside the real-time audio callback context (implying the use of only C, or perhaps C++ or assembly code).

This is because calls into Swift and Objective-C methods can involve internal memory management operations that do not have bounded latency.

This implies that the proper scheme might be to use Objective-C for your AVAudioEngine controller class, but only the C subset (of Objective-C) inside the tap-on-bus block.
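A minimal Swift sketch of that split, assuming a hypothetical C function process_samples(float *samples, int count) exposed through a bridging header: the engine setup stays in Swift (or Objective-C), while the tap block only grabs the raw sample pointers and hands them to C.

import AVFoundation

// Sketch only: process_samples is a hypothetical C function declared in a
// bridging header, e.g. void process_samples(float *samples, int count);
final class TapController {
    private let engine = AVAudioEngine()

    func start() throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)

        input.installTap(onBus: 0, bufferSize: 512, format: format) { buffer, _ in
            guard let channelData = buffer.floatChannelData else { return }
            // Hand the raw sample pointer straight to C; avoid allocating
            // Swift arrays or calling Objective-C/Swift methods in here.
            process_samples(channelData[0], Int32(buffer.frameLength))
        }

        try engine.start()
    }
}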

Upvotes: 5

Lloyd Rochester

Reputation: 719

According to Apple's WWDC 2017 session "What's New in Core Audio", at approximately 19:20, the engineer says that using Swift is "not safe" from a real-time context. Quoted from the transcript:

Now, because the rendering of the Engine happens from a real-time context, you will not be able to use the render offline Objective-C or Swift method that we saw in the demo.

And that is because, it is not safe to use Objective-C or Swift runtime from a real-time context. So, instead, the engine itself provides you a render block that you can search and cache, and then later use this render block to render the engine from the real-time context. The next thing is -- to do, is to set up your input node so that you can provide your input data to the Engine. And here, you specify the format of the input that you will provide, and this can be a different format than the output. And you also provide a block which the Engine will call, whenever it needs the input data. And when this block gets called, the Engine will let you know how many number of input frames it actually needs.
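For what it's worth, here is a rough Swift sketch of the setup being described (my reading of the transcript, not Apple's sample code): enable realtime manual rendering, give the input node a block that supplies input data on demand, and cache the engine's manualRenderingBlock for use from the real-time context.

import AVFoundation

func makeManualRenderingEngine() throws -> (AVAudioEngine, AVAudioEngineManualRenderingBlock) {
    let engine = AVAudioEngine()
    let format = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 2)!
    let maxFrames: AVAudioFrameCount = 512

    // Put the engine into realtime manual rendering mode.
    try engine.enableManualRenderingMode(.realtime, format: format, maximumFrameCount: maxFrames)

    // The engine calls this block whenever it needs input data and passes in
    // the number of frames it wants; return a pointer to an AudioBufferList.
    _ = engine.inputNode.setManualRenderingInputPCMFormat(format) { frameCount in
        // Fill and return your own AudioBufferList here (omitted in this sketch).
        return nil
    }

    try engine.start()

    // Cache the render block once; call this block (rather than the
    // Objective-C/Swift render methods) from the real-time context.
    return (engine, engine.manualRenderingBlock)
}

The cached block is then what you call from the real-time context, instead of the renderOffline(_:to:) style methods mentioned in the quote.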

Upvotes: 2

hungrxyz

Reputation: 854

I think this does what you want: https://github.com/arielelkin/SwiftyAudio

Comment out the distortions and you have a clean loop.
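For context, a clean input-to-output loop in AVAudioEngine can be as small as the following sketch (this is not the repository's code, just a minimal pass-through using the default formats):

import AVFoundation

// Minimal pass-through: microphone input routed straight to the output,
// with no effect nodes in between.
let engine = AVAudioEngine()
let input = engine.inputNode
engine.connect(input, to: engine.mainMixerNode, format: input.outputFormat(forBus: 0))
try! engine.start()   // sketch only; handle the error properly in real code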

Upvotes: 7

Jason McClinsey

Reputation: 446

I've been experimenting with AVAudioEngine in both Objective-C and Swift. In the Objective-C version of my engine, all audio processing is done purely in C (by caching the raw C sample pointers available through AVAudioPCMBuffer, and operating on the data with only C code). The performance is impressive.

Out of curiosity, I ported this engine to Swift. With tasks like playing an audio file linearly, or generating tones via FM synthesis, the performance is quite good, but as soon as arrays are involved (e.g. with granular synthesis, where sections of audio are played back and manipulated in a non-linear fashion), there is a significant performance hit. Even with the best optimization, CPU usage is 30-40% greater than with the Objective-C/C version. I'm new to Swift, so perhaps there are other optimizations of which I am ignorant, but as far as I can tell, C/C++ are still the best choice for realtime audio.

Also look at The Amazing Audio Engine. I'm considering this, as well as direct use of the older C API.
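To make the pointer-caching idea concrete in Swift (a sketch of the concept, not the Objective-C engine described above): take the raw float pointers from the AVAudioPCMBuffer once and operate on the samples in place, rather than copying them into Swift arrays.

import AVFoundation

// Sketch: work on the buffer's raw float pointers directly instead of
// building intermediate Swift arrays.
func applyGain(_ gain: Float, to buffer: AVAudioPCMBuffer) {
    guard let channels = buffer.floatChannelData else { return }
    let frameCount = Int(buffer.frameLength)

    for channel in 0..<Int(buffer.format.channelCount) {
        let samples = channels[channel]          // cached raw C pointer
        for frame in 0..<frameCount {
            samples[frame] *= gain               // operate on the data in place
        }
    }
}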

If you need to process live audio, then AVAudioEngine may not be for you. See my answer to this question: I want to call 20 times per second the installTapOnBus:bufferSize:format:block:

Upvotes: 12
