Loz

Reputation: 121

How do you configure a realtime positional AudioKit player, similar to SCNAudioSource's 'isPositional' behaviour, in an ARKit scene?

The following works as a positional audio source attached to a node: the perceived position updates based on the listener's position relative to the node to which the audio source is attached:

    let sound = SCNAudioSource(fileNamed: "art.scnassets/scifi.wav")!
    sound.loops = true
    sound.isPositional = true
    sound.load()

Using AudioKit's AK3DPanner and specifying the player's coordinates (as below) correctly places the audio source in the 3D environment, but does not update the perceived position of the source when the listener moves, the way SCNAudioSource.isPositional = true does.


    file = AKAudioFile(fileNamed: "art.scnassets/scifi.wav")!
    player = AKPlayer(audioFile: file)
    player.isLooping = true
    player.buffering = .always
    spatialiser = AK3DPanner(player)
    spatialiser.x = Double(xSlider.value)
    spatialiser.y = Double(ySlider.value)
    spatialiser.z = Double(zSlider.value)
    AudioKit.output = spatialiser


I've tried accessing the SCNAudioSource object and connecting it to the AKPlayer, but with no luck. I've also tried accessing the AVAudioEnvironmentNode, which I assume should be available through the AK3DPanner instance, in order to configure binaural / HRTF output, but I'm not having much luck mixing and matching AudioKit and AVFoundation objects while maintaining the flexibility and functionality of AudioKit.

Any ideas, pointers, or advice would be very much appreciated.

Upvotes: 1

Views: 288

Answers (1)

Loz

It appears that AudioKit's spatial positioning capabilities, accessed via the AK3DPanner node, move the listener relative to a static audio source, rather than moving the audio source relative to the known location of the listener. This proves problematic if you are trying to position static audio sources relative to ARKit's world origin for the listener (as determined by the AVAudioEnvironmentNode) to experience.
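If you do want to stay within AudioKit, one possible (untested) workaround is to recompute the panner coordinates every frame, expressing the static world-space source position relative to the moving ARKit camera. This sketch assumes the `spatialiser` from the question, plus a hypothetical `sourceWorldPosition: SCNVector3` holding the source's fixed world position:

```swift
import ARKit
import SceneKit

// Untested sketch: AK3DPanner positions the source relative to a listener
// fixed at the origin, so convert the static world-space source position
// into the camera's local space each frame and feed it to the panner.
// `sceneView`, `spatialiser` and `sourceWorldPosition` are assumptions
// drawn from (or added to) the question's setup.
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    guard let cameraNode = sceneView.pointOfView else { return }
    // convertPosition(_:from: nil) converts from world space to local space
    let relative = cameraNode.convertPosition(sourceWorldPosition, from: nil)
    spatialiser.x = Double(relative.x)
    spatialiser.y = Double(relative.y)
    spatialiser.z = Double(relative.z)
}
```

SceneKit's camera looks down its local -z axis, which matches the default listener orientation, but you may need to flip an axis depending on how AK3DPanner maps its coordinates.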

However, real-time interaction with spatially positioned, binaurally rendered audio sources can be realised with AVFoundation and the SCNAudioSource object in combination with ARKit. Set the source's 'isPositional' option, then cast the source's player node to AVAudioPlayerNode:

    let audioSource = SCNAudioSource(fileNamed: "file_name")!
    audioSource.loops = true
    audioSource.isPositional = true
    audioSource.load()
    let audioPlayer = SCNAudioPlayer(source: audioSource)
    // SCNAudioPlayer wraps an AVAudioPlayerNode; cast to configure it directly
    let apn = audioPlayer.audioNode as? AVAudioPlayerNode
    apn?.renderingAlgorithm = .HRTFHQ

This method allows you to set a rendering algorithm per source, which can be useful for specifying ambient or directional audio sources within your soundscape, rather than setting it globally via the AVAudioEnvironmentNode's rendering algorithm.

Using this method, you can also access other configuration options made available through AVAudioPlayerNode for individual audio sources.
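For example (a sketch, not part of the original answer), the AVAudio3DMixing protocol that AVAudioPlayerNode adopts exposes further per-source controls:

```swift
// `apn` is the optional AVAudioPlayerNode cast shown above.
apn?.renderingAlgorithm = .HRTFHQ  // per-source binaural rendering
apn?.reverbBlend = 0.3             // how much of this source feeds the environment reverb (0.0 - 1.0)
apn?.obstruction = -6.0            // attenuates the direct path, in decibels
apn?.occlusion = -6.0              // attenuates both the direct and reverb paths, in decibels
```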

Within the ARKit scene, this audioPlayer can be attached to a child SCNNode of a parent node, which in turn is added to the scene's rootNode:

    let node = SCNNode()
    let audioNode = SCNNode()
    node.addChildNode(audioNode)
    // Add the parent node to the scene
    sceneView.scene.rootNode.addChildNode(node)
    // Attach the audio player to the child node
    audioNode.addAudioPlayer(audioPlayer)

This might be useful to anyone attempting to create interactive virtual soundscapes with ARKit, SceneKit and AVFoundation :)

Upvotes: 1
