Reputation: 2031
I would like to generate audio at the input of an AudioSource in Unity.

I have tried using the OnAudioFilterRead() MonoBehaviour callback, but this only lets you alter the audio data after the AudioSource, which bypasses any spatialisation/pitching/gain etc. on the AudioSource component.

I also had the idea of using an AudioClip as a buffer, filling it with the audio data and then loading it into the AudioSource, but I don't think this can be done without knowing the buffer size and being able to load a new clip as each buffer is read. There is no method to read new data into the AudioClip as each buffer is required.

Is there any way to change the audio data of an AudioSource before it goes through the audio source's spatialisation/pitching/gain etc.?
Upvotes: 1
Views: 552
Reputation: 2031
I solved this myself; it was a silly mistake. After reading the docs for OnAudioFilterRead properly, I found that they state:

The audio data is an array of floats ranging from [-1.0f;1.0f] and contains audio from the previous filter in the chain or the AudioClip on the AudioSource. If this is the first filter in the chain and a clip isn't attached to the audio source this filter will be 'played'. That way you can use the filter as the audio clip, procedurally generating audio.

I had an AudioClip assigned to the AudioSource, which stopped OnAudioFilterRead from being the first process in the DSP chain. Removing the audio clip fixed the problem!
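For reference, a minimal sketch of what that looks like (the class name, frequency and gain values are placeholders, not from the original post): attach something like this to the same GameObject as an AudioSource with no AudioClip assigned, and OnAudioFilterRead becomes the start of the DSP chain, so the generated samples still pass through the source's spatialisation/pitching/gain.

```csharp
using UnityEngine;

// Minimal sketch: a sine tone generated directly in OnAudioFilterRead.
// Attach to a GameObject whose AudioSource has NO AudioClip assigned,
// so this filter is the first process in the DSP chain.
[RequireComponent(typeof(AudioSource))]
public class ProceduralTone : MonoBehaviour
{
    public float frequency = 440f; // tone frequency in Hz
    public float gain = 0.25f;     // kept well below 1.0f to avoid clipping

    private double phase;
    private double sampleRate;

    void Awake()
    {
        sampleRate = AudioSettings.outputSampleRate;
    }

    // Called on the audio thread; data is interleaved floats in [-1.0f, 1.0f].
    void OnAudioFilterRead(float[] data, int channels)
    {
        double increment = 2.0 * System.Math.PI * frequency / sampleRate;

        for (int i = 0; i < data.Length; i += channels)
        {
            phase += increment;
            if (phase > 2.0 * System.Math.PI)
                phase -= 2.0 * System.Math.PI;

            float sample = gain * (float)System.Math.Sin(phase);

            // Write the same sample to every channel in this frame.
            for (int c = 0; c < channels; c++)
                data[i + c] = sample;
        }
    }
}
```

Note that OnAudioFilterRead runs on the audio thread, not the main thread, so keep the work inside it light and avoid calling most of the Unity API from there.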
Upvotes: 1