Reputation: 65
I'm working on an iOS app that simulates a hearing aid. It should first record sound, then modify it (e.g. filtering or spectral enhancement), and finally play it back. The whole process should run in real time, meaning the delay between recording and playback should be within 1 second. I have read the sample code of aurioTouch and found that there is a callback function that is called while recording. Can I just modify the sound in that callback? My worry is that the modification is somewhat time consuming (say about 0.3 seconds). Can I do it in the callback? If not, why not, and what should I do instead?
Upvotes: 1
Views: 147
Reputation: 2211
You might have a typo in your question: "the delay between record and playback should be within 1 second". One second is a very HIGH latency for an audio app, especially for a hearing aid; the lip movement and the audio would be way out of sync.
Without any processing, you can get audio from the mic to the speaker/earphones in the range of 5 ms to 10 ms, depending on how you set up your RemoteIO unit.
Your system is pretty simple. It's just a loopback system, but you want to insert some signal processing in between (such as an EQ, or other enhancements like background noise cancellation and echo suppression).
You need to decide whether you are doing the signal processing yourself or using the signal processing audio units already available in iOS. If you want to use the existing audio units, the approach is to create an audio graph with the following items:
- a RemoteIO unit (mic hooked to input bus 1, speaker to output bus 1)
- an audio unit effect
If you want to write all of the DSP stuff yourself, you only need a remoteIO unit.
To answer your question, "Can I just modify the sound in that callback?":
Yes - you can write your signal processing code in the microphone callback. However, you must complete all processing before the next callback fires. If your audio processing is taking too long, you need to increase the callback interval (thus adding latency) by suggesting a larger buffer duration with AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, ...
You mention that your processing takes a long time ("say about 0.3 s").
That is 300 ms, which is definitely not considered "near real time" or "low latency" in the audio world. However, the figure you give is open ended, because you do not state how much audio is being processed. Does it take 0.3 s to process 5 seconds of audio? Does it take 0.3 s to process 0.1 s of audio?
Upvotes: 1