Reputation: 3095
This is a two-part question:

Using AVAudioRecorder, is it possible to have a waveform respond to the incoming audio in real time, similar to what happens when you activate Siri on the iPhone? Perhaps using averagePowerForChannel?

Also, is there a way to gather the audio samples of a recording in order to render a waveform?

I know Novocaine exists, but I was hoping not to use a third-party framework.
Upvotes: 3
Views: 6943
Reputation: 5265
This does not seem possible using AVAudioRecorder by itself.

An alternative is to use an AVCaptureSession with an AVCaptureAudioDataOutput, which provides access to the raw audio buffers from which the waveform can be read. Most of the processing happens in the delegate method:

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection)

You would probably need to implement some sort of throttling, processing only every Nth sample buffer, so that your visualiser code doesn't interfere with the audio.
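To make the approach concrete, here is a sketch of a capture session feeding the delegate above, computing an RMS level per buffer with simple every-Nth-buffer throttling. The class name, queue label, and throttle factor are my own assumptions, and the sample-format handling assumes 16-bit linear PCM, so treat this as an outline rather than a finished implementation:

```swift
import AVFoundation

// Sketch only: names and the throttle factor are illustrative assumptions.
final class AudioLevelMonitor: NSObject, AVCaptureAudioDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    private let queue = DispatchQueue(label: "audio.capture.queue")
    private var bufferCount = 0

    func start() throws {
        guard let device = AVCaptureDevice.default(for: .audio) else { return }
        let input = try AVCaptureDeviceInput(device: device)
        let output = AVCaptureAudioDataOutput()
        output.setSampleBufferDelegate(self, queue: queue)
        if session.canAddInput(input) { session.addInput(input) }
        if session.canAddOutput(output) { session.addOutput(output) }
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Throttle: only process every 4th buffer so visualiser work
        // doesn't compete with the capture pipeline.
        bufferCount += 1
        guard bufferCount % 4 == 0 else { return }

        guard let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else { return }
        var length = 0
        var dataPointer: UnsafeMutablePointer<CChar>?
        guard CMBlockBufferGetDataPointer(blockBuffer,
                                          atOffset: 0,
                                          lengthAtOffsetOut: nil,
                                          totalLengthOut: &length,
                                          dataPointerOut: &dataPointer) == kCMBlockBufferNoErr,
              let data = dataPointer else { return }

        // Assumes 16-bit linear PCM; check the format description in real code.
        let sampleCount = length / MemoryLayout<Int16>.size
        let samples = UnsafeRawPointer(data).bindMemory(to: Int16.self,
                                                        capacity: sampleCount)
        var sum: Float = 0
        for i in 0..<sampleCount {
            let s = Float(samples[i]) / Float(Int16.max)
            sum += s * s
        }
        let rms = (sampleCount > 0) ? (sum / Float(sampleCount)).squareRoot() : 0

        DispatchQueue.main.async {
            // Feed `rms` into your waveform/level view here.
            _ = rms
        }
    }
}
```

On a device you would also need microphone permission (NSMicrophoneUsageDescription) and an active audio session before `start()` does anything useful.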
AVCaptureSession is far more rudimentary than AVAudioRecorder: it provides no recording facilities by itself, for example, so if you also wanted to record the audio you would need an AVAssetWriter to save the samples.
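A minimal sketch of that recording side, assuming AAC output and mono audio (the file type, settings, and helper name are my own choices, not a prescribed recipe):

```swift
import AVFoundation

// Sketch: persisting captured sample buffers with AVAssetWriter.
// Output settings here are illustrative assumptions.
func makeAudioWriter(to url: URL) throws -> (AVAssetWriter, AVAssetWriterInput) {
    let writer = try AVAssetWriter(outputURL: url, fileType: .m4a)
    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 44_100,
        AVNumberOfChannelsKey: 1
    ]
    let input = AVAssetWriterInput(mediaType: .audio, outputSettings: settings)
    input.expectsMediaDataInRealTime = true
    writer.add(input)
    return (writer, input)
}

// In the capture delegate, after startWriting() and
// startSession(atSourceTime:) have been called on the writer:
//
//     if input.isReadyForMoreMediaData {
//         input.append(sampleBuffer)
//     }
```

This way the same sample buffers drive both the visualiser and the recording.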
This SO question shows how to access the sample buffers. It uses an AVAssetReader to load a file, but the delegate is exactly the same as the one used for realtime processing:

Reading audio samples via AVAssetReader
Upvotes: 2