Reputation: 33
//Declare string for application temp path and tack on the file extension
string fileName = string.Format ("Myfile{0}.wav", DateTime.Now.ToString ("yyyyMMddHHmmss"));
string audioFilePath = Path.Combine (Path.GetTempPath (), fileName);
Console.WriteLine("Audio File Path: " + audioFilePath);
url = NSUrl.FromFilename(audioFilePath);
//Set up the NSObject Array of values that will be combined with the keys to make the NSDictionary
NSObject[] values = new NSObject[]
{
    NSNumber.FromFloat (44100.0f), //Sample Rate
    NSNumber.FromInt32 ((int)AudioToolbox.AudioFormatType.LinearPCM), //AVFormat
    NSNumber.FromInt32 (2), //Channels
    NSNumber.FromInt32 (16), //PCMBitDepth
    NSNumber.FromBoolean (false), //IsBigEndianKey
    NSNumber.FromBoolean (false) //IsFloatKey
};
//Set up the NSObject Array of keys that will be combined with the values to make the NSDictionary
NSObject[] keys = new NSObject[]
{
    AVAudioSettings.AVSampleRateKey,
    AVAudioSettings.AVFormatIDKey,
    AVAudioSettings.AVNumberOfChannelsKey,
    AVAudioSettings.AVLinearPCMBitDepthKey,
    AVAudioSettings.AVLinearPCMIsBigEndianKey,
    AVAudioSettings.AVLinearPCMIsFloatKey
};
//Set Settings with the Values and Keys to create the NSDictionary
settings = NSDictionary.FromObjectsAndKeys (values, keys);
//Set recorder parameters
recorder = AVAudioRecorder.Create(url, new AudioSettings(settings), out error);
//Set Recorder to Prepare To Record
recorder.PrepareToRecord();
This code works well, but how can I record from the microphone directly to a stream? I could not find any information on the Internet; I hope you can help me.
Upvotes: 3
Views: 1552
Reputation: 74209
You are looking for buffered access to the audio stream (recording or playback); iOS provides it via Audio Queue Services (AVAudioRecorder is too high level). As audio buffers are filled, iOS calls your callback with a filled buffer from the queue; you do something with it (save it to disk, write it to a C#-based Stream, send it to a playback audio queue [speakers], etc.) and, normally, place it back into the queue for reuse.
Something like this starts recording to a queue of audio buffers:
var recordFormat = new AudioStreamBasicDescription() {
    SampleRate = 8000,
    Format = AudioFormatType.LinearPCM,
    FormatFlags = AudioFormatFlags.LinearPCMIsSignedInteger | AudioFormatFlags.LinearPCMIsPacked,
    FramesPerPacket = 1,
    ChannelsPerFrame = 1,
    BitsPerChannel = 16,
    BytesPerPacket = 2,
    BytesPerFrame = 2,
    Reserved = 0
};
recorder = new InputAudioQueue (recordFormat);
for (int count = 0; count < BufferCount; count++) {
    IntPtr bufferPointer;
    recorder.AllocateBuffer(AudioBufferSize, out bufferPointer);
    recorder.EnqueueBuffer(bufferPointer, AudioBufferSize, null);
}
recorder.InputCompleted += HandleInputCompleted;
recorder.Start ();
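(The BufferCount and AudioBufferSize referenced above are assumed to be class-level constants you define yourself; given the 8k buffers and three-buffer queue used in this example, something like:)
const int BufferCount = 3;         // number of buffers cycling through the queue
const int AudioBufferSize = 8192;  // 8k bytes per buffer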
So, assuming an AudioBufferSize of 8k and a BufferCount of 3 in this example, once the first of the three buffers is filled, our handler HandleInputCompleted is called (since there are still 2 buffers in the queue, recording continues into them).
Our InputCompleted handler:
private void HandleInputCompleted (object sender, InputCompletedEventArgs e)
{
    // We received a new buffer of audio, do something with it....
    // Some unsafe code will be required to rip the buffer...
    // Place the buffer back into the queue so iOS knows you are done with it
    recorder.EnqueueBuffer(e.IntPtrBuffer, AudioBufferSize, null);
    // At some point you need to call `recorder.Stop();` ;-)
}
(I ripped our code out of the handler, as it is a custom audio-to-text learning neural network; we use really small buffers in a very large queue to reduce feedback latency, and we load that audio data into single TCP/UDP packets for cloud processing (think Siri) ;-)
In this handler you have access to the pointer to the buffer that is currently filled via InputCompletedEventArgs.IntPtrBuffer. Using that pointer you could peek at each byte in the buffer and poke them into your C#-based Stream, if that is your goal.
Upvotes: 4