Reputation: 355
I'm currently working on a program that analyses your voice. I have a thread that runs a WaveInEvent to record audio from the microphone. When data is available (the DataAvailable event fires with a WaveInEventArgs), the handler encodes the audio, serialises it and sends it to the analyser.
I'd like to be able to use an audio file, say a .mp3. I'm reading it like this:
byte[] audioFileBytes = File.ReadAllBytes(audioFilePath);
And then sending it to the analyser the same way.
I'm encountering a couple of problems. First, the audio file is not processed on the other end, while everything works if I use the live microphone input. I'm guessing it has to do with buffer length, but I can't find how long the WaveInEvent buffer is or how often the DataAvailable event fires. Second, the whole file is processed in about 2 seconds. On the analyser side I add a timestamp to each chunk as it arrives in real time, so right now the whole file gets a single timestamp. If I send it byte by byte instead, I'd get about 1.2 million timestamps in those 2 seconds, which still can't be plotted sensibly; besides, the graph should run from 0 to the length of the audio file (70 seconds for my test file), not to 2 seconds.
So as a first step I thought about reading the audio file "in real time", and sending the audio from the file as if it came from the microphone, but I have no idea how to do that.
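What I have in mind is roughly the sketch below (decode the file, then push it out in small chunks with a pause between them), but the chunk size, the pacing and my use of NAudio's Mp3FileReader here are all guesses on my part:

using System.Threading;
using NAudio.Wave;

class FilePacer
{
    static void Main(string[] args)
    {
        string audioFilePath = args[0];

        using (var reader = new Mp3FileReader(audioFilePath)) // decodes the mp3 to 16-bit PCM
        {
            int chunkMs = 100; // guess: presumably this should match the mic buffer length
            int bytesPerChunk = reader.WaveFormat.AverageBytesPerSecond * chunkMs / 1000;
            var buffer = new byte[bytesPerChunk];
            int read;
            while ((read = reader.Read(buffer, 0, buffer.Length)) > 0)
            {
                SendToAnalyser(buffer, read); // placeholder for my existing encode/serialise/send code
                Thread.Sleep(chunkMs);        // crude pacing so chunks arrive at roughly real-time speed
            }
        }
    }

    // placeholder: in my program this is the same path the WaveInEvent data takes
    static void SendToAnalyser(byte[] data, int count) { }
}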
My question comes down to: how often do the WaveInEvent handlers fire?
Upvotes: 1
Views: 135
Reputation: 49492
It's to do with the length of the buffers. Every time a buffer is filled with audio, the event fires. Look at the BufferMilliseconds and NumberOfBuffers properties; you can set these to the desired values before you start recording.
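As a rough sketch (the 16 kHz mono format and the 100 ms value are just example numbers, not anything your analyser requires), something like this makes DataAvailable fire roughly every 100 ms:

using System;
using NAudio.Wave;

class Recorder
{
    static void Main()
    {
        var waveIn = new WaveInEvent
        {
            WaveFormat = new WaveFormat(16000, 16, 1), // example: 16 kHz, 16-bit, mono
            BufferMilliseconds = 100,                  // each buffer holds ~100 ms of audio
            NumberOfBuffers = 3
        };

        waveIn.DataAvailable += (s, e) =>
        {
            // fires each time a buffer is filled, so roughly every BufferMilliseconds;
            // here that's 16000 samples/s * 2 bytes * 1 channel * 0.1 s = 3200 bytes per event
            Console.WriteLine($"{e.BytesRecorded} bytes at {DateTime.Now:HH:mm:ss.fff}");
        };

        waveIn.StartRecording();
        Console.ReadLine(); // record until Enter is pressed
        waveIn.StopRecording();
    }
}

In general the data delivered per event works out to WaveFormat.AverageBytesPerSecond * BufferMilliseconds / 1000 bytes.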
Upvotes: 1