Johan

Reputation: 13

What happens to data in buffer on AudioTrack.write()

Here is some code that I'm using to generate a continuous sine wave in Android Studio. The whole thing runs within a thread. My question is: when I call audio.write(), what happens to any data that may still be in the buffer? Does it dump the old samples and write a new set, or does it append the new sample array to the remaining samples?

int buffSize = AudioTrack.getMinBufferSize(sr, AudioFormat.CHANNEL_OUT_MONO,AudioFormat.ENCODING_PCM_16BIT);
            //create the AudioTrack object
            AudioTrack audio = new AudioTrack(  AudioManager.STREAM_MUSIC,
                                                sr,
                                                AudioFormat.CHANNEL_OUT_MONO,
                                                AudioFormat.ENCODING_PCM_16BIT,
                                                buffSize,
                                                AudioTrack.MODE_STREAM);
            //initialise values for synthesis
            short samples[]= new short[buffSize];   //array the same size as buffer
            int amp=10000;                          //amplitude of the waveform
            double twopi = 8.*Math.atan(1.);        //2*pi
            double fr = 440;                        //the frequency to create
            double ph = 0;                          //phase shift
            //start audio
            audio.play();
            //synthesis loop
            while (isRunning)
            {
                fr=440+4.4*sliderVal;

                for (int i=0;i<buffSize;i++)
                {
                        samples[i]=(short)(amp*Math.sin(ph));
                        ph+=twopi*fr/sr;
                }
                audio.write(samples,0,buffSize);
            }
            //stop the audio track
            audio.stop();
            audio.release();

Upvotes: 1

Views: 2037

Answers (1)

Mark JW

Reputation: 496

You are correctly setting the buffer size based on the device's capability; that is very important to minimize latency.
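
As an aside (a sketch of mine, not from the question's code): getMinBufferSize() can also report failure, so it is worth checking the return value before constructing the track. sr is the sample rate variable from the question:

int buffSize = AudioTrack.getMinBufferSize(sr,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
// getMinBufferSize() returns ERROR or ERROR_BAD_VALUE when the
// requested configuration is not supported or cannot be queried
if (buffSize == AudioTrack.ERROR || buffSize == AudioTrack.ERROR_BAD_VALUE) {
    throw new IllegalStateException("Audio configuration not supported");
}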

Then you are building buffers and chunking them out to the hardware so they can be heard. Nothing gets dumped: in MODE_STREAM, write() blocks until there is room in the track's internal buffer and then queues your new samples behind whatever is still waiting to play. Buffers get built, and you then write the whole buffer each time with track.write().
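
To make that concrete, here is a minimal sketch of mine (assuming track is a playing MODE_STREAM AudioTrack like the one in the question):

// Blocks until every sample is queued behind whatever is still in the
// track's buffer; nothing already written is discarded.
int written = track.write(samples, 0, samples.length);

// Since API 23 the write mode can be made explicit; in non-blocking mode
// the call returns immediately with however many shorts were accepted.
int accepted = track.write(samples, 0, samples.length,
        AudioTrack.WRITE_NON_BLOCKING);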

Below is my generateTone routine, much like yours. It is called like this, with frequency in Hz and duration in ms:

AudioTrack sound = generateTone(440, 250);

And the generateTone method:

private AudioTrack generateTone(double freqHz, int durationMs) {
    // total 16-bit samples: 44.1 kHz * 2 channels * duration, forced even
    int count = (int)(44100.0 * 2.0 * (durationMs / 1000.0)) & ~1;
    short[] samples = new short[count];
    for (int i = 0; i < count; i += 2) {
        // i / 2 is the frame index; both stereo channels get the same sample
        short sample = (short)(Math.sin(2 * Math.PI * (i / 2) * freqHz / 44100.0) * 0x7FFF);
        samples[i] = sample;
        samples[i + 1] = sample;
    }
    AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
            AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
            count * (Short.SIZE / 8),           // buffer size in bytes
            AudioTrack.MODE_STATIC);
    track.write(samples, 0, count);
    return track;
}
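
Playback is then a separate step. With the sound track from above, a minimal usage sketch of mine:

sound.play();                  // start the 250 ms tone
// sound.stop(); sound.reloadStaticData();  // rewind the static buffer to replay
sound.release();               // free native resources when done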

AudioTrack is cool because you can create any kind of sound if you have the right algorithm. Pure Data and Csound make it much easier on Android, though.

(I wrote a big chapter on audio in my book, Android Software Development - Collection of Practical Projects.)

Upvotes: 4
