Reputation: 935
In Java you can create a SourceDataLine like so:
AudioFormat af = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED, 44100f, 16, 1, 2, 44100f, false);
SourceDataLine sdl = AudioSystem.getSourceDataLine(af);
After that you can open it, start it, and write data to it:
byte[] data = new byte[1024];
fillwithsounds(data);
sdl.open();
sdl.start();
sdl.write(data, 0, 1024);
This all works fine for mono data.
What I'd like to do is write stereo data, but I can't find any documentation online on how I need to change my byte array to hold stereo data.
It seems like I need to increase the number of channels when I create the AudioFormat, to make it stereo, and then halve the frame rate (otherwise Java throws an IllegalArgumentException).
I don't understand why this is, though, or what the new format should be for the data that I feed to the SourceDataLine.
Perhaps somebody with a little more experience in audio formats than I could shed some light on this problem. Thanks in advance!
Upvotes: 1
Views: 1331
Reputation: 7910
The format I use for stereo is as follows:
audioFmt = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED,
44100, 16, 2, 4, 44100, false);
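To spell out what changes relative to the mono format in the question (the variable names below are just for illustration): only the channel count and the frame size change, where frame size = channels * bytes per sample; the sample rate and frame rate stay at 44100.
// Mono: 1 channel of 16-bit samples -> frame size = 1 * 2 = 2 bytes
AudioFormat monoFmt = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED,
        44100f, 16, 1, 2, 44100f, false);
// Stereo: 2 channels of 16-bit samples -> frame size = 2 * 2 = 4 bytes
AudioFormat stereoFmt = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED,
        44100f, 16, 2, 4, 44100f, false);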
You probably want to double the number of bytes per frame rather than halving the bit depth. I'm not sure what 8-bit encoding sounds like, but it is definitely going to be noisier than 16-bit encoding!
The resulting data is twice as long. You then take each two-byte pair that makes up a 16-bit sample and copy it into the next two positions, for "mono" playback (both stereo channels identical).
Given:
frame = F
little end byte = A
big end byte = B
AB = 16-bit little-endian encoding
left channel = L
right channel = R
Your original mono:
F1A, F1B, F2A, F2B, F3A, F3B ...
Stereo using the above format:
F1AL, F1BL, F1AR, F1BR, F2AL, F2BL, F2AR, F2BR, F3AL, F3BL, F3AR, F3BR ...
I could very well have the order of left and right mixed up. But I hope you get the idea!
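To make that concrete, here is a minimal sketch (the method name is mine, not from the post) that duplicates each little-endian 16-bit mono sample into both channels of a 4-byte stereo frame, matching the layout above:
// Duplicates each 16-bit little-endian mono sample into both channels,
// producing 4-byte frames that match the stereo format above.
private static byte[] monoToStereo16(byte[] mono) {
    byte[] stereo = new byte[mono.length * 2];
    for (int i = 0; i + 1 < mono.length; i += 2) {
        int j = i * 2;
        stereo[j]     = mono[i];     // left channel, low byte
        stereo[j + 1] = mono[i + 1]; // left channel, high byte
        stereo[j + 2] = mono[i];     // right channel, low byte
        stereo[j + 3] = mono[i + 1]; // right channel, high byte
    }
    return stereo;
}
The result is twice as long as the input and can be written to a SourceDataLine opened with the stereo format, just like the mono case.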
Upvotes: 2
Reputation: 935
I just found the solution, and Andrew Thompson's comment explains exactly what I needed.
I figured that I'd have to write each frame twice; what caught me out was that Java wouldn't let me keep the frame size I had for my mono data (it threw an IllegalArgumentException).
So I halved the frame rate to satisfy Java, but I didn't remember to modify the byte array to match.
I've implemented code that takes the "2 bytes per frame, 1 channel" (16-bit mono) byte[] and converts it into a "1 byte per sample, 2 channel" (8-bit stereo) byte[].
private static final float LEFT = -1;
private static final float CENTER = 0;
private static final float RIGHT = 1;

private byte[] monoToStereo(byte[] data, float pan) {
    byte[] output = new byte[data.length];
    for (int i = 0; i <= data.length - 2; i += 2) {
        // Combine the little-endian byte pair into a signed 16-bit sample
        // (the low byte must be treated as unsigned), then scale it down
        // to a signed 8-bit value.
        int currentValue = (data[i + 1] * 256 + (data[i] & 0xFF)) / 256;
        if (pan == LEFT || pan == CENTER) {
            output[i] = (byte) currentValue;     // left channel byte of the frame
        } else {
            output[i] = 0;
        }
        if (pan == RIGHT || pan == CENTER) {
            output[i + 1] = (byte) currentValue; // right channel byte of the frame
        } else {
            output[i + 1] = 0;
        }
    }
    return output;
}
Using this, I can get stereo audio to play back (although there is some soft static, I can clearly hear the original track).
Upvotes: 0