artem

Reputation: 16798

How to use raw PCM audio data as a custom audio source in WebRTC on Android?

I'm integrating WebRTC into my project (using the Java library), and as far as I can see, it can only use pre-defined audio sources like the microphone.

I've already implemented a custom audio source that generates PCM data, but I'm struggling to figure out how to inject this data into WebRTC's audio pipeline. I couldn't find any clear documentation or examples on how to achieve this.

Here are my questions:

  1. Is it possible to use raw PCM data as an audio source in WebRTC on Android?
  2. If so, how can I inject the PCM frames into WebRTC's audio capture pipeline?
  3. Are there any specific classes or interfaces in the WebRTC library that I should be using for this purpose?

Any guidance or code examples would be greatly appreciated!

Upvotes: 1

Views: 243

Answers (1)

Martin Zeitler

Reputation: 76807

This isn't exactly a Java question, but rather a C++/JNI question.
Whichever Java implementation one looks at, it is only a wrapper around the native C++ library.
And that native code generally does support your requirement: wav_file.cc.

That native code is what libjingle_peerconnection.so provides; the org.webrtc.AudioSource class, for example, is documented as:

Java wrapper for a C++ AudioSourceInterface.
Used as the source for one or more AudioTrack objects.
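
For reference, this is roughly how the stock Java API is used today. Note that createAudioSource() only takes MediaConstraints; there is no public parameter for handing it your own PCM buffers, which is exactly why the native layer comes into play (a minimal sketch, assuming the standard org.webrtc Android SDK):

    import android.content.Context;

    import org.webrtc.AudioSource;
    import org.webrtc.AudioTrack;
    import org.webrtc.MediaConstraints;
    import org.webrtc.PeerConnectionFactory;
    import org.webrtc.audio.JavaAudioDeviceModule;

    public final class WebRtcAudioSetup {

        /** Creates an audio track the stock way: samples come from the device module (microphone). */
        public static AudioTrack createMicrophoneTrack(Context context) {
            PeerConnectionFactory.initialize(
                    PeerConnectionFactory.InitializationOptions.builder(context)
                            .createInitializationOptions());

            PeerConnectionFactory factory = PeerConnectionFactory.builder()
                    .setAudioDeviceModule(
                            JavaAudioDeviceModule.builder(context).createAudioDeviceModule())
                    .createPeerConnectionFactory();

            // createAudioSource() accepts only MediaConstraints -- there is no
            // public hook for injecting raw PCM frames from Java.
            AudioSource audioSource = factory.createAudioSource(new MediaConstraints());
            return factory.createAudioTrack("audio0", audioSource);
        }
    }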

In order to get there, one might have to expose additional native methods through JNI;
e.g. one that accepts a file name and one that switches the stream,
or even one that accepts raw PCM for a voice-over; it all depends on which methods the JNI glue exposes.
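
A minimal sketch of what such a JNI surface could look like on the Java side; every method name here is hypothetical, and the matching C++ implementations would have to be written in the WebRTC checkout and compiled into the native library yourself:

    // Hypothetical JNI bridge -- none of these natives exist in the stock SDK.
    public final class PcmAudioBridge {

        static {
            // Assumption: the glue is compiled into the same .so as the SDK.
            System.loadLibrary("jingle_peerconnection_so");
        }

        /** Hypothetical: point the native source at a WAV/PCM file (cf. wav_file.cc). */
        public static native void nativeOpenPcmFile(String fileName, int sampleRateHz, int channels);

        /** Hypothetical: push one 10 ms frame of 16-bit PCM into the capture pipeline. */
        public static native void nativePushPcmFrame(short[] samples);

        /** Hypothetical: switch between microphone capture and the injected PCM stream. */
        public static native void nativeUseInjectedStream(boolean enabled);
    }

On the native side, those methods would typically feed a custom audio device module (or a file-backed one) instead of the real microphone.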

If you build the native library from source, you also get the debug symbols needed to set breakpoints.
That makes it a whole lot easier to work with, because you're no longer dealing with a black box.

Upvotes: 1
