Reputation: 989
I am trying to build a camera app that can apply a filter to the camera frames (just for learning purposes). For that, I used the Camera2 API and OpenGL ES. I was able to apply a grayscale filter to the frames so that the preview was shown in grayscale. Now I want to record that filtered preview using MediaRecorder, and I looked at the following sample to see how MediaRecorder works with the Camera2 API (I just added the OpenGL ES part). But when I record, it records the unfiltered preview, not the filtered one. Here is a demonstration. This is what the camera preview looks like with the grayscale filter on:
And this is what it looks like when I play the recorded video after it is stored in the directory:
For me, it seems that MediaRecorder just takes the unfiltered/unprocessed frames and stores them.
Here are the relevant parts of my code:
// basically the same code from the link above
// here: mSurfaceTexture is the surface texture I created via glGenTextures()
public void startRecordingVideo() {
    if (null == mCameraDevice || null == mCameraSize) {
        return;
    }
    try {
        closePreviewSession();
        setUpMediaRecorder();
        SurfaceTexture texture = mSurfaceTexture;
        assert texture != null;
        texture.setDefaultBufferSize(mCameraSize.getWidth(), mCameraSize.getHeight());
        mCaptureRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);

        List<Surface> surfaces = new ArrayList<>();

        // Set up Surface for the camera preview
        Surface previewSurface = new Surface(texture);
        surfaces.add(previewSurface);
        mCaptureRequestBuilder.addTarget(previewSurface);

        // Set up Surface for the MediaRecorder
        Surface recorderSurface = mMediaRecorder.getSurface();
        surfaces.add(recorderSurface);
        mCaptureRequestBuilder.addTarget(recorderSurface);

        // Start a capture session
        // Once the session starts, we can update the UI and start recording
        mCameraDevice.createCaptureSession(surfaces, mCameraCaptureSessionCallbackForTemplateRecord, mBackgroundHandler);
    } catch (CameraAccessException | IOException e) {
        e.printStackTrace();
    }
}
The MediaRecorder part is also from the sample above:
private void setUpMediaRecorder() throws IOException {
    final Activity activity = mActivity;
    if (null == activity) {
        return;
    }
    mMediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
    mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
    mMediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
    if (mNextVideoAbsolutePath == null || mNextVideoAbsolutePath.isEmpty()) {
        mNextVideoAbsolutePath = getVideoFilePath(mActivity);
    }
    mMediaRecorder.setOutputFile(mNextVideoAbsolutePath);
    mMediaRecorder.setVideoEncodingBitRate(10000000);
    mMediaRecorder.setVideoFrameRate(30);
    mMediaRecorder.setVideoSize(mVideoSize.getWidth(), mVideoSize.getHeight());
    mMediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
    mMediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
    int rotation = activity.getWindowManager().getDefaultDisplay().getRotation();
    switch (mSensorOrientation) {
        case SENSOR_ORIENTATION_DEFAULT_DEGREES:
            mMediaRecorder.setOrientationHint(DEFAULT_ORIENTATIONS.get(rotation));
            break;
        case SENSOR_ORIENTATION_INVERSE_DEGREES:
            mMediaRecorder.setOrientationHint(INVERSE_ORIENTATIONS.get(rotation));
            break;
    }
    mMediaRecorder.prepare();
}
So, how can I tell MediaRecorder to use the filtered/processed frames? Is that possible?
What I tried was to call setInputSurface() on the MediaRecorder instance, passing it the previewSurface variable (I turned it into a field first, of course, so that I could also use it in the setUpMediaRecorder() method). But I got an error indicating that this was not a persistent surface. The documentation for setInputSurface() states that a persistent surface must be used (whatever that means).
I hope someone can help.
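For context, a "persistent surface" is one created with MediaCodec.createPersistentInputSurface(); that is the only kind of Surface that setInputSurface() accepts, which is why passing the camera preview Surface fails. A minimal sketch of that API (this alone would not solve the filtering problem, since the camera would still draw unfiltered frames into it):

```java
// A persistent input surface can outlive a single recording session
// and must be handed to the MediaRecorder before prepare().
Surface persistentSurface = MediaCodec.createPersistentInputSurface();

mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
// ... the other setXxx() calls from setUpMediaRecorder() ...
mMediaRecorder.setInputSurface(persistentSurface);  // must precede prepare()
mMediaRecorder.prepare();

// Release it when it is no longer needed:
persistentSurface.release();
```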
Upvotes: 2
Views: 1571
Reputation: 57183
You cannot use MediaRecorder to work with such a stream, because it can either work with input from one camera directly (and you have no control over the frames until you stop recording), or record a Surface.
Well, in principle you could receive the color frames from the camera, convert them to grayscale, draw the result onto a Surface, and connect this Surface to a MediaRecorder, similar to how the Camera2Video example implements slow-motion recording.
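One way to wire that up is to wrap the MediaRecorder's input Surface in an EGL window surface and have the existing GL renderer draw the grayscale pass into it each frame, in addition to the on-screen preview. A sketch (the names eglDisplay, eglConfig, eglContext, drawGrayscaleFrame() and timestampNs are assumed to come from your preview renderer; they are illustrative):

```java
// Create an EGL window surface backed by the recorder's input Surface,
// so the filtered GL output is what gets encoded.
Surface recorderSurface = mMediaRecorder.getSurface();
int[] surfaceAttribs = { EGL14.EGL_NONE };
EGLSurface recorderEglSurface = EGL14.eglCreateWindowSurface(
        eglDisplay, eglConfig, recorderSurface, surfaceAttribs, 0);

// Each frame, after rendering the filtered preview on screen:
EGL14.eglMakeCurrent(eglDisplay, recorderEglSurface, recorderEglSurface, eglContext);
drawGrayscaleFrame();  // same draw call as the preview pass
EGLExt.eglPresentationTimeANDROID(eglDisplay, recorderEglSurface, timestampNs);
EGL14.eglSwapBuffers(eglDisplay, recorderEglSurface);
```

With this setup the camera only targets the SurfaceTexture, and the recorder receives whatever the shader produced.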
Better still, compress the grayscale frames with MediaCodec and store the resulting encoded frames in a video file with MediaMuxer, similar to a camera recording example.
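A rough outline of that second approach (illustrative only, no error handling; the output path and resolution are placeholders): configure a Surface-input H.264 encoder, render the filtered frames into encoder.createInputSurface() via EGL as above, and drain the encoder's output into a MediaMuxer on a background thread.

```java
// Surface-input MediaCodec encoder feeding a MediaMuxer.
MediaFormat format = MediaFormat.createVideoFormat(
        MediaFormat.MIMETYPE_VIDEO_AVC, 1280, 720);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
format.setInteger(MediaFormat.KEY_BIT_RATE, 10_000_000);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

MediaCodec encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
Surface inputSurface = encoder.createInputSurface();  // render filtered frames here via EGL
encoder.start();

MediaMuxer muxer = new MediaMuxer("/path/to/output.mp4",
        MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
int trackIndex = -1;
boolean muxerStarted = false;
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();

// To stop, call encoder.signalEndOfInputStream(), then keep draining
// until BUFFER_FLAG_END_OF_STREAM arrives.
while (true) {
    int index = encoder.dequeueOutputBuffer(info, 10_000);
    if (index == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
        trackIndex = muxer.addTrack(encoder.getOutputFormat());
        muxer.start();
        muxerStarted = true;
    } else if (index >= 0) {
        if ((info.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
            info.size = 0;  // codec config is already in the track format
        }
        ByteBuffer encoded = encoder.getOutputBuffer(index);
        if (muxerStarted && info.size > 0) {
            muxer.writeSampleData(trackIndex, encoded, info);
        }
        encoder.releaseOutputBuffer(index, false);
        if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) break;
    }
}
encoder.stop(); encoder.release();
muxer.stop(); muxer.release();
```

This gives you full control over timestamps and the container, at the cost of handling the encoder drain loop and audio (if needed) yourself.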
Upvotes: 1