Reputation: 3102
I am writing an Android application that records video for a specified amount of time. Everything works fine when I record using the smartphone's back camera. The app has a pause/resume recording feature like the Vine app. The problem appears when recording with the device's front camera: the preview surface looks fine, but the stored video plays back upside down. There is a lot of discussion about this issue everywhere, but I didn't find any solution that actually works.
Have a look at the code and image mentioned below.
Here is the original image taken from the front camera. I have turned it upside down for a better view.
Here is what I actually get after rotation:
Method:
IplImage copy = cvCloneImage(image);
IplImage rotatedImage = cvCreateImage(cvGetSize(copy), copy.depth(), copy.nChannels());
//Define Rotational Matrix
CvMat mapMatrix = cvCreateMat(2, 3, CV_32FC1);
//Define Mid Point
CvPoint2D32f centerPoint = new CvPoint2D32f();
centerPoint.x(copy.width() / 2);
centerPoint.y(copy.height() / 2);
//Get Rotational Matrix
cv2DRotationMatrix(centerPoint, angle, 1.0, mapMatrix);
//Rotate the Image
cvWarpAffine(copy, rotatedImage, mapMatrix, CV_INTER_CUBIC + CV_WARP_FILL_OUTLIERS, cvScalarAll(170));
cvReleaseImage(copy);
cvReleaseMat(mapMatrix);
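For reference, the rotation a given camera needs can be read from the standard android.hardware.Camera API rather than hard-coded; a rough sketch only (cameraId and the final angle computation are illustrative, not taken from the code above):
Camera.CameraInfo info = new Camera.CameraInfo();
Camera.getCameraInfo(cameraId, info);                      // cameraId: the camera being recorded from
int angle = info.orientation;                              // sensor orientation in degrees (0, 90, 180, 270)
if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
    // front-facing sensors are mirrored, so the complementary rotation is usually needed
    angle = (360 - info.orientation) % 360;
}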
I have also tried the following:
double angleTemp = angle;
angleTemp = ((angleTemp / 90) % 4) * 90;
final int number = (int) Math.abs(angleTemp / 90);
for (int i = 0; i != number; ++i) {
    cvTranspose(rotatedImage, rotatedImage);
    cvFlip(rotatedImage, rotatedImage, 0);
}
This ends up throwing an exception saying that the source and destination do not match in the number of rows and columns.
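That exception is consistent with cvTranspose being called in place: a transpose swaps rows and columns, so the destination has to be a separate image with width and height exchanged. A minimal sketch of a dimension-safe version, assuming angle is a multiple of 90 (not a drop-in fix):
int steps = ((int) Math.abs(angle / 90)) % 4;   // number of 90-degree turns
IplImage src = rotatedImage;
for (int i = 0; i < steps; i++) {
    // destination created with swapped dimensions, so cvTranspose does not complain
    IplImage dst = IplImage.create(src.height(), src.width(), src.depth(), src.nChannels());
    cvTranspose(src, dst);
    cvFlip(dst, dst, 0);
    src = dst;
}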
Update:
Video is recorded in this way.
IplImage newImage = null;
if (cameraSelection == CameraInfo.CAMERA_FACING_FRONT) {
    newImage = videoRecorder.rotate(yuvIplImage, 180);
    videoRecorder.record(newImage);
} else {
    videoRecorder.record(yuvIplImage);
}
Rotation is done in this way:
IplImage img = IplImage.create(image.height(), image.width(),
        image.depth(), image.nChannels());
for (int i = 0; i < 180; i++) {
    cvTranspose(image, img);
    cvFlip(img, img, 0);
}
Can anyone who has experienced this before point out what is wrong here?
Upvotes: 1
Views: 2639
Reputation: 849
This piece of code will help you handle the problem when rotating an IplImage:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    //IplImage newImage = cvCreateImage(cvGetSize(yuvIplimage), IPL_DEPTH_8U, 1);
    if (recording) {
        videoTimestamp = 1000 * (System.currentTimeMillis() - startTime);

        yuvimage = IplImage.create(imageWidth, imageHeight * 3 / 2, IPL_DEPTH_8U, 1);
        yuvimage.getByteBuffer().put(data);

        rgbimage = IplImage.create(imageWidth, imageHeight, IPL_DEPTH_8U, 3);
        opencv_imgproc.cvCvtColor(yuvimage, rgbimage, opencv_imgproc.CV_YUV2BGR_NV21);

        IplImage rotateimage = null;
        try {
            recorder.setTimestamp(videoTimestamp);
            int rot = 0;
            switch (degrees) {
                case 0:
                    rot = 1;
                    rotateimage = rotate(rgbimage, rot);
                    break;
                case 180:
                    rot = -1;
                    rotateimage = rotate(rgbimage, rot);
                    break;
                default:
                    rotateimage = rgbimage;
            }
            recorder.record(rotateimage);
        } catch (FFmpegFrameRecorder.Exception e) {
            e.printStackTrace();
        }
    }
}

IplImage rotate(IplImage IplSrc, int angle) {
    IplImage img = IplImage.create(IplSrc.height(), IplSrc.width(), IplSrc.depth(), IplSrc.nChannels());
    cvTranspose(IplSrc, img);
    cvFlip(img, img, angle);
    return img;
}
Upvotes: 0
Reputation: 311
Seeing that you already have an IplImage, you may find this helpful. I have modified the onPreviewFrame method of this open-source Android Touch-To-Record library to transpose and resize a captured frame.
I defined "yuvIplImage" as follows in my setCameraParams() method.
IplImage yuvIplImage = IplImage.create(mPreviewSize.height, mPreviewSize.width, opencv_core.IPL_DEPTH_8U, 2);
Also initialize the video recorder like this, passing the width as the height and vice versa:
//call initVideoRecorder() method like this to initialize the videoRecorder object of the FFmpegFrameRecorder class.
initVideoRecorder(strVideoPath, mPreview.getPreviewSize().height, mPreview.getPreviewSize().width, recorderParameters);

//method implementation
public void initVideoRecorder(String videoPath, int width, int height, RecorderParameters recorderParameters)
{
    Log.e(TAG, "initVideoRecorder");

    videoRecorder = new FFmpegFrameRecorder(videoPath, width, height, 1);
    videoRecorder.setFormat(recorderParameters.getVideoOutputFormat());
    videoRecorder.setSampleRate(recorderParameters.getAudioSamplingRate());
    videoRecorder.setFrameRate(recorderParameters.getVideoFrameRate());
    videoRecorder.setVideoCodec(recorderParameters.getVideoCodec());
    videoRecorder.setVideoQuality(recorderParameters.getVideoQuality());
    videoRecorder.setAudioQuality(recorderParameters.getVideoQuality());
    videoRecorder.setAudioCodec(recorderParameters.getAudioCodec());
    videoRecorder.setVideoBitrate(1000000);
    videoRecorder.setAudioBitrate(64000);
}
This is my onPreviewFrame() method:
@Override
public void onPreviewFrame(byte[] data, Camera camera)
{
    long frameTimeStamp = 0L;

    if(FragmentCamera.mAudioTimestamp == 0L && FragmentCamera.firstTime > 0L)
    {
        frameTimeStamp = 1000L * (System.currentTimeMillis() - FragmentCamera.firstTime);
    }
    else if(FragmentCamera.mLastAudioTimestamp == FragmentCamera.mAudioTimestamp)
    {
        frameTimeStamp = FragmentCamera.mAudioTimestamp + FragmentCamera.frameTime;
    }
    else
    {
        long l2 = (System.nanoTime() - FragmentCamera.mAudioTimeRecorded) / 1000L;
        frameTimeStamp = l2 + FragmentCamera.mAudioTimestamp;
        FragmentCamera.mLastAudioTimestamp = FragmentCamera.mAudioTimestamp;
    }

    synchronized(FragmentCamera.mVideoRecordLock)
    {
        if(FragmentCamera.recording && FragmentCamera.rec && lastSavedframe != null && lastSavedframe.getFrameBytesData() != null && yuvIplImage != null)
        {
            FragmentCamera.mVideoTimestamp += FragmentCamera.frameTime;

            if(lastSavedframe.getTimeStamp() > FragmentCamera.mVideoTimestamp)
            {
                FragmentCamera.mVideoTimestamp = lastSavedframe.getTimeStamp();
            }

            try
            {
                yuvIplImage.getByteBuffer().put(lastSavedframe.getFrameBytesData());

                IplImage bgrImage = IplImage.create(mPreviewSize.width, mPreviewSize.height, opencv_core.IPL_DEPTH_8U, 4); // In my case, mPreviewSize.width = 1280 and mPreviewSize.height = 720
                IplImage transposed = IplImage.create(mPreviewSize.height, mPreviewSize.width, yuvIplImage.depth(), 4);
                IplImage squared = IplImage.create(mPreviewSize.height, mPreviewSize.height, yuvIplImage.depth(), 4);

                int[] _temp = new int[mPreviewSize.width * mPreviewSize.height];
                Util.YUV_NV21_TO_BGR(_temp, data, mPreviewSize.width, mPreviewSize.height);
                bgrImage.getIntBuffer().put(_temp);

                opencv_core.cvTranspose(bgrImage, transposed);
                opencv_core.cvFlip(transposed, transposed, 1);

                opencv_core.cvSetImageROI(transposed, opencv_core.cvRect(0, 0, mPreviewSize.height, mPreviewSize.height));
                opencv_core.cvCopy(transposed, squared, null);
                opencv_core.cvResetImageROI(transposed);

                videoRecorder.setTimestamp(lastSavedframe.getTimeStamp());
                videoRecorder.record(squared);
            }
            catch(com.googlecode.javacv.FrameRecorder.Exception e)
            {
                e.printStackTrace();
            }
        }

        lastSavedframe = new SavedFrames(data, frameTimeStamp);
    }
}
This code uses a method "YUV_NV21_TO_BGR", which I found at this link.
Basically, I had the same problem as you, the "green devil" problem on Android. Before I added the "YUV_NV21_TO_BGR" method, when I just took the transpose of the YuvIplImage, and more importantly a combination of transpose and flip (with or without resizing), the resulting video had a greenish tint, almost like yours. The "YUV_NV21_TO_BGR" method removed that greenish output. Thanks to @David Han from the above Google Groups thread.
You should also know that all this processing (transpose, flip and resize) inside onPreviewFrame takes a lot of time, which causes a very serious hit on your frames-per-second (FPS) rate. When I used this code inside the onPreviewFrame method, the resulting FPS of the recorded video dropped from 30 fps to about 3 fps.
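If that slowdown matters, one option (just a sketch, not part of the Touch-To-Record library; the workerThread/frameHandler fields are hypothetical) is to copy the preview bytes and push the conversion/transpose/flip/record work onto a background HandlerThread instead of doing it inside onPreviewFrame:
// Hypothetical fields, created once before recording starts:
// workerThread = new HandlerThread("frame-processing"); workerThread.start();
// frameHandler = new Handler(workerThread.getLooper());
private HandlerThread workerThread;
private Handler frameHandler;

@Override
public void onPreviewFrame(byte[] data, Camera camera)
{
    // the camera reuses the preview buffer, so copy the bytes before leaving the callback
    final byte[] frameCopy = data.clone();
    frameHandler.post(new Runnable()
    {
        @Override
        public void run()
        {
            // do the YUV->BGR conversion, transpose, flip and videoRecorder.record()
            // here, exactly as in the onPreviewFrame body shown above
        }
    });
}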
Upvotes: 4
Reputation: 1504
private void ChangeOrientation() throws com.googlecode.javacv.FrameGrabber.Exception, com.googlecode.javacv.FrameRecorder.Exception {
    //Initialize the frame grabber
    File f = new File(nativePath);
    frameGrabber = new FFmpegFrameGrabber(f);
    frameGrabber.start();
    Frame captured_frame = null;

    //Initialize the recorder
    initRecorder();

    //Loop through the grabber
    boolean inLoop = true;
    while (inLoop)
    {
        captured_frame = frameGrabber.grabFrame();
        if (captured_frame == null)
        {
            //break the loop
            inLoop = false;
        }
        else if (inLoop)
        {
            //keep looping
            IplSrc = captured_frame.image;
            recorder.record(rotateImg(IplSrc));
        }
    }

    if (recorder != null)
    {
        recorder.stop();
        recorder.release();
        frameGrabber.stop();
        initRecorder = false;
    }
}

private void initRecorder() throws com.googlecode.javacv.FrameRecorder.Exception
{
    recorder = new FFmpegFrameRecorder(editedPath,
            frameGrabber.getImageWidth(),
            frameGrabber.getImageHeight(),
            frameGrabber.getAudioChannels());
    recorder.setVideoCodec(avcodec.AV_CODEC_ID_H264);
    recorder.setFormat("mp4");
    recorder.setFrameRate(frameGrabber.getFrameRate());
    FrameRate = frameGrabber.getFrameRate();
    recorder.setSampleFormat(frameGrabber.getSampleFormat());
    recorder.setSampleRate(frameGrabber.getSampleRate());
    recorder.start();
    initRecorder = true;
}
Upvotes: 0
Reputation: 1504
When you do the transpose, the image's width and height values get swapped. It's like rotating a rectangle by 90 degrees: the height becomes the width and vice versa. So you need to do something like the following:
IplImage rotate(IplImage IplSrc)
{
    IplImage img = IplImage.create(IplSrc.height(),
            IplSrc.width(),
            IplSrc.depth(),
            IplSrc.nChannels());
    cvTranspose(IplSrc, img);
    cvFlip(img, img, 0);
    return img;
}
Upvotes: 0