Reputation: 311
I have a small question that I hope you can help me with.
I want to create an Android application in Unity. The application activates the device's camera and displays the feed on the screen. For this I want to rely on native C++ code based on OpenCV.
I have the code generated, but when I run the application I see the scene but not the image, and I suspect it is because I am not using OpenCV's VideoCapture correctly on Android. Can you help me? I attach the code:
C++:
#include <opencv2/opencv.hpp>
#include <algorithm>

using namespace cv;
using namespace std;

// Global state shared by both exported functions
VideoCapture camera;
Rect trueRect;
int midX, midY, wi, he;

__declspec(dllexport) void iniciar(int& widt, int& heigh) {
    camera.open(0);
    if (!camera.isOpened())
    {
        return;
    }
    widt = (int)camera.get(CV_CAP_PROP_FRAME_WIDTH);
    heigh = (int)camera.get(CV_CAP_PROP_FRAME_HEIGHT);
    trueRect.x = 5;
    trueRect.y = 5;
    trueRect.width = 100;
    trueRect.height = 100;
    midX = 1;
    midY = 1;
    wi = 0;
    he = 0;
}

__declspec(dllexport)
void video(unsigned char* arr) {
    Mat frame;
    Mat resi;
    Mat dst; // dst image
    camera >> frame;
    if (frame.empty()) {
        return;
    }
    flip(frame, dst, 1);
    //resize(dst, resi, Size(width, height));
    cv::cvtColor(dst, dst, COLOR_BGR2RGB);
    copy(dst.datastart, dst.dataend, arr);
}
C#:
using System;
using System.Runtime.InteropServices;
using UnityEngine;

public class camara : MonoBehaviour {

    [DllImport("NativoPrincipio")]
    public static extern void video(byte[] img);

    [DllImport("NativoPrincipio")]
    public static extern void iniciar(ref int widt, ref int heigh);

    WebCamTexture back;
    Texture2D textura;
    byte[] imgData;
    int width = 0;
    int height = 0;

    // Use this for initialization
    void Start () {
        back = new WebCamTexture();
        //GetComponent<Renderer>().material.mainTexture = back;
        //back.Play();
        iniciar(ref width, ref height);
    }

    // Update is called once per frame
    void Update ()
    {
        imgData = new byte[width * height * 4];
        video(imgData);
        textura = new Texture2D(width, height, TextureFormat.RGB24, false);
        textura.LoadRawTextureData(imgData);
        textura.Apply();
        GetComponent<Renderer>().material.mainTexture = textura;
        imgData = null;
        textura = null;
    }
}
Upvotes: 4
Views: 653
Reputation: 125455
After staring at your code for a few minutes, I found several mistakes.
1. You used COLOR_BGR2RGB on the C++ side while allocating a width * height * 4 buffer and creating the texture as TextureFormat.RGB24 on the C# side. The color conversion, the buffer size, and the texture format must all agree: either convert with COLOR_BGR2RGBA and use TextureFormat.RGBA32 with the 4-byte-per-pixel buffer you already allocate, or keep COLOR_BGR2RGB and TextureFormat.RGB24 and allocate width * height * 3 bytes.
2. If you want to modify a C# array inside the C++ plugin, you must pin it in memory; after you modify it you can unpin it. If you don't do this, you will run into a bug that will take you forever to find.
Use the fixed keyword to pin the array, then cast it to IntPtr and pass it to the video function as an IntPtr. It is also a good idea to pass in the array size so that you never use an index that does not exist.
Something like this:
[DllImport("NativoPrincipio")]
public static extern void video(IntPtr img, int count);
...
// Pin the managed array (requires an unsafe context)
fixed (byte* p = imgData)
{
    video((IntPtr)p, imgData.Length);
}
3. Finally, this is optional, but you may want to use Java to read the image from the camera and then pass the image to OpenCV through C++. I've seen many people run into problems using OpenCV to read from the camera, not to mention that it is also slow.
Upvotes: 1
Reputation: 2229
It would be better if you hosted your whole code on GitHub or created a gist.
But I think that your camera frame is null.
I will try to enumerate the reasons this could happen.
Firstly, OpenCV's video capture (camera.open(0)) does not work without FFmpeg, which is not easy to cross-compile for native Android.
I see that you are using Unity's WebCamTexture class, but this grabbed texture is almost certainly empty, and therefore you see a black screen instead of the image.
Have you tried searching for a Unity plugin for OpenCV? The only one I have seen is this (not free): https://www.assetstore.unity3d.com/en/#!/content/21088
If that does not work, is there any way you could use Java camera code in your Unity project? Camera capture in Java is much easier. I hope these clues help you.
Take care and good luck.
Unai.
Upvotes: 2