Reputation: 95
In the project, I'm using Unity3D C# scripts and C++ via DllImport. My goal is a game scene with 2 cubes (Cube & Cube2): the texture of one cube shows the live video from my laptop camera via Unity's webCamTexture.Play(), and the texture of the other cube shows the video processed by the external C++ function ProcessImage().
Code Context:
In C++, I define:
struct Color32
{
    unsigned char r;
    unsigned char g;
    unsigned char b;
    unsigned char a;
};
and the function is
extern "C"
{
Color32* ProcessImage(Color32* raw, int width, int height);
}
...
Color32* ProcessImage(Color32* raw, int width, int height)
{
    for (int i = 0; i < width * height; i++)
    {
        raw[i].r = raw[i].r - 2;
        raw[i].g = raw[i].g - 2;
        raw[i].b = raw[i].b - 2;
        raw[i].a = raw[i].a - 2;
    }
    return raw;
}
C#:
Declaration and import:
public GameObject cube;
public GameObject cube2;
private Texture2D tx2D;
private WebCamTexture webCamTexture;
[DllImport("test22")] /*the name of the plugin is test22*/
private static extern Color32[] ProcessImage(Color32[] rawImg, int width, int height);
Get the camera device, set the Cube and Cube2 textures, and start the camera:
void Start()
{
    WebCamDevice[] wcd = WebCamTexture.devices;
    if (wcd.Length == 0)
    {
        print("Cannot find a camera");
        Application.Quit();
    }
    else
    {
        webCamTexture = new WebCamTexture(wcd[0].name);
        cube.GetComponent<Renderer>().material.mainTexture = webCamTexture;
        tx2D = new Texture2D(webCamTexture.width, webCamTexture.height);
        cube2.GetComponent<Renderer>().material.mainTexture = tx2D;
        webCamTexture.Play();
    }
}
Send the data to the external C++ function via DllImport and receive the processed data in Color32[] a. Finally, I use Unity's SetPixels32 to set the tx2D (Cube2) texture:
void Update()
{
    Color32[] rawImg = webCamTexture.GetPixels32();
    System.Array.Reverse(rawImg);
    Debug.Log("Test1");
    Color32[] a = ProcessImage(rawImg, webCamTexture.width, webCamTexture.height);
    Debug.Log("Test2");
    tx2D.SetPixels32(a);
    tx2D.Apply();
}
Results:
The texture of Cube shows the live video, but the texture of Cube2 fails to show the processed data.
Error:
SetPixels32 called with invalid number of pixels in the array UnityEngine.Texture2D:SetPixels32(Color32[]) Webcam:Update() (at Assets/Scripts/Webcam.cs:45)
I don't understand why the number of pixels in the array is invalid when I pass array a to SetPixels32. Any ideas?
UPDATE (10 Oct. 2018)
Thanks to @Programmer, it now works by pinning memory. I did find a small Unity Engine issue, though: for roughly the first second after the camera starts, webCamTexture.width and webCamTexture.height always return a 16x16 size, even when a bigger image such as 1280x720 was requested; after about 1 second (possibly several frames) they return the correct size. So, following this post, I delay 2 seconds before running Process() in the Update() function and reset the Texture2D size inside Process(). Then it works fine:
float delaytime = 0;

void Update()
{
    delaytime = delaytime + Time.deltaTime;
    Debug.Log(webCamTexture.width);
    Debug.Log(webCamTexture.height);

    if (delaytime >= 2f)
        Process();
}
unsafe void Process()
{
    ...
    if ((Test.width != webCamTexture.width) || Test.height != webCamTexture.height)
    {
        Test = new Texture2D(webCamTexture.width, webCamTexture.height, TextureFormat.ARGB32, false, false);
        cube2.GetComponent<Renderer>().material.mainTexture = Test;
        Debug.Log("Fixed Texture dimension");
    }
    ...
}
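As an alternative to the fixed 2-second delay, the script could wait until the camera reports a real resolution before creating the Texture2D. Below is only a sketch, assuming the same webCamTexture, cube2 and Test fields used above and a using System.Collections; directive; the 16-pixel check is based on the 16x16 placeholder size described above. It could be started with StartCoroutine(WaitForWebCamSize()) after webCamTexture.Play().
IEnumerator WaitForWebCamSize()
{
    // Wait until the webcam stops reporting the 16x16 placeholder size.
    while (webCamTexture.width <= 16 || webCamTexture.height <= 16)
        yield return null; // check again next frame

    // The reported size is now real, so the target texture can be created once.
    Test = new Texture2D(webCamTexture.width, webCamTexture.height, TextureFormat.ARGB32, false, false);
    cube2.GetComponent<Renderer>().material.mainTexture = Test;
}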
Upvotes: 5
Views: 2227
Reputation: 125305
C# doesn't understand the Color32 object you returned from C++. This would have worked if you had made the return type unsigned char*, then used Marshal.Copy on the C# side to copy the returned data into a byte array, then used Texture2D.LoadRawTextureData to load that array into your Texture2D. Here is an example that returns a byte array from C++.
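For reference, a minimal sketch of that return-by-pointer route, assuming a hypothetical C++ export GetProcessedImage that returns a pointer to an RGBA32 buffer of width * height * 4 bytes, plus using System; and using System.Runtime.InteropServices;:
// Hypothetical import: extern "C" unsigned char* GetProcessedImage(int width, int height);
[DllImport("test22")]
private static extern IntPtr GetProcessedImage(int width, int height);

void LoadProcessedImage(Texture2D target, int width, int height)
{
    IntPtr nativeData = GetProcessedImage(width, height);

    // Copy the unmanaged RGBA buffer into a managed byte array (4 bytes per pixel).
    byte[] managed = new byte[width * height * 4];
    Marshal.Copy(nativeData, managed, 0, managed.Length);

    // Load the raw bytes into the texture; the texture format must be RGBA32 to match.
    target.LoadRawTextureData(managed);
    target.Apply();
}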
I wouldn't suggest you return data, as that's costly. Fill the array you are passing to the function, then simply re-assign that array to your Texture2D.
C++:
extern "C"
{
void ProcessImage(unsigned char* raw, int width, int height);
}
...
void ProcessImage(unsigned char* raw, int width, int height)
{
for(int y=0; y < height; y++)
{
for(int x=0; x < width; x++)
{
unsigned char* pixel = raw + (y * width * 4 + x * 4);
pixel[0] = pixel[0]-(unsigned char)2; //R
pixel[1] = pixel[1]-(unsigned char)2; //G
pixel[2] = pixel[2]-(unsigned char)2; //B
pixel[3] = pixel[3]-(unsigned char)2; //Alpha
}
}
}
C#:
[DllImport("test22")]
private static extern void ProcessImage(IntPtr texData, int width, int height);

unsafe void ProcessImage(Texture2D texData)
{
    Color32[] texDataColor = texData.GetPixels32();
    System.Array.Reverse(texDataColor);

    // Pin memory
    fixed (Color32* p = texDataColor)
    {
        ProcessImage((IntPtr)p, texData.width, texData.height);
    }

    // Update the Texture2D with the array updated in C++
    texData.SetPixels32(texDataColor);
    texData.Apply();
}
To use:
public Texture2D tex;

void Update()
{
    ProcessImage(tex);
}
If you don't want to use the unsafe and fixed keywords, you can also use GCHandle.Alloc to pin the array before sending it to the C++ side with GCHandle.AddrOfPinnedObject. See this post for how to do that.
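For illustration, a rough sketch of that GCHandle route, reusing the same ProcessImage import shown above and assuming using System.Runtime.InteropServices;; the handle must always be freed afterwards:
void ProcessImageWithGCHandle(Texture2D texData)
{
    Color32[] texDataColor = texData.GetPixels32();
    System.Array.Reverse(texDataColor);

    // Pin the managed array so the GC can't move it while C++ writes into it.
    GCHandle handle = GCHandle.Alloc(texDataColor, GCHandleType.Pinned);
    try
    {
        ProcessImage(handle.AddrOfPinnedObject(), texData.width, texData.height);
    }
    finally
    {
        handle.Free(); // release the pin even if the native call fails
    }

    // Update the Texture2D with the array updated in C++.
    texData.SetPixels32(texDataColor);
    texData.Apply();
}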
If you are getting the following exception:
SetPixels32 called with invalid number of pixels in the array UnityEngine.Texture2D:SetPixels32(Color32[]) Webcam:Update() (at Assets/Scripts/Webcam.cs:45)
This means that the Texture2D you're calling SetPixels32 on has a different size or texture dimension than the WebCamTexture. To fix this, before calling SetPixels32, check if the texture dimensions have changed, then resize your target texture to match the WebCamTexture.
Replace
tx2D.SetPixels32(rawImg);
tx2D.Apply();
with
// Fix the texture dimensions if they don't match the webcam texture size
if ((tx2D.width != webCamTexture.width) || tx2D.height != webCamTexture.height)
{
    tx2D = new Texture2D(webCamTexture.width, webCamTexture.height, TextureFormat.RGBA32, false, false);
    Debug.Log("Fixed Texture dimension");
}
tx2D.SetPixels32(rawImg);
tx2D.Apply();
Upvotes: 4