Mr Bell

Reputation: 9338

How to use scene camera with Agora.io in Unity

In Unity I have integrated Agora.io such that, from within my virtual reality app, I can connect a video call to an outside user on a webpage. The VR user can see the website user, but the website user cannot see the VR user because there is no physical camera available to use. Is there a way to use a scene camera for the Agora video feed? This would mean that the website user would be able to see into the VR user's world.

Upvotes: 2

Views: 1544

Answers (3)

b00dle

Reputation: 1528

I'd like to refresh the knowledge here, because the accepted answer refers to an outdated API. That said, you may still largely follow the steps mentioned by @Rick Cheng, as long as you keep the following in mind:

  1. The signature of SetExternalVideoSource has changed.

For streaming (render) textures, it is sufficient to use mRtcEngine.SetExternalVideoSource(true, false, EXTERNAL_VIDEO_SOURCE_TYPE.VIDEO_FRAME, new SenderOptions());.

  2. The raw byte copying process varies between Unity versions.
#if UNITY_2018_1_OR_NEWER
    NativeArray<byte> nativeByteArray = _texture.GetRawTextureData<byte>();
    if (_shareData?.Length != nativeByteArray.Length)
    {
        _shareData = new byte[nativeByteArray.Length];
    }
    nativeByteArray.CopyTo(_shareData);
#else
    _shareData = _texture.GetRawTextureData();
#endif

Even the screen-sharing tutorial on the Agora blog is outdated. There is a discussion of the aforementioned points under this gist.


For up-to-date examples and usage, I recommend checking out the code samples shipped with the Unity SDK, specifically the advanced ones (see <Assets>/Agora-RTC-Plugin/API-Example/Examples/Advanced). I dug through CustomCaptureVideo to figure out how to push custom frames.
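
Putting the two points above together, a minimal per-frame sketch with the newer SDK could look roughly like this. _texture (the Texture2D the RenderTexture is read into) and _shareData come from the snippet above; the namespace and enum names follow the CustomCaptureVideo sample, so treat them as assumptions and verify them against the SDK version you actually have installed:

// Sketch only: the names below (Agora.Rtc namespace, VIDEO_BUFFER_TYPE, VIDEO_PIXEL_FORMAT,
// SenderOptions) follow the CustomCaptureVideo sample and may differ in your SDK version.
// Requires: using Agora.Rtc; using Unity.Collections;

// Once, during setup: enable the external video source with the new signature.
mRtcEngine.SetExternalVideoSource(true, false, EXTERNAL_VIDEO_SOURCE_TYPE.VIDEO_FRAME, new SenderOptions());

// Every frame: copy the texture bytes in a version-appropriate way...
#if UNITY_2018_1_OR_NEWER
NativeArray<byte> nativeByteArray = _texture.GetRawTextureData<byte>();
if (_shareData?.Length != nativeByteArray.Length)
{
    _shareData = new byte[nativeByteArray.Length];
}
nativeByteArray.CopyTo(_shareData);
#else
_shareData = _texture.GetRawTextureData();
#endif

// ...then push them as an external video frame.
ExternalVideoFrame frame = new ExternalVideoFrame();
frame.type = VIDEO_BUFFER_TYPE.VIDEO_BUFFER_RAW_DATA;
frame.format = VIDEO_PIXEL_FORMAT.VIDEO_PIXEL_RGBA;  // must match the Texture2D's byte layout
frame.buffer = _shareData;
frame.stride = _texture.width;
frame.height = _texture.height;
frame.timestamp = System.DateTime.Now.Ticks / 10000;
mRtcEngine.PushVideoFrame(frame);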

Upvotes: 0

I modified the ShareScreen code from Agora.io so that it extracts a render texture instead. The problem is that I only get a white or black screen on the receiver, even though my render texture carries a depth-camera video feed.

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using agora_gaming_rtc;
using UnityEngine.UI;
using System.Globalization;
using System.Runtime.InteropServices;
using System;
public class ShareScreen : MonoBehaviour
{
    Texture2D mTexture;
    Rect mRect;
    [SerializeField]
    private string appId = "Your_AppID";
    [SerializeField]
    private string channelName = "agora";
    public IRtcEngine mRtcEngine;
    int i = 100;
    public RenderTexture depthMap;

    void Start()
    {
        Debug.Log("ScreenShare Activated");
        mRtcEngine = IRtcEngine.getEngine(appId);

        mRtcEngine.SetLogFilter(LOG_FILTER.DEBUG | LOG_FILTER.INFO | LOG_FILTER.WARNING | LOG_FILTER.ERROR | LOG_FILTER.CRITICAL);

        mRtcEngine.SetParameters("{\"rtc.log_filter\": 65535}");

        mRtcEngine.SetExternalVideoSource(true, false);

        mRtcEngine.EnableVideo();

        mRtcEngine.EnableVideoObserver();

        mRtcEngine.JoinChannel(channelName, null, 0);

        mRect = new Rect(0, 0, depthMap.width, depthMap.height);

        mTexture = new Texture2D((int)mRect.width, (int)mRect.height, TextureFormat.RGBA32, false);
    }

    void Update()
    {
        // Start the screen-share coroutine every frame
        StartCoroutine(shareScreen());
    }

    // Screen share
    IEnumerator shareScreen()
    {
        yield return new WaitForEndOfFrame();
        // Make the render texture active so its pixels can be read
        RenderTexture.active = depthMap;
        // Read the pixels inside the rectangle
        mTexture.ReadPixels(mRect, 0, 0);
        // Apply the pixels read from the rectangle to the texture
        mTexture.Apply();

        // Get the raw texture data from the texture as an array of bytes
        byte[] bytes = mTexture.GetRawTextureData();
        // Total size of the byte array in bytes
        int size = Marshal.SizeOf(bytes[0]) * bytes.Length;
        // Check to see if there is an engine instance already created
        IRtcEngine rtc = IRtcEngine.QueryEngine();
        // If the engine is present
        if (rtc != null)
        {
            // Create a new external video frame
            ExternalVideoFrame externalVideoFrame = new ExternalVideoFrame();
            // Set the buffer type of the video frame
            externalVideoFrame.type = ExternalVideoFrame.VIDEO_BUFFER_TYPE.VIDEO_BUFFER_RAW_DATA;
            // Set the video pixel format
            externalVideoFrame.format = ExternalVideoFrame.VIDEO_PIXEL_FORMAT.VIDEO_PIXEL_BGRA;
            // Apply the raw data pulled from the rectangle to the video frame
            externalVideoFrame.buffer = bytes;
            // Set the width of the video frame (in pixels)
            externalVideoFrame.stride = (int)mRect.width;
            // Set the height of the video frame
            externalVideoFrame.height = (int)mRect.height;
            // Remove pixels from the sides of the frame
            externalVideoFrame.cropLeft = 0;
            externalVideoFrame.cropTop = 0;
            externalVideoFrame.cropRight = 0;
            externalVideoFrame.cropBottom = 0;
            // Rotate the video frame (0, 90, 180, or 270)
            externalVideoFrame.rotation = 180;
            // Use an increasing counter as the frame timestamp
            externalVideoFrame.timestamp = i++;
            // Push the external video frame we just created
            int a = rtc.PushVideoFrame(externalVideoFrame);
            Debug.Log(" pushVideoFrame = " + a);
        }
    }
}

Upvotes: 0

Rick Cheng

Reputation: 654

Yes. Although I haven't done VR projects before, the concept should carry over. You may use the External Video Source to send any video frames as if they were coming from the physical camera. For scene cameras, you may use a RenderTexture to capture the camera output and then extract the raw data from that RenderTexture. So the steps are:

  1. Set up your camera to output to a RenderTexture (plus logic to display this RenderTexture somewhere locally if needed.)
  2. Also make sure when you set up the Agora RTC engine, enable external video source using this call:

    mRtcEngine.SetExternalVideoSource(true, false);

  3. At each frame, extract the raw image data from the RenderTexture

  4. Send the raw frame data to the SDK function rtc.pushVideoFrame()

You may find the code for the last step here https://gist.github.com/icywind/92053d0983e713515c64d5c532ebee21
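
For steps 1 and 3 above, a minimal sketch could look like the following; the class, the field names, and the RawImage used for a local preview are placeholders for illustration, not part of the Agora SDK:

using System.Collections;
using UnityEngine;
using UnityEngine.UI;

// Step 1: route a scene camera into a RenderTexture and optionally show it locally.
// Step 3: each frame, read the RenderTexture back into a CPU-readable Texture2D.
public class SceneCameraFeed : MonoBehaviour
{
    public Camera sceneCamera;      // the VR scene camera to broadcast
    public RawImage localPreview;   // optional local display of the feed

    private RenderTexture _renderTexture;
    private Texture2D _readableTexture;

    void Start()
    {
        _renderTexture = new RenderTexture(1280, 720, 24);
        sceneCamera.targetTexture = _renderTexture;                 // step 1
        if (localPreview != null) localPreview.texture = _renderTexture;

        _readableTexture = new Texture2D(_renderTexture.width, _renderTexture.height,
                                         TextureFormat.RGBA32, false);
    }

    void Update()
    {
        StartCoroutine(CaptureFrame());
    }

    IEnumerator CaptureFrame()
    {
        yield return new WaitForEndOfFrame();
        // Step 3: copy the RenderTexture contents into the readable texture.
        RenderTexture.active = _renderTexture;
        _readableTexture.ReadPixels(new Rect(0, 0, _renderTexture.width, _renderTexture.height), 0, 0);
        _readableTexture.Apply();
        RenderTexture.active = null;

        byte[] rawBytes = _readableTexture.GetRawTextureData();
        // Step 4: pass rawBytes to PushVideoFrame (see the gist linked above).
    }
}

Steps 2 and 4 then plug into the SetExternalVideoSource call above and the PushVideoFrame code from the gist.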

Upvotes: 1
