Marko Grdinić

Reputation: 4062

Is there a way to have the .NET GC manage Cuda memory?

I am working on a language that compiles to both F# and CUDA. While I have no problem with memory management for .NET objects, CUDA memory falls on the unmanaged side and needs to be handled manually.

The only real regret I have with the language right now is how much the current lexically scoped way of managing memory complicates writing an ML library for it. It couples the code to an uncomfortable degree and forces me to CPS the codebase just to get a handle on it. The region-based memory management I have now is only a partial solution, and I'd much prefer it if some of the allocations could be handled by the GC.

Do I have any options for doing this without resorting to ditching .NET as a platform and writing my own runtime for the language?

Upvotes: 2

Views: 126

Answers (1)

msedi

Reputation: 1733

We did this by wrapping all CUDA memory in managed wrapper classes (in C#, not in Managed C++) and adding a SafeHandle to each. The classes have their own Dispose, but the SafeHandle takes care of the real disposal. By the way, it matters whether you are using the driver API or the runtime API, because the examples below would differ a bit between the two.

Just to give you a clue:

    /// <summary>
    /// Abstract base class for all CUDA memories (linear, pitched, array, surface).
    /// </summary>
    public abstract class CudaMemory : CudaDeviceObject, ICudaMemory
    {
        #region IArray

        /// <summary>
        ///     Dimension of Array
        /// </summary>
        int[] IArray.Dim
        {
            get { return new[] { Width, Height, Depth }; }
        }

        #endregion

        #region ICudaMemory

        /// <summary>
        /// Returns the memory type.
        /// </summary>
        public abstract CudaMemoryType MemoryType
        {
            get;
        }

        #endregion

        #region CudaDeviceObject

        /// <summary>
        /// Holds the pointer to the safe handle
        /// </summary>
        protected internal CudaSafeHandle myDevicePtr;

        /// <summary>
        /// Holds the device pointer to the device memory.
        /// </summary>
        public override SafeHandle Handle => myDevicePtr;
        // ... (remainder of the class elided)

Since CUDA has many distinct handle types (textures, arrays, memory, pitched memory, surfaces, etc.), each with its own "destroy" method, we need to create several SafeHandles.
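
These all derive from the CudaSafeHandle (and CudaSafeDevicePtrHandle) base classes, which are not shown in the answer. A minimal sketch of what such a base class might look like, assuming a zero pointer marks an invalid handle (the AttachHandle helper is my own addition, not the poster's code):

    using System;
    using System.Runtime.InteropServices;

    /// <summary>
    /// Hypothetical base for all CUDA SafeHandles. Deriving from SafeHandle means
    /// the CLR runs ReleaseHandle from a critical finalizer even if Dispose is
    /// never called, so the GC ends up reclaiming the native CUDA resource.
    /// </summary>
    public abstract class CudaSafeHandle : SafeHandle
    {
        protected CudaSafeHandle(bool ownsHandle)
            : base(IntPtr.Zero, ownsHandle)
        {
        }

        /// <summary>A zero pointer is treated as "no handle".</summary>
        public override bool IsInvalid => handle == IntPtr.Zero;

        /// <summary>
        /// Hypothetical helper: called by the managed wrapper once the native
        /// allocation has succeeded, transferring ownership to the SafeHandle.
        /// </summary>
        public void AttachHandle(IntPtr nativeHandle) => SetHandle(nativeHandle);
    }

    /// <summary>Hypothetical intermediate base for device-pointer handles.</summary>
    public abstract class CudaSafeDevicePtrHandle : CudaSafeHandle
    {
        protected CudaSafeDevicePtrHandle(bool ownsHandle) : base(ownsHandle)
        {
        }
    }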

The SafeHandle for an array looks like this:

    /// <summary>
    /// SafeHandle to control the lifetime of a CUDA array.
    /// </summary>
    public sealed class CudaSafeArrayHandle : CudaSafeHandle
    {
        public CudaSafeArrayHandle() : base(true)
        {
        }

        protected override bool ReleaseHandle()
        {
            try
            {
                CUDA.Assert(CUDADriverAPI.cuArrayDestroy(DangerousGetHandle()));
                return true;
            }
            catch
            {
                return false;
            }
        }
    }
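
CUDA.Assert and CUDADriverAPI are the poster's own helpers and are not shown. A rough sketch of what could sit behind them for the driver API (the library name, the _v2 entry point, and the IntPtr-based signatures are assumptions on my part):

    using System;
    using System.Runtime.InteropServices;

    internal static class CUDADriverAPI
    {
        /// <summary>Driver API status code; only the success value is shown here.</summary>
        public enum CUresult { CUDA_SUCCESS = 0 }

        // CUresult cuArrayDestroy(CUarray hArray)
        [DllImport("nvcuda", EntryPoint = "cuArrayDestroy")]
        public static extern CUresult cuArrayDestroy(IntPtr hArray);

        // CUresult cuMemFree(CUdeviceptr dptr); current headers map cuMemFree to cuMemFree_v2
        // for the 64-bit CUdeviceptr, which matches IntPtr in a 64-bit process.
        [DllImport("nvcuda", EntryPoint = "cuMemFree_v2")]
        public static extern CUresult cuMemFree(IntPtr dptr);
    }

    internal static class CUDA
    {
        /// <summary>Throws when a driver call does not return CUDA_SUCCESS.</summary>
        public static void Assert(CUDADriverAPI.CUresult result)
        {
            if (result != CUDADriverAPI.CUresult.CUDA_SUCCESS)
                throw new InvalidOperationException("CUDA driver call failed: " + result);
        }
    }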

The SafeHandle for linear device memory (cuMemFree also covers pitched allocations) looks like this:

    /// <summary>
    /// SafeHandle to control the lifetime of CUDA linear device memory.
    /// </summary>
    public class CudaSafeDevicePtrLinearMemoryHandle : CudaSafeDevicePtrHandle
    {
        public CudaSafeDevicePtrLinearMemoryHandle() : base(true)
        {
        }

        protected override bool ReleaseHandle()
        {
            try
            {
                CUDA.Assert(CUDADriverAPI.cuMemFree(DangerousGetHandle()));
                return true;
            }
            catch
            {
                return false;
            }
        }
    }
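
To connect this back to the question: once every allocation goes through a wrapper that owns such a SafeHandle, the .NET GC effectively manages the CUDA memory, because the SafeHandle's critical finalizer frees the allocation even when Dispose is never called. A hypothetical wrapper, building on the sketches above (the cuMemAlloc binding and the class name are my assumptions, not the poster's code):

    using System;
    using System.Runtime.InteropServices;

    /// <summary>
    /// Hypothetical wrapper that owns one linear device allocation. Dispose frees
    /// it deterministically; otherwise the SafeHandle's finalizer frees it when
    /// the GC collects the wrapper.
    /// </summary>
    public sealed class CudaLinearMemory : IDisposable
    {
        // CUresult cuMemAlloc(CUdeviceptr* dptr, size_t bytesize); _v2 in current headers.
        [DllImport("nvcuda", EntryPoint = "cuMemAlloc_v2")]
        private static extern CUDADriverAPI.CUresult cuMemAlloc(out IntPtr dptr, UIntPtr bytesize);

        private readonly CudaSafeDevicePtrLinearMemoryHandle myHandle =
            new CudaSafeDevicePtrLinearMemoryHandle();

        public CudaLinearMemory(long sizeInBytes)
        {
            IntPtr ptr;
            CUDA.Assert(cuMemAlloc(out ptr, (UIntPtr)(ulong)sizeInBytes));
            // Hand ownership to the SafeHandle; from here on the GC guarantees cuMemFree.
            myHandle.AttachHandle(ptr);
        }

        public SafeHandle Handle => myHandle;

        public void Dispose() => myHandle.Dispose();
    }

With this pattern the GC keeps the device allocation alive for as long as the wrapper is reachable and frees it afterwards, but collection timing is non-deterministic, so for large device buffers it is still worth calling Dispose (or using a using block) rather than waiting for the finalizer.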

Upvotes: 4
