muchos_nachos

Reputation: 27

Authorization issues with Autodesk Forge when uploading files concurrently

I'm having trouble with Autodesk Forge authorization. Occasionally I receive a 401 when calling oss/v2/buckets/{key}/objects/{object}. It only happens infrequently, but it's worth mentioning that one way I've been able to reproduce it is by uploading two identical files concurrently from two different clients.

This scenario usually works, or to quote Brian Fantana -

60% of the time it works every time.

How do I solve this issue? Some guidance would be very helpful.

Thanks in advance.

Upvotes: 0

Views: 231

Answers (2)

Eason Kang

Reputation: 7070

It's good to hear that you solved this issue by yourself. Although refreshing the token on every upload resolves the issue for now, it's recommended to use buckets/:bucketKey/objects/:objectName/resumable to upload large files in chunks.

According to the official documentation, large files should be separated into several small parts, called chunks, and uploaded via the buckets/:bucketKey/objects/:objectName/resumable API. Here is a C# sample for this API from my colleague, using the Forge C# SDK:

private static dynamic resumableUploadFile()
{
  Console.WriteLine("*****Start uploading file to the OSS");
  string path = FILE_PATH;
  if (!File.Exists(path))
       path = @"..\..\..\" + FILE_PATH;

  //Total file size in bytes
  long fileSize = new System.IO.FileInfo(path).Length;
  //Chunk size for separating the file into several parts.
  //2MB chunk size is used in this sample.
  long chunkSize = 2 * 1024 * 1024;
  //Total number of chunks (ceiling of fileSize / chunkSize).
  long nbChunks = (fileSize + chunkSize - 1) / chunkSize;

  ApiResponse<dynamic> finalRes = null;
  using (FileStream streamReader = new FileStream(path, FileMode.Open))
  {
    //Unique id for resumable uploading.
    string sessionId = RandomString(12);
    for (int i = 0; i < nbChunks; i++)
    {
        //Start position in bytes of a chunk
        long start = i * chunkSize;
        //End position in bytes of a chunk
        //(The end position of the last chunk is the last byte of the file)
        long end = Math.Min(fileSize, (i + 1) * chunkSize) - 1;

        //Content-Range value identifying this chunk to Forge
        string range = "bytes " + start + "-" + end + "/" + fileSize;
        //Stream size in bytes for this chunk
        long length = end - start + 1;

        Console.WriteLine("Uploading range: " + range);

        //Read the file content for this chunk into a memory stream
        byte[] buffer = new byte[length];
        MemoryStream memoryStream = new MemoryStream(buffer);

        int nb = streamReader.Read(buffer, 0, (int)length);
        memoryStream.Write(buffer, 0, nb);
        memoryStream.Position = 0;

        //Upload file to the Forge OSS Bucket
        ApiResponse<dynamic> response = objectsApi.UploadChunk(
                                            BUCKET_KEY,
                                            FILE_NAME,
                                            (int)length,
                                            range,
                                            sessionId,
                                            memoryStream
                                        );

        finalRes = response;

        if (response.StatusCode == 202)
        {
            Console.WriteLine("One chunk uploaded successfully");
            continue;
        }
        else if (response.StatusCode == 200)
        {
            Console.WriteLine("Final chunk uploaded successfully");
        }
        else
        {
            //Some error occurred here
            Console.WriteLine(response.StatusCode);
            break;
        } 
    } 

  }

  return finalRes;
}

Hope this helps.

Upvotes: 1

muchos_nachos

Reputation: 27

To solve this issue, I changed how I handle the token expiration so that the token is always refreshed on every upload.
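
In case it helps someone else, here is a minimal sketch of that approach, assuming the Autodesk.Forge .NET SDK (TwoLeggedApi and ObjectsApi) with placeholder CLIENT_ID, CLIENT_SECRET, bucket and object names; it simply requests a fresh two-legged token immediately before each upload instead of caching one until it expires:

using System;
using System.IO;
using System.Threading.Tasks;
using Autodesk.Forge;

class FreshTokenUpload
{
    // Placeholder credentials and bucket name for illustration only.
    const string CLIENT_ID = "your-client-id";
    const string CLIENT_SECRET = "your-client-secret";
    const string BUCKET_KEY = "your-bucket-key";

    static async Task UploadWithFreshTokenAsync(string filePath, string objectName)
    {
        // Request a new two-legged token right before this upload,
        // so a token that expired between uploads can never cause a 401.
        TwoLeggedApi oauth = new TwoLeggedApi();
        dynamic bearer = await oauth.AuthenticateAsync(
            CLIENT_ID,
            CLIENT_SECRET,
            oAuthConstants.CLIENT_CREDENTIALS,
            new Scope[] { Scope.DataRead, Scope.DataWrite });

        // Attach the fresh token to the Objects API client.
        ObjectsApi objectsApi = new ObjectsApi();
        objectsApi.Configuration.AccessToken = bearer.access_token;

        // Upload the file in a single request (use the resumable API for large files).
        using (FileStream stream = new FileStream(filePath, FileMode.Open))
        {
            long fileSize = new FileInfo(filePath).Length;
            dynamic result = await objectsApi.UploadObjectAsync(
                BUCKET_KEY, objectName, (int)fileSize, stream);
            Console.WriteLine("Uploaded: " + result.objectId);
        }
    }
}

A better long-term approach is to cache the token together with its expires_in value and only request a new one shortly before it expires, but refreshing per upload is the simplest way to rule the token out as the cause of the 401.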

Upvotes: 0
