Patrice Cote

Reputation: 3716

Google Cloud Storage node client ResumableUploadError

We have an app running in GCP, on Kubernetes. The backend is a container based on a node/alpine image. We are trying to use the Node.js client library for Google Cloud Storage ("@google-cloud/storage": "~2.0.3") to upload files to our bucket, as in the samples from the GitHub repo:

// Assumes `storage` is a Storage client and that bucketName,
// sourcePath and filename are defined by the caller.
return new Promise((resolve, reject) => {
    storage.bucket(bucketName)
        .upload(path.join(sourcePath, filename), {
            gzip: true,
            metadata: {
                cacheControl: 'public, max-age=31536000',
            },
        }, (err) => {
            if (err) {
                return reject(err);
            }

            return resolve(true);
        });
});

It works fine for files smaller than 5 MB, but for larger files I get an error:

{"name":"ResumableUploadError"}

A few Google searches later, I see that the client automatically switches to resumable upload for larger files. Unfortunately, I cannot find any example of how to handle this special case with the Node client. We want to allow files up to 50 MB, so it's a bit of a concern right now.

Upvotes: 1

Views: 970

Answers (3)

khuang834

Reputation: 971

If you still want resumable uploads and you don't want to create additional bespoke directories in your Dockerfile, here is another solution.

Resumable uploads require a writable directory to be accessible. Depending on the OS and how you installed @google-cloud/storage, the default config path can change. To make sure this always works, without having to create specific directories in your Dockerfile, you can point configPath at a writable file.

Here's an example of what you can do. Be sure to point configPath to a file, not an existing directory (otherwise you'll get Error: EISDIR: illegal operation on a directory, read):

gcsBucket.upload(filePath, {
  destination: filePath,
  // Must be a writable *file* path; the library creates the file if missing.
  configPath: `${writableDirectory}/.config`,
  resumable: true
});

Upvotes: 0

Alternatively, you can set resumable: false in the options you pass in. The whole file is then sent in a single request, so there is nothing to resume if the connection drops, which is usually acceptable for files up to 50 MB. The complete code would look like this:

return new Promise((resolve, reject) => {
    storage.bucket(bucketName)
        .upload(path.join(sourcePath, filename), {
            resumable: false, // upload in a single request, no resumable session
            gzip: true,
            metadata: {
                cacheControl: 'public, max-age=31536000',
            },
        }, (err) => {
            if (err) {
                return reject(err);
            }

            return resolve(true);
        });
});

Upvotes: 0

Patrice Cote

Reputation: 3716

OK, just so you know, the problem was that my container runs the node/alpine image. The Alpine distributions are stripped down to the minimum, so there was no ~/.config folder, which is used by the Configstore library that the @google-cloud/storage Node library depends on. I had to go into the repo, check the code, and I saw the comment in file.ts. Once I added the folder in the container (by adding RUN mkdir ~/.config to the Dockerfile), everything started to work as intended.
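For reference, a minimal Dockerfile sketch of that fix; the base image tag is illustrative, only the RUN mkdir line comes from the fix described above:

# node:alpine is an illustrative tag; use whatever your image is based on.
FROM node:alpine

# Alpine ships without ~/.config, which the Configstore library
# (used by @google-cloud/storage for resumable uploads) expects to exist.
RUN mkdir ~/.config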

Upvotes: 2
