Bartosz

Reputation: 4786

Log the size of the artifacts archive before upload attempt in gitlab-ci

I have an on-prem instance of GitLab and I started seeing the following error in one of the pipelines:

ERROR: Uploading artifacts as "archive" to coordinator... 413 Request Entity Too Large id=1390915 responseStatus=413 Request Entity Too Large status=413

Before I go to the admins and request that they increase the limit (as suggested here), I would like to check the size of the compressed artifact; perhaps I am packing too many things.

I am using:

How can I log the size of the package in the job output?

Upvotes: 2

Views: 2855

Answers (2)

sytech

Reputation: 41041

On Windows, you can use PowerShell's gci and Measure-Object to get the size in bytes of a directory.

So, in your GitLab job you could use the following method, assuming that .\dist is your artifact directory:

my_job:
  after_script:
    - powershell -c "gci -Recurse .\dist | Measure-Object -Property Length -sum"

You can omit powershell -c and the surrounding quotes if your runner uses PowerShell by default.
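
For example, on a runner whose default shell is already PowerShell, the same job could look like this (a sketch; .\dist again stands in for your artifact directory):

my_job:
  after_script:
    - gci -Recurse .\dist | Measure-Object -Property Length -Sum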

You'll see an output like:

Count    : 1234
Average  :
Sum      : 1234567890  # <--- this is the size in bytes
Minimum  :
Maximum  :
Property : Length

If you're using Linux images on a Docker executor on Windows, you can use the du method described in the other answer.

Unfortunately, it's not practical to get the compressed size of the artifact from your end, in part because the compression algorithm is determined by the GitLab Runner's settings. Additionally, some artifacts, like anything under reports:, are not compressed at all. GitLab also generates metadata files for each individual artifact, which contribute to the overall storage size.

You can estimate the approximate compressed size by first compressing your artifacts with gzip (Go's compress/gzip is what the runner uses) and then applying the method above; a sketch follows. That said, I wouldn't expect the compressed size to be significantly smaller unless your artifact data lends itself well to compression, so the uncompressed size should be reasonable for an admin to use when increasing your limits.
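
A rough sketch of that approach, assuming tar with gzip support is available on the runner (it ships with recent Windows); the file name artifacts-estimate.tar.gz is just for illustration, and the runner's real archive will not match it exactly:

my_job:
  after_script:
    # compress the artifact directory with gzip as an approximation
    - powershell -c "tar -czf artifacts-estimate.tar.gz .\dist"
    # print the compressed size in bytes
    - powershell -c "(Get-Item artifacts-estimate.tar.gz).Length"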

Keep in mind, you'll probably want to request a limit with some reasonable overhead in case your artifact size varies in the future.

Upvotes: 1

Tolis Gerodimos

Reputation: 4400

One way to achieve this is to pass the artifact path to the du command.

  • The du (disk usage) command estimates file space usage
  • The options -sh are (from man du):

  -s, --summarize
         display only a total for each argument

  -h, --human-readable
         print sizes in human readable format (e.g., 1K 234M 2G)

E.g.:

...
script:
  ...
  - du -hs path/to/artifacts
artifacts:
  paths:
    - path/to/artifacts

output:

40M     path/to/artifacts
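
If you want the exact byte count instead of a rounded figure (easier to compare against the configured limit), GNU du also accepts -b (--bytes); a sketch, assuming a GNU coreutils image rather than a minimal BusyBox one:

  - du -sb path/to/artifacts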

Upvotes: 2
