Let me preface this by mentioning that we are using on-premise GitLab with our own runners, not http://www.gitlab.com. In our GitLab CI setup we commonly have multi-stage pipelines where later jobs depend on work done in earlier jobs - say we have:
job1:
  script:
    - composer install
job2: # run static checks on project
job3: # build docker images from stuff installed in job1
job4: # run phpunit
Now each of the later jobs needs the dependencies installed by job1, which can be considerable - in some cases approaching a gigabyte of data spread over tens of thousands of files.
I know I could use artifacts to hand these over between jobs, but that seems like an insane waste for practically no benefit - we don't have a fleet of thousands of runners, just a handful, so it would be much more economical to simply run all the jobs on the same runner.
I read up on running all jobs on the same runner, and while there is a mechanism via tags to pin all jobs to a SPECIFIC runner, I only found other people scratching their heads when the requirement is "run on any random available runner, but then run the rest of the pipeline on that same runner". We don't want to restrict our jobs to any particular runner; we just don't want different runners used within a single pipeline instance.
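For completeness, the tag-based pinning we did find looks like the sketch below ("runner-a" being a hypothetical tag assigned to one specific runner); it forces every job onto that one machine, which is exactly the restriction we want to avoid:

# Pinning every job to one specific runner via tags - works, but
# hard-codes the runner instead of "whichever runner picked up job1".
job1:
  tags: [runner-a]
  script:
    - composer install

job2:
  tags: [runner-a]
  script:
    - vendor/bin/phpstan analyse   # hypothetical static-check command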
Does that make sense? We were surprised that this does not seem to be a common, covered use case - is there really no way to achieve it?