Reputation: 2854
In GitLab CI I am finding that I sometimes want to have multiple sets of rules to control different things, and I don't want to deal with how they would interact in a single set. Here's an example with workflow rules. First, here are some rules for naming my pipelines:
workflow:
  name: '$PIPELINE_NAME'
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule" && $REBUILD_IMAGES == "true"'
      variables:
        PIPELINE_NAME: 'Scheduled CI image rebuild pipeline'
    # Default
    - variables:
        PIPELINE_NAME: '$CI_COMMIT_MESSAGE'
And now here are some rules for setting some variables to help control when different docker images get used by the CI:
workflow:
  rules:
    - if: $CI_OPEN_MERGE_REQUESTS
      variables:
        IMAGE_SUFFIX: -test
    - if: $CI_COMMIT_BRANCH
      variables:
        IMAGE_SUFFIX: -prod
Now, what if I want to do both of those things? I can't just jam the rules together like this:
workflow:
  name: '$PIPELINE_NAME'
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule" && $REBUILD_IMAGES == "true"'
      variables:
        PIPELINE_NAME: 'Scheduled CI image rebuild pipeline'
    # Default
    - variables:
        PIPELINE_NAME: '$CI_COMMIT_MESSAGE'
    - if: $CI_OPEN_MERGE_REQUESTS
      variables:
        IMAGE_SUFFIX: -test
    - if: $CI_COMMIT_BRANCH
      variables:
        IMAGE_SUFFIX: -prod
because that definitely will not work correctly. If, for example, this is a scheduled image rebuild, GitLab will match the first rule and stop there, so it will never look at the rules that set the IMAGE_SUFFIX variable.
What I would really like to do is just define two separate rules blocks that should be evaluated independently. But I guess that is not possible, because some rules could do conflicting things, like set when: never and exclude the pipeline from running, rather than do "orthogonal" things like my rules do.
I can of course come up with all the potential combinations of my rules and make it work, but that leads to combinatoric growth of the number of rules you have to define, which feels insane. It's not so bad with my example, but could quickly get out of control in more general settings.
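To make that concrete, covering just these two concerns in a single set of workflow rules would mean writing out every combination by hand, something like:

workflow:
  name: '$PIPELINE_NAME'
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule" && $REBUILD_IMAGES == "true" && $CI_OPEN_MERGE_REQUESTS'
      variables:
        PIPELINE_NAME: 'Scheduled CI image rebuild pipeline'
        IMAGE_SUFFIX: -test
    - if: '$CI_PIPELINE_SOURCE == "schedule" && $REBUILD_IMAGES == "true" && $CI_COMMIT_BRANCH'
      variables:
        PIPELINE_NAME: 'Scheduled CI image rebuild pipeline'
        IMAGE_SUFFIX: -prod
    - if: '$CI_PIPELINE_SOURCE == "schedule" && $REBUILD_IMAGES == "true"'
      variables:
        PIPELINE_NAME: 'Scheduled CI image rebuild pipeline'
    - if: $CI_OPEN_MERGE_REQUESTS
      variables:
        PIPELINE_NAME: '$CI_COMMIT_MESSAGE'
        IMAGE_SUFFIX: -test
    - if: $CI_COMMIT_BRANCH
      variables:
        PIPELINE_NAME: '$CI_COMMIT_MESSAGE'
        IMAGE_SUFFIX: -prod
    # Default
    - variables:
        PIPELINE_NAME: '$CI_COMMIT_MESSAGE'

and every additional concern would multiply the list again.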
So what am I "supposed" to do here? Is there an elegant solution, or is the insane combinatoric solution the only solution?
Edit: maybe I need some different approach? Like I guess I can create a .pre job to run a script to define all these variables, and then pass them around with artifacts or something. Nothing I can think of feels like a great solution though...
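For the record, the sort of thing I have in mind is below (just a sketch, and the job name is made up): a job in the .pre stage writes a dotenv report, and jobs in later stages pick the variables up from it.

set-ci-variables:
  stage: .pre
  script:
    # decide the variable values here and write them to a dotenv file
    - |
      if [ -n "$CI_OPEN_MERGE_REQUESTS" ]; then
        echo "IMAGE_SUFFIX=-test" >> build.env
      else
        echo "IMAGE_SUFFIX=-prod" >> build.env
      fi
  artifacts:
    reports:
      dotenv: build.env

But as far as I can tell that only helps with jobs, not with workflow-level settings like the pipeline name, since workflow: is evaluated before any job runs.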
Upvotes: 2
Views: 329
Reputation: 40901
One way you can do this is to take advantage of include:rules:, since you can define independent rules for each include that will all be evaluated. Each set of rules: works the same as you would expect elsewhere, but the key here is that with this approach you can define multiple sets of rules. This will allow you to write each set of include rules, more or less, without needing much consideration for the other rules -- or at least without the combinatoric insanity you would need to achieve the same effect in a single set of rules under workflow:rules: alone.
include:
  - local: path/to/open-mrs.yml
    rules:
      - if: $CI_OPEN_MERGE_REQUESTS
  - local: path/to/non-default-branch.yml
    rules:
      - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH
  - local: path/to/default-branch.yml
    rules:
      - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
  # etc...
Thinking carefully about the order of your includes, the contents of each included file, and the merge details, you should be able to do what you want without needing to worry about one set of rules short-circuiting another (though you may use to your advantage the fact that one include: can override the effective contents of another include: before it). All the usual rules: capabilities (exists:, changes:, etc.) are available to you in this strategy.
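As an illustration of what an included file could hold (tying this back to your IMAGE_SUFFIX example -- the actual contents are up to you), path/to/open-mrs.yml might do nothing more than set the relevant variables at the top level:

# path/to/open-mrs.yml
variables:
  IMAGE_SUFFIX: -test

Because that file is only pulled in when its include:rules match, the variables it defines only exist in those pipelines, which gives you the same effect as the corresponding workflow rule without interfering with any other set of include rules.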
This can be combined with workflow:rules:, which will be evaluated after all the include:s are processed.
include:
  - # ...
  - # ...
  - # ...

workflow:
  rules:
    # through the logic of the includes the value of `$SOME_VARIABLE` is set/unset
    # you can use the resulting value (or lack thereof) of such variables here
    - if: $SOME_VARIABLE == "true"
There are a lot of creative ways you can use this strategy. You can also take a look at using inputs, which offers yet another degree of control/flexibility.
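As a rough sketch of how inputs might fit here (the file path and input name are made up), an included file can declare its inputs with spec:inputs and interpolate them, and each include can pass its own values:

# path/to/images.yml
spec:
  inputs:
    image_suffix:
      default: -prod
---
variables:
  IMAGE_SUFFIX: $[[ inputs.image_suffix ]]

and in the main .gitlab-ci.yml:

include:
  - local: path/to/images.yml
    inputs:
      image_suffix: -test
    rules:
      - if: $CI_OPEN_MERGE_REQUESTS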
Beyond this, you might also consider something like dynamic child pipelines, which allow you to use a CI job to programmatically generate the pipeline yml contents for a child pipeline. It's a bit of an extreme measure, but it's the ultimate escape hatch for making sure you're able to define a pipeline exactly the way you want it every time, even if GitLab hasn't invented a way to easily define it otherwise.
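A minimal sketch of that pattern (the job names and generator script are placeholders): one job generates the pipeline YAML and publishes it as an artifact, and a trigger job runs it as a child pipeline.

generate-pipeline:
  stage: build
  script:
    # this script emits whatever pipeline configuration you decide you need
    - ./generate-pipeline.sh > child-pipeline.yml
  artifacts:
    paths:
      - child-pipeline.yml

run-child-pipeline:
  stage: deploy
  trigger:
    include:
      - artifact: child-pipeline.yml
        job: generate-pipeline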
Upvotes: 2