Reputation: 490
I am trying to make my second stage run even though one of the two jobs in the first stage fails, but I cannot get it to work as expected with the job status check function succeeded('JobName').
In the following YAML pipeline, I would expect it to run Stage2 even though Job2 fails, as long as Job1 succeeds, but it does not:
stages:
- stage: Stage1
  jobs:
  - job: Job1
    steps:
    - pwsh: echo "Job1"
  - job: Job2
    steps:
    - pwsh: write-error "Job2 error"
- stage: Stage2
  condition: succeeded('Job1')
  jobs:
  - job: Job3
    steps:
    - pwsh: echo "Job3"
How do I get Stage2 to run even though Job2 has failed, as long as Job1 has succeeded?
Using always() will make Stage2 run in every case, but I would like it to depend on the success state of Job1, regardless of Job2's state.
Upvotes: 5
Views: 12705
Reputation: 116
Looks like this is possible now. Example from Microsoft...
stages:
- stage: A
  condition: false
  jobs:
  - job: A1
    steps:
    - script: echo Job A1
- stage: B
  condition: in(dependencies.A.result, 'Succeeded', 'SucceededWithIssues', 'Canceled')
  jobs:
  - job: B1
    steps:
    - script: echo Job B1
Documentation: Stage to stage dependencies
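Applied to the pipeline from the question, the same pattern would look roughly like the sketch below (stage and job names taken from the question; the set of accepted results is my own choice). Note that dependencies.Stage1.result reflects the overall result of Stage1, not of Job1 specifically, so if Job2 fails the whole stage is reported as failed and Stage2 still won't run:

stages:
- stage: Stage1
  jobs:
  - job: Job1
    steps:
    - pwsh: echo "Job1"
  - job: Job2
    steps:
    - pwsh: write-error "Job2 error"
- stage: Stage2
  # Runs only when Stage1 as a whole succeeded (or succeeded with issues);
  # individual job results inside Stage1 are not distinguished here.
  condition: in(dependencies.Stage1.result, 'Succeeded', 'SucceededWithIssues')
  jobs:
  - job: Job3
    steps:
    - pwsh: echo "Job3"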
Upvotes: 1
Reputation: 40711
It looks like it is not possible to handle the result of a single job from the previous stage in the next stage's stage-level condition. However, you may use this workaround:
trigger: none

pool:
  vmImage: ubuntu-latest

stages:
- stage: Stage1
  jobs:
  - job: Job1
    steps:
    - pwsh: echo "Job1"
  - job: Job2
    steps:
    - pwsh: write-error "Job2 error"
- stage: Stage2
  dependsOn: Stage1
  condition: always()
  jobs:
  - job: Job3
    condition: in(stageDependencies.Stage1.Job1.result, 'Succeeded')
    steps:
    - pwsh: echo "Job3"
  - job: Job4
    condition: in(stageDependencies.Stage1.result, 'Succeeded')
    steps:
    - pwsh: echo "Job4"
You can find the documentation for this here.
Upvotes: 9