PatS

Reputation: 11484

CI pipeline: ignore any commands that fail in a given step

I'm trying to debug a CI pipeline and want to create a custom logger stage that dumps a bunch of information about the environment in which the pipeline is running.

I tried adding this:

stages:
    - logger

logger-commands:
    stage: logger
    allow_failure: true
    script:
        - echo 'Examining environment'
        - echo PWD=$(pwd) Using image ${CI_JOB_IMAGE}
        - git --version
        - echo --------------------------------------------------------------------------------
        - env
        - echo --------------------------------------------------------------------------------
        - npm --version
        - node --version
        - java -version
        - mvn --version
        - kaniko --version
        - echo --------------------------------------------------------------------------------

The problem is that the Java command is failing because java isn't installed. The error says:

/bin/sh: eval: line 217: java: not found

I know I could remove the java -version line, but I'm trying to come up with a canned logger that I could use in all my CI pipelines. It would cover Java, Maven, Node, npm, Python, and whatever else I want to include, and I realize that some of those commands will fail because the tools are not installed.

Searching for a solution got me close:

./script_that_fails.sh > /dev/null 2>&1 || FAILED=true

if [ "$FAILED" ]; then
    ./do_something.sh
fi
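
Applied to one of my logger probes, that pattern would look something like this (my own sketch):

java -version > /dev/null 2>&1 || JAVA_MISSING=true

if [ "$JAVA_MISSING" ]; then
    echo 'java is not installed'
fi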

So that is helpful, but my question is this.

Is there anything built into GitLab's CI pipeline syntax (or Bash syntax) that allows all the commands in a given step to run even if one of them fails?

        - npm --version || echo npm failed
        - node --version || echo node failed
        - java -version || echo java failed

That is a little cleaner syntactically, but I'm trying to make it even simpler.
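
The closest I have come with plain shell is collapsing all the probes into one multi-line script entry with a loop, so a missing tool only prints a note instead of killing the step. This is just a sketch and assumes the runner uses a POSIX shell:

    script:
        - |
          # run each probe; a missing tool prints a note instead of failing the step
          for cmd in 'git --version' 'npm --version' 'node --version' 'java -version' 'mvn --version'
          do
              echo "== $cmd =="
              $cmd || echo "$cmd failed (exit $?)"
          done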

Upvotes: 9

Views: 15127

Answers (1)

PatS

Reputation: 11484

The suggestions already mentioned are good, but I was looking for something simpler, so I wrote the following shell script. The script always returns a zero exit code, so the CI pipeline always thinks the command was successful.

If the command did fail, the command is printed along with the non-zero exit code.

#!/bin/sh
# File: runit
# Run the given command; if it fails, report the command and its exit
# code, but always exit 0 so the CI pipeline treats the step as a success.
"$@"
EXITCODE=$?
if [ $EXITCODE -ne 0 ]
then
    echo "CMD: $*"
    echo "Ignored exit code ($EXITCODE)"
fi
exit 0

Testing it as follows:

./runit ls "/bad dir"
echo "ExitCode = $?"

Gives this output:

ls: cannot access /bad dir: No such file or directory
CMD: ls /bad dir
Ignored exit code (2)
ExitCode = 0

Notice that even though the command failed, the exit code of 0 is what the CI pipeline will see.

To use it in the pipeline, that shell script has to be available inside the CI runner job. I still need to research the best way to include it. For example:

stages:
  - logger-safe

logger-safe-commands:
  stage: logger-safe
  allow_failure: true
  script:
    - ./runit npm --version
    - ./runit java -version
    - ./runit mvn --version
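
Note that runit also has to be executable before the job calls it. Assuming the script is committed at the repo root, something like this in before_script would cover it:

  before_script:
    - chmod +x runit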

I don't like this solution because it requires an extra file in the repo, but it is in the spirit of what I'm looking for. So far the simplest built-in solution is:

    - some_command || echo command failed $?
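
If the extra file ever becomes a real problem, one option might be to generate runit on the fly instead of committing it. This is only a sketch, assuming the runner image provides a POSIX shell:

  before_script:
    - |
      # write the wrapper script at job start so it never has to live in the repo
      cat > runit << 'EOF'
      #!/bin/sh
      "$@"
      EXITCODE=$?
      if [ $EXITCODE -ne 0 ]
      then
          echo "CMD: $*"
          echo "Ignored exit code ($EXITCODE)"
      fi
      exit 0
      EOF
    - chmod +x runit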

Upvotes: 9
