Red Ant

Reputation: 375

How to configure a Jenkins pipeline project with no SCM option

We are migrating our source code repository to a cloud storage bucket (such as S3), and all the source code that Jenkins uses will be read/downloaded from that bucket.

This also involves rewriting our Jenkins pipeline that reads from SCM (git). The Jenkins pipeline project configuration doesn't allow any independent script execution (say, wget, or downloading a file from the bucket using a shell command).

I would like to do the following, if possible:

1) Download the Jenkinsfile from the S3 bucket to the workspace
2) Choose None for SCM in the Pipeline section
3) Give the path to the downloaded Jenkinsfile in the script path

My question is: how can I make #1 possible? [Screenshot of the Pipeline section in the job configuration]
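To illustrate, what I would like #1 to do is roughly the following (a sketch only; the bucket name is hypothetical and it assumes the AWS CLI is available on the agent):

node {
    // Fetch the Jenkinsfile from the bucket into the workspace
    // ('my-bucket' is a placeholder bucket name)
    sh 'aws s3 cp s3://my-bucket/Jenkinsfile ./Jenkinsfile'
}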

Upvotes: 8

Views: 1759

Answers (1)

Noam Helmer

Reputation: 6824

We had a similar situation, in which we needed to allow the execution of dynamically generated Jenkins pipelines that are stored in S3 and cannot be retrieved using the regular SCM options.

To solve it we wrapped the following approach in a function:

  1. Download the relevant Jenkinsfile from S3
  2. Load the file as text into a variable
  3. Use Groovy's evaluate function to run the pipeline

In terms of code, we have a shared library function called runPipelineFromS3 which looks something like:

/**
 * Download and run the given Jenkinsfile from S3
 *
 * @param path (Mandatory) the path to the Jenkinsfile within the S3 bucket.
 * @param bucket (optional) the bucket in which to search for the Jenkinsfile. If not given DEFAULT_BUCKET will be used.
 */
def call(String path, String bucket = 'DEFAULT_BUCKET'){
    s3Download bucket: bucket, path: path, file: 'Jenkinsfile'
    def content = readFile file: 'Jenkinsfile'
    evaluate content
}

/**
 * Download and run the given Jenkinsfile from S3
 *
 * @param params A map of key values parameters to be used, supports the following parameters:
 * String path (Mandatory) - the path to the Jenkinsfile within the S3 bucket.
 * String bucket (optional) - the bucket in which to search for the Jenkinsfile. If not given DEFAULT_BUCKET will be used.
 */
def call(Map params){
    assert params.path : "The path parameter cannot be empty"
    call(params.path.toString(), params.bucket ? params.bucket.toString() : 'DEFAULT_BUCKET')
}
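As a side note, instead of evaluate you could also use the built-in load step, which parses and runs a Groovy file from the workspace. Here is a sketch of that variant (we use evaluate ourselves, so treat this as untested); both s3Download and load need a workspace, hence the node block:

/**
 * Variant sketch: run the downloaded Jenkinsfile via the built-in 'load'
 * step instead of evaluate(). Requires a workspace, hence the node block.
 */
def call(String path, String bucket = 'DEFAULT_BUCKET'){
    node {
        s3Download bucket: bucket, path: path, file: 'Jenkinsfile.groovy'
        load 'Jenkinsfile.groovy'  // parses and runs the file as pipeline code
    }
}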

Then, in the pipeline itself, the usage is quite simple: all you need to do is choose Pipeline script as the definition, and the pipeline script code will look like this:

library 'my-shared-library'        // Load the shared library
runPipelineFromS3("PATH_TO_FILE")  // Download and run the Jenkinsfile

You can also use job parameters to make it more generic:

library "my-shared-library@$LIBS_BRANCH"    // Load the shared library from a specific branch
runPipelineFromS3(PATH_TO_FILE, S3_BUCKET)  // Download and run the Jenkinsfile based on job input parameters
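For the parameterized variant, PATH_TO_FILE, S3_BUCKET and LIBS_BRANCH must exist as string parameters of the job. In a scripted pipeline you could also declare them inline, something like this sketch (the default values are placeholders):

properties([
    parameters([
        string(name: 'LIBS_BRANCH', defaultValue: 'main', description: 'Shared library branch to load'),
        string(name: 'PATH_TO_FILE', defaultValue: 'Jenkinsfile', description: 'Path to the Jenkinsfile within the bucket'),
        string(name: 'S3_BUCKET', defaultValue: 'DEFAULT_BUCKET', description: 'Bucket to download the Jenkinsfile from')
    ])
])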


The shared lib function can be modified to support other execution parameters according to your needs, and the library can be loaded globally, which avoids the need to load it in each job and makes this solution a single-line call per job.
The basic idea is to use your own logic, instead of the SCM one, to retrieve and run the Jenkinsfile via your own script. It is better to encapsulate the functionality in a shared library, to allow easy future modifications across all jobs without the need to reconfigure each job individually.

In the end, especially if you load the library globally and use a constant default bucket, for each job you configure only the unique path to the Jenkinsfile, which is similar to the behavior you requested with the SCM configuration block.

Upvotes: 3
