Jenkins: Job DSL Plugin

After years of maintaining jobs through the regular Jenkins user interface, one thing stands out above all else: there is a lot of duplication in configuration, which means the time taken to modify jobs in response to things like changed library dependencies grows rapidly with the number of jobs you have.

As software developers, we strive for efficiency, accountability, and code reuse. Being able to apply this to the delivery pipeline as well as the development pipeline is very pleasing.

Enter the Job DSL Plugin. It uses Groovy scripts to describe a job in terms of triggers, build steps, publishers, and more. Anything you can do in the user interface, you can do with the DSL. Even better, if the API doesn't specifically provide functions for what you need (e.g. CMake build steps), you can still describe the build steps and parameters you need and append them onto the job's configuration document.
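As an illustration of that escape hatch, the DSL's configure block gives direct access to the generated configuration XML. The sketch below assumes the CMake plugin's builder class and element names, so treat those as placeholders rather than the exact schema:

```groovy
// Sketch: appending a build step the DSL API doesn't cover directly,
// by writing raw XML into the job's configuration document.
// The class and element names here are illustrative assumptions.
job('cmake-example') {
  configure { project ->
    project / 'builders' << 'hudson.plugins.cmake.CmakeBuilder' {
      sourceDir('src')
      buildType('Release')
    }
  }
}
```

The closure parameter is the root node of the job's config.xml, and the `/` and `<<` operators navigate and append child elements.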

Even for someone with minimal Java experience (i.e. myself), the learning curve is not steep.

While I don't think it's necessary to echo the existing example code from the Job DSL GitHub page, the repository is definitely worth looking at. It contains an example project demonstrating how to use Gradle to build the DSL scripts and run automated tests on them, prior to applying the configuration and generating jobs. It's a great little project to use to get started if, like me, you don't know much about Gradle and want to get up and running quickly.

As with these examples, your scripts should be checked into version control. This provides the complete change history of the job configurations, something that the Jenkins UI does not. It also means that your seed jobs can poll the repository containing your scripts, so your jobs get updated automatically when you commit.
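A minimal seed job along those lines might look like the following sketch; the repository URL, polling schedule, and script path are assumptions for illustration:

```groovy
// Sketch of a seed job that polls the scripts repository and
// regenerates jobs from the DSL files it finds there.
job('seed') {
  scm {
    git('url/to/job-dsl-scripts')    // assumed repository URL
  }
  triggers {
    scm('H/15 * * * *')              // poll every 15 minutes
  }
  steps {
    dsl {
      external('jobs/**/*.groovy')   // assumed script location
      removeAction('DELETE')         // remove jobs deleted from the scripts
    }
  }
}
```

With this in place, a commit to the scripts repository is all it takes to update every generated job.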

Why is this useful?

From a productivity point of view, say you have

  • a large code base in a big repository (or multiple),
  • 10 projects using this code,
  • 10 jobs checking out the code they need to build the projects.

In this scenario, your code base is too large for it to be efficient to check out the entire thing for every job, so you configure the SCM step to check out the portions that each project needs. Sometimes, libraries depend on other libraries, so there is duplication between projects where some parts of the SCM are checked out for each job.

Now, you add a new library as a dependency of one that is shared by all 10 projects. You now need to manually update all 10 jobs to check out this extra repository.

This takes you n minutes¹ per job, for a total time of 10n minutes.

If you generate your jobs with DSL, a part of the code that defines common libraries in the SCM might look like this:

def commonRepos = ['url/to/repo1', 'url/to/repo2']

This commonRepos variable is then used by all jobs in their scm block (example using svn):

scm {
  svn {
    commonRepos.each { path ->
      location(path) {
        // credentials, checkout directory, etc.
      }
    }
  }
}
Responding to your new library in this scenario is a single change, adding an element to the commonRepos array.

def commonRepos = ['url/to/repo1', 'url/to/repo2', 'url/to/new-repo']

This single change takes you only n minutes.

Now, what if you have 100 jobs rather than 10? Manually updating 100 jobs takes 100n minutes.

Or you could use DSL, where it still only takes you n minutes to make this change.
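To make the scaling concrete, the jobs themselves can be generated in a loop over a project list, so every job picks up the shared commonRepos automatically. A sketch, with hypothetical project names:

```groovy
// Sketch: one shared repository list feeding every generated job.
def commonRepos = ['url/to/repo1', 'url/to/repo2']
def projects = ['project-1', 'project-2' /* ... up to 100 */]

projects.each { name ->
  job("${name}-build") {
    scm {
      svn {
        commonRepos.each { path ->
          location(path)
        }
      }
    }
    // build steps, publishers, etc.
  }
}
```

Adding a repository to commonRepos is one edit, regardless of whether the loop generates 10 jobs or 100.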

This may seem like a fairly academic example, but it reflects the saving I gained when I switched to using the DSL for job configuration.

I also noticed that n tends to be lower when working in a familiar text editor², rather than in the Jenkins UI.

It's definitely worth the initial investment to configure jobs using code. The time spent learning the DSL is rewarded many times over when you don't have to waste time duplicating configuration effort in the future.

  1. n is likely to be between 30 and 60 seconds, depending on how fast you type and click.

  2. especially Emacs...

Jamie Femia
