Earlier this year I helped shoot a video with Red Hat about CI/CD on OpenShift. This video covers three key topics: automating CI/CD configuration, using a git repository for common CI/CD artifacts, and parameterizing Jenkins pipelines. We viewed these topics as a few of the best practices for running CI/CD infrastructure on OpenShift.
Terminology
First things first, let’s define a few terms. CI/CD stands for “Continuous Integration/Continuous Deployment”, a practice that allows teams to quickly and automatically test, package, and deploy their applications. It is often achieved by leveraging a server called Jenkins, which serves as the CI/CD orchestrator. Jenkins listens for specific inputs (often a webhook from the git server following a code check-in) and, when triggered, kicks off a pipeline.
A pipeline consists of code written by development and/or operations teams that instructs Jenkins which actions to take during the CI/CD process. This pipeline is often something like “build my code, then test my code, and if those tests pass, deploy my application to the next highest environment (usually either a development, test, or production environment)”. Organizations often have more complex pipelines, incorporating tools such as artifact repositories and code analyzers, but this provides a high-level example.
Finally, since this post is about CI/CD best practices on OpenShift: OpenShift is Red Hat’s platform-as-a-service (PaaS), and it provides excellent integration with Jenkins to make incorporating CI/CD as simple as possible.
Now that we have an understanding of the key terminology, let’s dive into some best practices.
Automation is Key
In order to run CI/CD on OpenShift, you need to have the proper infrastructure configured on the cluster. “Hello, World” implementations of this are quite simple to achieve. Simply run oc new-app jenkins-<persistent/ephemeral> and voilà, you have a running Jenkins server ready to go. Use cases in the enterprise, however, are much more complex. In addition to the Jenkins server, admins will often need to deploy a code analysis tool such as SonarQube and an artifact repository such as Nexus. They will then have to create pipelines to perform CI/CD and will have to create Jenkins slaves to reduce the load on the master. Most of these entities are backed by OpenShift resources which need to be created in order to deploy the desired CI/CD infrastructure.
Eventually, the manual steps required to deploy all of your CI/CD components may need to be replicated, and you may not be the one performing them. To ensure that the outcome is produced quickly, error-free, and exactly as it was before, a method of automation should be incorporated into the way your infrastructure is created. This can be an Ansible playbook, a bash script, or any other way you would like to automate the deployment of CI/CD infrastructure. In the past I have used Ansible and the OpenShift-Applier role to automate my implementations. You may find these tools valuable as well, or you may find that something else works better for you and your organization. Either way, you’ll find that automation significantly reduces the workload required to recreate CI/CD components.
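As a rough illustration, a minimal Ansible playbook for this kind of automation might look like the sketch below. The project name, template paths, and specific tasks are assumptions for the example rather than a prescription:
# A minimal sketch: drive the oc CLI from Ansible to stand up CI/CD infrastructure.
# The "cicd" project name and the template paths are illustrative assumptions.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Create a project to hold the CI/CD infrastructure
      command: oc new-project cicd
    - name: Deploy a persistent Jenkins server
      command: oc new-app jenkins-persistent -n cicd
    - name: Deploy supporting tools such as SonarQube and Nexus from templates
      command: oc new-app -f {{ item }} -n cicd
      loop:
        - templates/sonarqube.yml
        - templates/nexus.yml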
Configuring the Jenkins Master
Outside of general “automation”, I’d like to single out the Jenkins master and talk about a few ways admins can take advantage of OpenShift to automate Jenkins configuration. The Jenkins image from the Red Hat Container Catalog comes packaged with the OpenShift-Sync plugin installed. In the video we discussed how this plugin can be used to create Jenkins pipelines and slaves.
To create a Jenkins pipeline, create an OpenShift BuildConfig similar to the one below:
apiVersion: v1
kind: BuildConfig
...
spec:
  source:
    git:
      ref: master
      uri: <repository-uri>
  ...
  strategy:
    jenkinsPipelineStrategy:
      jenkinsfilePath: Jenkinsfile
    type: JenkinsPipeline
The OpenShift-Sync plugin will notice that a BuildConfig with the strategy jenkinsPipelineStrategy has been created and will convert it into a Jenkins pipeline, pulling from the Jenkinsfile specified by the git source. An inline Jenkinsfile can also be used instead of pulling one from a git repository. See here for more information.
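For reference, a minimal sketch of the inline variant might look like the following; the BuildConfig name and the trivial pipeline body are placeholders for the example:
apiVersion: v1
kind: BuildConfig
metadata:
  name: sample-inline-pipeline
spec:
  strategy:
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        pipeline {
          agent any
          stages {
            stage('Build') {
              steps {
                echo 'Building...'
              }
            }
          }
        }
    type: JenkinsPipeline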
To create a Jenkins slave, create an OpenShift ImageStream that starts with the following definition:
apiVersion: v1
kind: ImageStream
metadata:
  annotations:
    slave-label: jenkins-slave
  labels:
    role: jenkins-slave
...
Notice the metadata defined in this ImageStream. The OpenShift-Sync plugin will convert any ImageStream with the label role: jenkins-slave into a Jenkins slave. The Jenkins slave will be named after the value from the slave-label annotation.
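Putting those pieces together, a complete slave ImageStream might look something like the sketch below; the name, slave label, and image reference are illustrative assumptions:
apiVersion: v1
kind: ImageStream
metadata:
  name: jenkins-slave-maven
  annotations:
    slave-label: maven
  labels:
    role: jenkins-slave
spec:
  tags:
    - name: latest
      from:
        kind: DockerImage
        name: registry.access.redhat.com/openshift3/jenkins-slave-maven-rhel7:latest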
ImageStreams work just fine for simple Jenkins slave configurations, but some teams will find it necessary to configure nitty-gritty details such as resource limits, readiness and liveness probes, and instance caps. This is where ConfigMaps come into play:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    role: jenkins-slave
...
data:
  template1: |-
    <Kubernetes pod template>
Notice that the “role: jenkins-slave” label is still required to convert the ConfigMap into a Jenkins slave. The Kubernetes pod template consists of a lengthy bit of XML which will configure every last detail to your organization’s liking. To view this XML, as well as more information on converting ImageStreams and ConfigMaps into Jenkins slaves, see the documentation here.
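As a rough sketch, a trimmed-down version of such a ConfigMap might look like the following; the template name, image, and resource limits are illustrative assumptions, and the full set of supported XML elements is covered in the documentation:
apiVersion: v1
kind: ConfigMap
metadata:
  name: jenkins-slave-maven-pod
  labels:
    role: jenkins-slave
data:
  template1: |-
    <org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
      <name>maven</name>
      <label>maven</label>
      <instanceCap>5</instanceCap>
      <serviceAccount>jenkins</serviceAccount>
      <containers>
        <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
          <name>jnlp</name>
          <image>registry.access.redhat.com/openshift3/jenkins-slave-maven-rhel7:latest</image>
          <resourceLimitCpu>500m</resourceLimitCpu>
          <resourceLimitMemory>1Gi</resourceLimitMemory>
          <args>${computer.jnlpmac} ${computer.name}</args>
        </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
      </containers>
    </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>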
Notice that none of these operations required an administrator to make manual changes in the Jenkins console. By defining OpenShift resources, Jenkins can be configured in a way that is easily automated.
Sharing is Caring
The second key point we talked about in the video was maintaining a git repository of common CI/CD artifacts. The main idea here is to prevent teams from reinventing the wheel. Imagine that your team needs to perform a blue/green deployment to an OpenShift environment as part of the CD phase of the pipeline. The members of your team responsible for writing the pipeline may not be OpenShift experts, nor may they have the bandwidth to write this functionality from scratch. Luckily, somebody has already written a function that provides that functionality and added it to a common CI/CD repository, so your team can use that function instead of spending time writing its own.
To take this one step further, your organization may decide to maintain entire pipelines. You may find that teams are writing pipelines with similar functionality. It would be more efficient for those teams to use a parameterized pipeline from a common repository as opposed to writing their own from scratch.
Less is More
I hinted at the third and final point in the previous section, which is to parameterize your CI/CD pipelines. Parameterization prevents an overabundance of pipelines in the future, making your CI/CD system easier to maintain. Imagine I have multiple regions I can deploy my application to. Without parameterization, I would need a separate pipeline for each region.
To parameterize a pipeline written as an OpenShift BuildConfig, add an “env” stanza to the jenkinsPipelineStrategy:
...
spec:
  ...
  strategy:
    jenkinsPipelineStrategy:
      env:
        - name: REGION
          value: US-West
      jenkinsfilePath: Jenkinsfile
    type: JenkinsPipeline
With this configuration, I can pass the REGION parameter to the pipeline to deploy my application to the specified region.
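The default can also be overridden when the pipeline is triggered; for example, a command along the lines of oc start-build <pipeline-buildconfig-name> --env=REGION=US-East (assuming the Jenkinsfile reads REGION from its environment) would deploy to a different region without requiring a second pipeline.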
The example given in the video provides a more substantial case where parameterization is a must. Some organizations decide to split up their CI/CD pipelines into two separate CI and CD pipelines, usually because there is some sort of approval process that happens before the deployment. Imagine that I have four images and three different environments to deploy to. Without parameterization, I would need 12 CD pipelines to allow for all deployment possibilities. This can get out of hand very quickly. To make maintenance of the CD pipeline easier, organizations would find it better to parameterize the image and environment to allow one pipeline to perform the work of many.
Summary
CI/CD at the enterprise level tends to become more complex than many organizations anticipate. Luckily, Jenkins offers many ways to integrate seamlessly with OpenShift and automate your setup. Maintaining a git repository of common CI/CD artifacts will also ease the effort, as teams can pull from maintained dependencies instead of writing their own from scratch. Finally, parameterizing your CI/CD pipelines will reduce the number of pipelines that have to be maintained.
Of course, there are more “best practices” than those outlined here and in the video, but we identified these as some of the most important. Thanks for reading!