Power of CI/CD Pipelines

CI/CD in the DevOps Toolbox

Why CI/CD Pipelines?

CI/CD, or Continuous Integration and Continuous Delivery, describes two complementary approaches to delivering code and features iteratively. The benefits of CI/CD include shorter request-to-feature times and quicker feedback cycles for the product team.

Continuous integration is the practice of constantly merging code into the master trunk. Continuous delivery is the practice of continuously deploying tested code to production after it is merged to the master trunk. For CI/CD to work well, you should have an automated process that builds and tests code whenever a new feature is checked in.

Here are 5 reasons CI/CD improves a team’s efforts:

Pros:

  1. Increases development visibility and collaboration when tests fail
  2. Reduces bugs and helps enforce code quality standards
  3. Brings focus to deploy-ability
  4. Encourages testing as a first-class citizen
  5. Reduces lead time so you get more done

Cons:

A CI/CD pipeline isn’t a silver bullet. Here are some common pitfalls and roadblocks of CI/CD.

  1. Can be a high-effort activity for larger codebases
  2. Requires updates to the codebase(s) to support tests (challenging for monoliths with no test harness)
  3. Requires well-written tests to be effective and efficient (flaky tests can add pain)
  4. Half-baked solutions risk unscheduled downtime
  5. Should be coupled with monitoring

CI/CD pipelines typically have three primary stages: a build stage, a test stage, and a deploy stage. First, code is built into whatever form you want to deliver; this can mean building a container, or a package like a deb or rpm. Then, tests are run from or against the artifact produced in the build stage; these can be unit tests, test-infra tests, or whatever testing model makes sense for your codebase. Lastly, if all tests pass, the repository is updated with the latest artifact.
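The three stages above can be sketched as a single GitLab CI configuration. This is only an illustrative skeleton: the alpine image and the build.sh, test.sh, and deploy.sh scripts are assumptions standing in for whatever your project actually uses.

```yaml
stages:
  - build
  - test
  - deploy

build:
  image: alpine:latest
  stage: build
  script:
    - sh build.sh    # e.g. produce a container, deb, or rpm

test:
  image: alpine:latest
  stage: test
  script:
    - sh test.sh     # run tests from or against the build artifact

deploy:
  image: alpine:latest
  stage: deploy
  script:
    - sh deploy.sh   # publish the artifact to your repository
  only:
    - master
```

Each stage runs only if the previous one succeeded, so a failing test stage blocks the deploy.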

 

CI/CD Getting Started

CI/CD works best when it models what you already do, in an automated fashion. Standardizing how you test and build your projects can improve your life: it cuts down on yak shaving and the other inefficiencies that come with maintaining a unique workflow for each project. It helps to abstract your stages into scripts or commands that run the same locally as they do on your build server. An example might be:

$ make test
$ # -or-
$ ./test.sh

It’s helpful to make these scripts idempotent so there is no danger in rerunning them many times. Done right, a script runs the same on a developer’s machine as it does on the build machine. It is also essential to cover all necessary business functionality with tests.
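As a concrete sketch, here is a hypothetical test script that stays idempotent by recreating its scratch state on every run. The .test-workdir directory and the fixture check are illustrative assumptions, not part of any real project:

```shell
#!/bin/sh
set -e                        # abort on the first failing command
WORKDIR=.test-workdir         # hypothetical scratch directory

rm -rf "$WORKDIR"             # wipe leftovers so reruns start clean
mkdir -p "$WORKDIR"

printf 'ok\n' > "$WORKDIR/fixture.txt"   # stand-in for real test setup
grep -q ok "$WORKDIR/fixture.txt"        # stand-in for a real assertion

echo "tests passed"
```

Because the script never depends on state left over from a previous run, running it once or fifty times, locally or on the build server, produces the same result.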

 

Creating a CI/CD Pipeline:

Let’s ship an example right now. This 10–20 minute walkthrough should get you started with your first pipeline. We can use gitlab.com for CI, which hosts git repos with pipeline support built in; you can sign up for an account there. We currently run our own GitLab service here at ModularSystems. Please leave any questions in the comments below.

Our repo will consist of two files, test.sh and .gitlab-ci.yml. The test.sh file contains our test code. The .gitlab-ci.yml file defines our pipeline the GitLab way. You can view the finished project at https://gitlab.modularsystems.io/ryan/power-of-pipelines

test.sh

#!/bin/sh
if [ -z "$EXIT_CODE" ]; then
  EXIT_CODE=0
fi
exit "$EXIT_CODE"

This test will fail whenever it runs in an environment where EXIT_CODE is set to a non-zero value. We can verify this by setting up a quick pipeline with that variable set.
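Before wiring it into a pipeline, you can exercise both paths locally. This is just a sanity-check sketch; the heredoc recreates the script above so the example is self-contained:

```shell
# Recreate test.sh locally (same contents as above):
cat > test.sh <<'EOF'
#!/bin/sh
if [ -z "$EXIT_CODE" ]; then
  EXIT_CODE=0
fi
exit "$EXIT_CODE"
EOF

# With the variable set to 1, the script exits non-zero:
EXIT_CODE=1 sh test.sh || echo "failed as expected (exit $?)"

# With the variable unset, it defaults to 0 and passes:
sh test.sh && echo "passed with EXIT_CODE unset"
```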

.gitlab-ci.yml

variables:
  EXIT_CODE: 1

stages:
  - test

test:
  image: alpine:latest
  stage: test
  script:
    - sh test.sh

If we check the two files above in, a pipeline will run automatically. The pipeline will fail until we update EXIT_CODE to 0; once the variable is updated, we’ll instead get a passing build.
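The fix is a one-line change to the variables block in .gitlab-ci.yml:

```yaml
variables:
  EXIT_CODE: 0
```

With EXIT_CODE at 0, test.sh exits cleanly and the job succeeds, as in the runner output below.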

Running with gitlab-runner 10.1.0 (c1ecf97f)
  on c8ae07843885 (5a589b3b)
Using Docker executor with image alpine:latest ...
Using docker image sha256:033608cdd82d363adc3e87118e8a3be2e75a21bdc192bc111ed531ab7b1c23dd for predefined container...
Pulling docker image alpine:latest ...
Using docker image alpine:latest ID=sha256:053cde6e8953ebd834df8f6382e68be83adb39bfc063e40b0fc61b4b333938f1 for build container...
Running on runner-5a589b3b-project-4-concurrent-0 via fe809a5589e7...
Fetching changes...
HEAD is now at 73c1553 updated shell binary
From https://gitlab.modularsystems.io/ryan/power-of-pipelines
   73c1553..b5330a7  master     -> origin/master
Checking out b5330a79 as master...
Skipping Git submodules setup
$ sh test.sh
Job succeeded

Now you can rewrite test.sh to execute your test workflow.

 

Setting up Continuous Integration

Now that we have a testing stage, we need to use it to accept or deny changes into our master trunk. To do so, let’s create a new feature branch.

$ git clone
$ git checkout -b feature
$ vim test.sh # add a line that echoes the exit code
$ git commit -am "checking in a feature for my merge request"; git push origin feature

Now that we’ve pushed our branch, navigate to GitLab. From the project’s page, click Merge Requests and create a new one. If you’re quick, you may see your tests running, or see that they have already passed or failed, when you create your request. Here’s mine, for example:

I hope the attentive and discerning developer I ask to review this sees that the build failed, so we don’t merge this into master until it’s fixed. Hey, it worked fine on my machine.

TL;DR

CI/CD pipelines are a great way to make folks happier. They can help remove friction by enforcing code standards, so your team’s only syntax squabble is tabs vs spaces. They can catch silly typos and other errors so that your dev/ops don’t have to. You should probably use them.

Notes

This is the beginning of a series on CI/CD. We will update this article with links to future posts.

If you have any comments or questions, we want to hear from you. Please let us know what you think in the comments below.