CI/CD on Containers

A blog post by Dominik

Within CodeCamp:N we develop digital service products for the financial and insurance sector. Our goal is to build the future out of 0 and 1. To ensure that we can keep pursuing this goal, we need developers. When we started CodeCamp:N, we started with 2 “developers”. Today we employ around 20 – plus 10 developers from external partners, both on- and offshore. This enormous growth requires change – not only in how we organise ourselves, but also in our IT infrastructure.

The goal of this post is to give you some insight into how our CI/CD and code management infrastructure changed between 2017, with 2 developers, and 2019, with 30 developers – maybe these insights are helpful to you!


Back in 2017 - the early days

When we started CodeCamp:N in 2017, our tech department effectively consisted of two full-time “developers”. I use the quotation marks because neither of us was a real software development specialist. Nevertheless, we set ourselves one goal: build a state-of-the-art CI/CD pipeline for cloud-native applications on container-based infrastructure – at least that is what we dreamed of.
Our previous knowledge was manageable: I had gathered some first experience with Docker and had tried Jenkins in my previous job. My co-worker had had some first touchpoints with AWS. And we were both experienced Git users.
Hence, we decided that our first step would be to spin up an EC2 instance in AWS. Based on some of our early experiences, I decided to rely on Rancher 1.6 for container orchestration and for a graphical interface for container management.
Additionally, we built a customized Jenkins container that was able to build Java, JavaScript (npm), Python and Ruby on Rails projects – the stacks used in our products. It is also worth noting that we configured the Jenkins jobs solely via the graphical interface and ran all jobs directly on the master instance. On the same EC2 host, we ran a Gitlab CE container, including a packaged Postgres database and a Redis cache, as well as a private Docker registry.
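To give you an idea of that single-host setup, here is a minimal sketch of how the Rancher server, the Gitlab CE container and a private Docker registry can be started side by side on one EC2 instance – ports, host paths and image tags are illustrative, not our exact configuration:

# Rancher 1.6 server for container orchestration and the management UI
docker run -d --name rancher-server \
    --restart unless-stopped \
    -p 8080:8080 \
    rancher/server:stable

# Gitlab CE, with config, logs and data persisted on the host
docker run -d --name gitlab \
    --restart always \
    -p 80:80 -p 443:443 -p 2222:22 \
    -v /srv/gitlab/config:/etc/gitlab \
    -v /srv/gitlab/logs:/var/log/gitlab \
    -v /srv/gitlab/data:/var/opt/gitlab \
    gitlab/gitlab-ce:latest

# private Docker registry on the same host
docker run -d --name registry \
    --restart always \
    -p 5000:5000 \
    -v /srv/registry:/var/lib/registry \
    registry:2
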
What might also be interesting to you: the monthly costs for this EC2 instance (t2.large) and a second EC2 instance (t2.medium), which we used for the development and testing environments of our products, did not exceed 50 €.

Today in 2019 - CodeCamp:N grew (up)

With more developers joining our crew, the performance of the running Gitlab CE container could no longer meet our expectations. This is why we had to spin up a second t2.large instance solely for the Gitlab container.
One thing we naturally wanted to prevent was the quality of our code suffering because many different people were now working on it. We wanted to make sure that each and every line of code going into production meets the same quality criteria. Hence, we defined a template for our Jenkins jobs that every developer needed to follow (see the sketch below the list):

  • Build the code
  • Run the unit tests
  • Build the Docker image
  • Push Docker image to Docker registry

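For a typical Java backend, a job following this template looked roughly like the minimal sketch below – the Maven goals, service name and registry address are illustrative placeholders, not our exact setup:

# build the code and run the unit tests
mvn clean verify

# build the Docker image, tagged with the commit hash Jenkins exposes as GIT_COMMIT
docker build -t registry.example.internal:5000/my-service:${GIT_COMMIT} .

# push the image to our private Docker registry
docker push registry.example.internal:5000/my-service:${GIT_COMMIT}
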
For our JavaScript frontends (single-page applications deployed via S3 and CloudFront), the job basically comprised the following commands:

# build the SPA frontend
cd frontend/site
git submodule status
yarn install
yarn run build:devbuild
   
echo "PACKAGE AND UPLOAD TO S3 ..."   
   
# S3 bucket names   
BUILD_ARTIFACTS_BUCKET="..."   
SPA_APP_BUCKET="..."   
CLOUDFRONT_DISTRIBUTION="..."   
   
# make the AWS CLI available to the Jenkins user
pip install --user awscli
PATH=/var/jenkins_home/.local/bin:$PATH
#python --version
#aws --version
#printenv

# package the build output (we are already inside frontend/site)
cd dist
BUNDLE_NAME="dist_${GIT_COMMIT}.zip"   
BUNDLE="../dist_${GIT_COMMIT}.zip"   
   
echo "create zip ${BUNDLE}"   
zip -r ${BUNDLE} .   
   
aws s3 cp ${BUNDLE} s3://${BUILD_ARTIFACTS_BUCKET}/   
   
#==================================================   
   
echo "DOWNLOAD, DEPLOY AND INVALIDATE_CACHES ..."   
   
aws s3 cp s3://${BUILD_ARTIFACTS_BUCKET}/${BUNDLE_NAME} .   
unzip -o ${BUNDLE_NAME}   
rm ${BUNDLE_NAME}   
   
aws s3 sync . s3://${SPA_APP_BUCKET} \   
    --acl public-read   
   
aws cloudfront create-invalidation \   
    --distribution-id ${CLOUDFRONT_DISTRIBUTION} \   
    --paths "/*"   

Again, a quick note on costs: the monthly bill grew to around 100 € for the three running instances.

And another thing: the first issues with OS updates of the EC2 hosts started to come up.

Things got mature – new services in the pipeline

New crew members mean new demands. The first senior software developer who joined us asked for our software component management solution – we had none. He also asked for our Sonar instance for test coverage information – we had none. We also started to create more mature software products and asked ourselves – what about security? Because again: we had none (at that time!).
Hence, we needed at least three new services in our CI/CD pipeline:

  • Sonarqube for static code analysis and test-coverage reporting
  • Nexus as a repository manager for our software components and Docker images
  • a service for automated security checks of our code and its dependencies

It was possible to run all of these services on the two existing EC2 instances.

With these new services up and running, we needed to adjust and expand our template for the Jenkins jobs (see the sketch below the list):

  • Build the code
  • Run the unit tests
  • Scan the code using Sonarqube
  • Publish artifact in Nexus repository
  • Build the Docker image
  • Push Docker image to Docker registry included in Nexus

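Put together, an extended job for a Maven-based Java service looked roughly like the following sketch – the Sonarqube URL, Nexus host and service name are illustrative placeholders, not our exact configuration:

# build the code and run the unit tests
mvn clean verify

# scan the code with Sonarqube (assumes the sonar-maven-plugin and a server token are configured)
mvn sonar:sonar -Dsonar.host.url=https://sonarqube.example.internal

# publish the build artifact to the Nexus repository (distributionManagement configured in the pom)
mvn deploy -DskipTests

# build the Docker image and push it to the Docker registry included in Nexus
docker build -t nexus.example.internal:8082/my-service:${GIT_COMMIT} .
docker push nexus.example.internal:8082/my-service:${GIT_COMMIT}
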
So, that was our journey from 2017 to now, 2019. I hope you enjoyed it and that these insights can help you in some way.