In this article I’ll try to cover almost every aspect of how we’re using GitLab’s CI system to realize our vision of a continuous integration architecture that perfectly fits our needs. Since GitLab’s CI system is a newcomer in this area, it may not have the power of Jenkins or its competitors, but we think it can hold its own.

How does GitLab CI work?

Every repository has a so-called .gitlab-ci.yml file in its root. This file contains the configuration for the whole CI process. You can separate your process into small jobs like compiling, code quality checking, building and testing. All those jobs inside a .gitlab-ci.yml file form the so-called build pipeline.
GitLab provides Runners (small pieces of software) which are responsible for executing these build pipelines. You can install the GitLab Runner on most platforms and choose between different executor types, for example the Shell or the Docker executor. We definitely recommend using the Docker executor, because for every pipeline and every job inside it the executor starts a separate Docker container in which the job is executed. Although this is not the fastest or most efficient approach, it has a big advantage: every build is a new, clean and reproducible procedure which always produces exactly the same output as long as the input hasn’t changed. For us, this aspect clearly outweighs the performance disadvantages.

But wait. How can you build a new Docker image while your pipeline is executed inside a Docker container? The trick is that GitLab’s Runner starts the Docker container in privileged mode and mounts the Docker socket. This gives a Docker container more access to the underlying OS and allows it to start nested Docker containers (this principle is called DinD, “Docker in Docker”, and is directly supported by Docker). The default Docker image used by GitLab’s Runner as the execution environment is the official docker:dind image, but you’re free to choose any other image as your execution environment.
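To make this concrete, here is a minimal sketch of what such a job looks like from the pipeline’s perspective. The job and image names are just examples, and it assumes the Runner is registered with privileged mode and the Docker socket mounted as described above:

```yaml
build_image:
  image: docker:dind        # execution environment that ships the Docker CLI
  script:
    - docker info                        # talks to the Docker daemon made available by the Runner
    - docker build -t my-app:latest .    # builds a new image from the repository's Dockerfile
```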

Let’s imagine a very simple .gitlab-ci.yml file. You specify the node:9 Docker image as your execution environment and define a simple list of commands which are executed directly on the shell inside that environment (inside the node:9 container). If the application in your repository is a Node.js application, you have almost reached your goal: because GitLab automatically clones the repository into your execution environment, you only have to call npm run build and you will produce a so-called artifact (in the world of continuous integration, an artifact is something produced during a build, a binary for example). Just before the Docker container (your execution environment) is stopped and removed, GitLab identifies your artifacts (of course they must be defined in the .gitlab-ci.yml file) and uploads them to the project page for later download.
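A minimal .gitlab-ci.yml for such a Node.js project could look roughly like this (the job name, build script and artifact path are assumptions, not a drop-in configuration):

```yaml
image: node:9            # execution environment for the job below

build:
  script:
    - npm install        # fetch the public dependencies
    - npm run build      # produce the build output
  artifacts:
    paths:
      - dist/            # GitLab uploads this directory as the job's artifact
```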

Real world problems

In the real world, almost no complex application is built as simply as a standalone Node.js application without public and/or private dependencies, a backend or other crazy things.
Almost all of our (newest) applications are built with Angular 5 as the frontend and Neos Flow (a modern PHP framework) as the backend technology. Both frameworks are great, but without third-party or self-developed libraries/packages/modules they quickly become unusable in complex scenarios. While installing public third-party dependencies is very easy (thanks to Composer and npm), it gets more difficult when you have private dependencies (and no privately hosted registry…). It gets even harder when your vision of a nearly perfect continuous integration architecture requires that every private, self-developed dependency is also built dynamically at build time and pulled in (or that a cached artifact is reused if no new commits arrived in the dependency repository). That’s the point where GitLab’s CI system gets weak (at least in the CE version; we don’t know exactly how this behaves in EE).

Since this was a real problem for us, and we think a lot of other teams face the same problems regardless of the language they use, we decided to develop a small set of tools to solve them. The idea that this could stay a small set of tools (just bash scripts) was incredibly wrong. After a few days it became so complicated that we decided to start over.

Instead of using a preconfigured Docker image like Docker’s DinD image as the execution environment and issuing a lot of commands inside the .gitlab-ci.yml (which was unmaintainable with more than 50 projects anyway), we decided to go one step further and build our own execution environment. Based on the experience we had made before, we knew we had to choose a solid and strong foundation for our set of tools. We quickly settled on Node.js, because every one of our colleagues has fundamental skills in JavaScript and its concepts. But while JavaScript has the flexibility we need, it is not exactly famous for keeping bigger or more complex codebases maintainable. That’s why we wrote all of our build tools in TypeScript and transpile them to JavaScript before building our executables with npm. And because we’re big fans of containerizing everything that can be containerized, we of course containerized our build tools as well, so we can use them as the execution environment in our application builds.

Our Build Tools

Because their rich set of features would blow up this article, I’ll put them into their own post. But I’ll give you a short idea of what our build tools are capable of.

tripip

…is short for “trigger pipeline”. We may rename it in the future 😉
Because GitLab’s CI system (at least in CE) doesn’t support real cross-project pipelines, we created a little workaround. tripip is able to trigger a pipeline in another project, wait until it has finished and download its artifact into the current pipeline. It can even detect whether the pipeline needs to be restarted or whether the previous artifact can simply be downloaded because no new commits are available.
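tripip itself will be described in its own article, but to give an idea of the workaround: GitLab’s API already offers pipeline triggers and artifact downloads, so a heavily stripped-down version of what such a job does could look like this (the project ID, host, tokens and job name are placeholders, and the polling step is omitted):

```yaml
build_dependency:
  image: alpine:3.7
  script:
    - apk add --no-cache curl
    # trigger a pipeline in the dependency project (project ID 42 is a placeholder)
    - >-
      curl --request POST
      --form "token=$DEP_TRIGGER_TOKEN"
      --form "ref=master"
      "https://gitlab.example.com/api/v4/projects/42/trigger/pipeline"
    # ...poll the downstream pipeline until it has finished (omitted here)...
    # download the artifact of the dependency's "build" job
    - >-
      curl --header "PRIVATE-TOKEN: $API_TOKEN"
      --output dependency.zip
      "https://gitlab.example.com/api/v4/projects/42/jobs/artifacts/master/download?job=build"
```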

redliub

…is “builder” written backwards. Again, we’re thinking about renaming it in the future 😉 What does redliub do? It builds Docker images, creates a bunch of image labels (including Label Schema ones, see label-schema.org) and takes care of advanced cache strategies around GitLab’s CI cache system.
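To give a taste of the labelling part (a sketch only, not redliub’s actual code), a build step could attach org.label-schema.* metadata using GitLab’s predefined CI variables, reusing the DinD setup shown earlier:

```yaml
build_image:
  image: docker:dind
  script:
    - >-
      docker build
      --label "org.label-schema.schema-version=1.0"
      --label "org.label-schema.vcs-ref=$CI_COMMIT_SHA"
      --label "org.label-schema.vcs-url=$CI_PROJECT_URL"
      --label "org.label-schema.build-date=$(date -u +%Y-%m-%dT%H:%M:%SZ)"
      --tag "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
```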

smoker

…we named it smoker because we use smoke tests to ensure the build was successful. smoker helps us start a freshly built Docker image, put it into its own temporary network, wait until the container becomes healthy and then execute our application-specific smoke tests.
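Conceptually that boils down to something like the following sketch (the health-check loop is an assumption, the image tag matches the build sketch above, and the actual smoke tests are application-specific):

```yaml
smoke_test:
  image: docker:dind
  script:
    - docker network create smoke-net                 # temporary, isolated network for the test
    - docker run -d --name candidate --network smoke-net "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
    # wait up to ~60 seconds until the image's HEALTHCHECK reports "healthy"
    - >-
      for i in $(seq 1 30); do
      [ "$(docker inspect --format '{{.State.Health.Status}}' candidate)" = "healthy" ] && break;
      sleep 2;
      done
    # ...run the application-specific smoke tests against "candidate" here...
    - docker rm -f candidate && docker network rm smoke-net   # clean up
```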

pusher

…is one of the last tools in our chain and is responsible for finally pushing the built image to our private Docker registry (the GitLab Registry). Furthermore, it also takes care of image tagging, caching and preparing the last step: deployment to Docker Swarm.
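Stripped of the tagging strategy and caching, the push itself relies on GitLab’s predefined registry variables; roughly (again a sketch, not pusher’s code):

```yaml
push_image:
  image: docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker tag "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" "$CI_REGISTRY_IMAGE:latest"
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
    - docker push "$CI_REGISTRY_IMAGE:latest"
```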

deployer

…this tool still has to be written. At the moment it’s only a working proof of concept in bash which is able to deploy the Docker image to production.
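For reference, rolling a new image out to an existing Docker Swarm service can be as small as a single command, which is roughly the direction this tool goes (the service name and the manual trigger are assumptions; the job has to talk to a Swarm manager):

```yaml
deploy:
  image: docker:dind
  script:
    # assumes the Docker CLI is pointed at a Swarm manager; rolls out the new image
    - docker service update --image "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" my-app
  when: manual        # deployments should be triggered explicitly
```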

 

After 6 months of active development, battle testing at production scale with about 50 projects, more than 6,000 pipelines run and continuous improvements, we have decided to provide our production-ready execution environment (our Docker image) to other teams and to open source the complete project during 2018. Before publication we will have to rework some small code sections and overhaul/complete our documentation. If you’ve read this post and are really excited to test our build tools, please don’t hesitate to contact me!

 

You can read more about them here: … has to be written…
