If you’ve ever had the opportunity to be behind the scenes of a real live show, you know that a dress rehearsal is a step no director would skip. There’s no way to recreate the feeling of a real-life 500-seat theater without actually running the act in full before the anxious crowd enters.

This simple show-business protocol has an exact equivalent in the software domain: Continuous Integration (CI). Any application that does more than ‘hello world’ needs testing, since any change can affect one of the many edge cases it was meant to serve. But a truly complex application needs even more intricate evaluation steps. It may require interaction with other applications, and in some cases may even behave exactly as expected and still take part in an unexpected, faulty chain of events.

Continuous integration pipelines have revolutionized software quality by automating the way complex software systems are built and tested. They set the stage for a dress rehearsal where your code is built and tested in a repeatable manner that is as close as possible to how it will have to perform in production.

A good CI/CD framework speeds up your development cycles by reducing manual checks and ad-hoc setups of complex system-wide tests. It keeps your users’ experience in constant improvement by making sure no existing capability is degraded by a feature you’ve just released, and it reduces downtime by stopping critical bugs at the gate.

The traditional approach to CI/CD

Continuous integration was originally represented by a centralized "single pipeline" build/deploy approach. Its core premise is that the same system that "builds" an application in response to source code changes (the "CI" part of CI/CD) should be the one that "deploys" that change to test, staging and production systems (the "CD" part).  

The solutions taking this approach usually have a control node (server) that executes pipelines and steps centrally, or orchestrates jobs running on other nodes. The logic that defines the different pipelines, and the procedures each step is responsible for, is maintained and stored on the control node. Many of the most prevalent build systems in use today, such as CircleCI, Travis CI, Jenkins and others, take this approach.

It's easy to see why this approach is attractive: the process is often defined and orchestrated in a single system, in a single (or at least readily traversable) pipeline of steps, and you can generally be confident that the artifact built from your source code is the one that gets deployed to the running system. It also gives DevOps and development teams a comfortable interface for defining their specific CI/CD needs out-of-band from the code they are working on.

A push to a branch in a Git repo would trigger a build, a creation of a new image in a container registry, and finally a deployment to the appropriate infrastructure for the lifecycle stage (dev, test, staging or production).
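As a sketch of that flow, a hypothetical GitLab CI configuration might look like the following. The registry URL, image names and deployment target here are placeholders, not from any real project:

```yaml
# .gitlab-ci.yml -- a hypothetical single-pipeline build/deploy flow
stages:
  - build
  - publish
  - deploy

build:
  stage: build
  image: golang:1.21
  script:
    - go build ./...
    - go test ./...

publish:
  stage: publish
  image: docker:24
  script:
    # Build and push a container image tagged with the commit SHA
    - docker build -t registry.example.com/myapp:$CI_COMMIT_SHA .
    - docker push registry.example.com/myapp:$CI_COMMIT_SHA

deploy:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    # Roll the new image out to the target environment
    - kubectl set image deployment/myapp myapp=registry.example.com/myapp:$CI_COMMIT_SHA
  only:
    - main
```

Each push triggers the build and publish stages; only pushes to "main" go on to deploy, mirroring the lifecycle-stage gating described above.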

From a side-show to the main event

While the software development lifecycle has been well automated by the continuous integration platforms that dominated the market, infrastructure has remained a largely manual process that usually required the involvement of separate teams. With the wide spectrum of infrastructure demands made by today’s development teams, it has become increasingly crucial to implement infrastructure automation.

Moreover, the infrastructure used to run your applications is not managed in the same way as the assets of the application itself. Changes to the operation of an application that require corresponding infrastructure changes (such as the implementation of blue-green or canary deployments, changes to load balancers, and so on) require coordination between these two separate teams and a shared understanding of the goal.

Application assets are stored in a version control system such as Git, so there's a clear history of how code moved through the development process and into the "main" branch. 

With infrastructure, however, that level of declarative control may or may not exist, and it's almost certainly not tied to application deployment. This means that complex deployments require both development-team action, to merge the code to "main" and invoke the deployment process, and infrastructure-team action, to prepare the new infrastructure, set up traffic management rules, and so on. When things go well, life is good, but when things go wrong, they can go very, very wrong indeed.

That's where a recent trend called GitOps comes in. GitOps is a paradigm that automates application deployment and infrastructure provisioning. It typically uses Git, the open source version control system, as a single source of truth for declarative infrastructure and applications.

GitOps expands on the idea that all the artifacts and code for a given system should be stored in a version control system such as Git (hence the name). Most (all, one can hope) developers already do this for code, but GitOps extends this to infrastructure definitions as well, making the source code repository into a complete "source of truth" for your application. Because many applications are now deployed as container images on an orchestration environment such as Kubernetes, the entire integration and deployment process can now be defined in "infrastructure-as-code" (IaC) configurations and managed as part of a source control platform's "pull request / merge" processes.
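To make this concrete, here is a sketch of the kind of declarative manifest that might live in such a repository, for a hypothetical app called "myapp" with an illustrative image tag:

```yaml
# deploy/deployment.yaml -- a declarative manifest kept in Git alongside the code.
# A GitOps controller reconciles the cluster to match this file; changing the
# image tag happens via a pull request, not a manual kubectl command.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:v1.4.2   # hypothetical image tag
          ports:
            - containerPort: 8080
```

Rolling back a bad release then becomes a `git revert` of the commit that changed the image tag, with the full history preserved in the repository.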

Many tools follow the GitOps paradigm, among them GitLab CI and GitHub Actions, which come built into their respective source control platforms.

Where traditional CI/CD starts to fail

As we've mentioned, a complete CI/CD implementation consists of two separate phases:

  1. The CI phase, which performs application software builds and automated source code checks, runs tests, creates the container images and so on.
  2. The CD phase, which actually deploys the application's container images and implements any required infrastructure deployments or changes.

The CI phase is usually executed many times a day by numerous parts of the R&D organization, since it automates a lot of developer-level unit tests (alongside more complex system-wide scenarios), and developers rely on its automation and repeatability to release code safely into the company’s internal “main” branch.
Being such an integral component of the development cycle, CI is becoming more and more important. This is where two characteristics of traditional CI start to expose its inability to match the needs of modern development teams.

One of the main things traditional CI suffers from is that it doesn’t “speak the same language” as the other parts of the development process. A modern team working in an environment like Kubernetes is used to creating and managing Kubernetes Jobs and regular Deployments, while in their CI tool they need to define proprietary Pipelines and Workflows. A team used to maneuvering Containers, Pods and CronJobs is now forced to master the native tongue of Actions, Runners and Workflows.

The other main issue with traditional CI methods is their ability to scale. Their centralized and isolated approach makes the CI process run its pipelines and workflows on dedicated machines. That could mean setting up more nodes (the "machines" on which build agents run) in Jenkins, or configuring more and more GitLab Runners to execute tasks in parallel in GitLab CI, but it all means one thing: you have to “learn” how to do that.

For a team using Kubernetes, that’s a little odd. They’re used to the idea of scale being abstracted away from them. They’ve also spent time getting to know the internals of Kubernetes well enough to manage their deployments. Learning a whole different set of terms and primitives to control and operate their CI may not make much sense.

Kubernetes as a "cloud-native CI/CD"

This is exactly where modern CI/CD solutions come in. These solutions basically say: “You’re in Kubernetes? Stay there. We’ll come over.” Kubernetes is treated as the living organism it is. It can run your production deployments, but it can also host your CI pipelines, all using the same amazing primitives it offers.

Any CI/CD system is basically a job execution engine. Take Jenkins, for example, as a representative of traditional CI solutions: the Jenkins server schedules jobs over nodes, which are basically any servers with a Jenkins agent installed. The Jenkins agents are the job executors, taking care of the actual execution of the pipeline in question. Kubernetes itself can be considered a job execution system: it has a scheduler that knows how to place pods (which we can consider job executors) on the nodes that host the jobs.
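That “job execution engine” view of Kubernetes is easy to see in a plain Kubernetes Job. The sketch below uses a trivial test command standing in for a real CI step, and the job name is illustrative:

```yaml
# A bare Kubernetes Job: the scheduler places the pod on a node,
# runs the container to completion, and records success or failure.
apiVersion: batch/v1
kind: Job
metadata:
  name: run-tests   # hypothetical job name
spec:
  backoffLimit: 1   # retry at most once on failure
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: tests
          image: golang:1.21
          command: ["go", "test", "./..."]
```

Scheduling, retries, logs and resource limits all come from Kubernetes itself; what's missing is everything layered on top: ordering, pipelines, and integrations.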

Clearly, a simple job execution engine still has some way to go before it becomes a fully-featured CI/CD system.

For example, any modern CI/CD system has the concept of a Pipeline: a series of jobs or stages that perform different tasks, like building the application code, running its related tests and finally deploying it to the relevant environment. CI/CD systems also quite often have a user interface and know how to integrate nicely with source control platforms like GitLab, GitHub or Bitbucket.

Seems far off, right? Actually, no. The solutions for turning Kubernetes from the job execution engine it is into a world-class CI/CD system are already here among us, and boy, they are amazing. There are a lot of modern solutions out there worth knowing, including Argo CD, Jenkins X, Spinnaker, Kaniko, Flux and the Helm Kubernetes Operator, but we’ll cover the space through the lens of one that you should definitely consider: Tekton.

Tekton is a framework for building CI and CD implementations, and it runs entirely on Kubernetes. While traditional CI/CD solutions execute pipelines and steps centrally, or orchestrate jobs running on other nodes, Tekton is serverless and distributed, with no central dependency for execution.
Tekton adopts a 'container-first' approach, where every step is executed as a container running in a pod (much as a Jenkins job runs on an agent node).

Tekton is considered a “Kubernetes-native” CI/CD system. This means that, instead of having its own scheduler and primitives, it leverages Kubernetes as much as it can, and it requires a Kubernetes cluster to be deployed: there is no Tekton without Kubernetes. If you’re a Kubernetes user, there’s no production without Kubernetes either, so you’ll appreciate this tight bond.

It’s hard to fully understand Tekton without some basic knowledge of Kubernetes. Tekton is Kubernetes-native by nature, making the most of Kubernetes primitives like resources, pods, and control mechanisms to add the extra layer that turns Kubernetes from a simple job execution engine into a real CI/CD tool.

One of the fundamental primitives Tekton adds is the Tekton Pipeline. To chain multiple stages of CI together, you create a Pipeline: a primitive that takes care of running those stages in order and reporting any failure, pointing to the stage to investigate. A Tekton Pipeline defines a series of ordered tasks, where a task in a Pipeline can use the output of a previously executed task as its input.
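A minimal Pipeline of this kind might look like the following sketch. The pipeline and task names are illustrative, and it assumes Tasks called "build-app" and "run-tests" already exist in the cluster:

```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: build-and-test   # hypothetical pipeline name
spec:
  tasks:
    - name: build
      taskRef:
        name: build-app        # assumes a Task with this name exists
    - name: test
      runAfter:
        - build                # ordering: test runs only after build succeeds
      taskRef:
        name: run-tests        # assumes a Task with this name exists
```

The `runAfter` field expresses the ordering between tasks; Tekton turns each task into pods scheduled by Kubernetes itself.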

Tasks are useful for simpler workloads, such as running a test, and each executes as a single Kubernetes Pod, which generally keeps things simple. Pipelines, however, can capture complex workloads, such as those with system-wide dependencies.

To realize the concept of a Pipeline, Tekton defines a set of Kubernetes Custom Resources that act as building blocks from which you can assemble CI/CD pipelines. Once installed, Tekton Pipelines become available via the Kubernetes CLI (kubectl), exposing entities like PipelineRun, which instantiates a Pipeline for execution with specific inputs, outputs, and execution parameters.
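As a sketch of those building blocks, here is a minimal Task and the PipelineRun that would kick off a pipeline referencing it. All names here are illustrative, and the PipelineRun assumes a Pipeline called "build-and-test" exists:

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: run-tests            # hypothetical task name
spec:
  steps:
    - name: test             # each step runs as a container in the Task's pod
      image: golang:1.21
      script: |
        go test ./...
---
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: build-and-test-run-1 # one concrete execution of the pipeline
spec:
  pipelineRef:
    name: build-and-test     # instantiates the referenced Pipeline
```

Applying these with `kubectl apply -f` is all it takes to trigger an execution, and `kubectl get pipelineruns` shows its status, just like any other Kubernetes resource.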

Tekton's customizability can make it a bit challenging to implement, but there's an active community behind it (including a great list of Tekton friends!) and guidance is available for running Tekton on public clouds such as Google Cloud and AWS. Martin Heinz, a software developer and DevOps engineer from IBM, wrote a detailed article explaining his process for implementing Tekton, and Baptiste Collard created this post to describe his experiences implementing a more complete CI/CD system.

The next evolution of CI/CD is here

Kubernetes has changed the application development game in a lot of ways, and the most recent evolution of CI/CD is a prime example. Leveraging CI/CD based on Kubernetes gives organizations improved control over their deployment processes, including better security, better and more efficient management of joint application and deployment configuration information, and faster recovery and replication of runtime environments. As with any evolving process, there are bits that are still not quite perfect, but the next time you think about your CI/CD setup, ask yourself: are you doing it cloud-native style?
