Client: Company providing payment software solutions
Duration: 2018 - ongoing

Project of ITGix



The Challenge
First, we had to gather every piece of information related to the hybrid-cloud
Kubernetes cluster. This was a challenge: consider the security approvals that had
to be granted, in a very short period of time, before such information could be
shared with a third-party company. Nevertheless, we had built strong mutual trust
with the software provider, which made things smoother.
The next challenge was to build the Kubernetes cluster, on which we would spin up
new integration test environments, within the deadline. We suggested running
everything in AWS, where we have an excellent knowledge base for operating such
clusters.

The Solution
Kubernetes initialization
Of course, the goal here was to have a fail-proof environment that needs no
constant maintenance and works without hiccups. To achieve this we utilized the
strengths of Kubernetes, Jenkins and AWS services. Starting with the Linux
distribution, we built our own AMI based on the client’s security and versioning
requirements. This gave us the ability to spawn a production-ready Kubernetes
cluster environment running in its own VPC with private subnets, a domain and
ELBs.
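The flow above can be sketched roughly as follows. The template, variable and
cluster names are illustrative, and the case study does not name the exact
provisioning tool (kops is shown here as one common option for AWS clusters of
that era):

```shell
# Bake a hardened node image from the client-approved base AMI
# (the Packer template name and its variables are assumptions).
packer build -var 'base_ami=ami-0abc1234' hardened-node.json

# Stand up the cluster in its own VPC with private subnets,
# using the custom AMI for every node.
kops create cluster \
  --cloud aws \
  --topology private \
  --image "$CUSTOM_AMI_ID" \
  --zones eu-west-1a,eu-west-1b \
  k8s.payments.internal
kops update cluster k8s.payments.internal --yes
```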

Jenkins server
Jenkins is our product of choice when it comes to automating deployment processes
and streamlining them into pipelines. It handles all of the steps necessary for the
environment to be created, tested and then torn down, with no need for human
intervention.
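The lifecycle Jenkins drives for each throw-away environment can be sketched as a
simple sequence; `create_env`, `run_tests` and `teardown_env` are hypothetical
stand-ins for the actual pipeline stages:

```shell
# Sketch only: each helper below stands in for a real pipeline stage.
run_integration_env() {
  local env_id="$1" status=0
  create_env "$env_id" || return 1   # bring up the test environment
  run_tests "$env_id" || status=$?   # run the integration suite
  teardown_env "$env_id"             # always reclaim cluster resources
  return "$status"                   # report the test result, not teardown's
}
```

Capturing the test status before tearing down means a failed suite still cleans
up after itself while the run as a whole is marked failed.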

HELM utilization
Having Helm at our disposal made everything much easier. First we utilized its
power to bring up the Kubernetes infrastructure components, such as:
• Prometheus;
• Grafana;
• Alert manager;
• Elasticsearch;
• FluentD;
• Kibana;
• Ingress controller;
• Quay-enterprise (Docker registry);
Then we used Helm to template the integration test environment and, later on, to
customize it as needed from the pipelines. Having every chart stored in a git
repository brings even more flexibility and control over the expected outcome.
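A rough sketch of that Helm workflow (Helm 3 syntax; the repository, chart paths
and release names are assumptions, not the client’s actual ones):

```shell
# Infrastructure components installed from public charts:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/prometheus

# The integration-test environment itself, templated from a chart
# kept in git and customised per pipeline run:
helm install "it-env-${BUILD_NUMBER}" ./charts/integration-env \
  --namespace "it-${BUILD_NUMBER}" --create-namespace \
  --set image.tag="${IMAGE_TAG}"
```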

Pipeline Concept
We chose a chain of Jenkins jobs as our automation approach. Instead of having
separate jobs for deploying each application, we had a single “deploy” job that
took variables as input from the primary wrapper job and worked for every
application. One of the many challenges was to make all jobs and environments
able to run in parallel, so that the only restriction on parallel testing would be the
actual Kubernetes resources available. All jobs had to be designed with
scalability in mind. In the end we had two approaches that allowed us to run
multiple integration tests in parallel: via a Jenkins URL call, or through an ECR
scanner job that allows only one integration test to run at a time. We used Helm as
our Kubernetes provisioning tool; it provides nice versioning and is easily
manageable in a repository.
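Triggering the single parameterised “deploy” job remotely works through Jenkins’
standard `buildWithParameters` endpoint; the helper function, host name and
parameter names below are illustrative assumptions:

```shell
JENKINS_URL="https://jenkins.example.com"

# Compose the remote-trigger URL for the parameterised "deploy" job
# (APP and ENV_ID are hypothetical parameter names).
build_trigger_url() {
  local app="$1" env_id="$2"
  printf '%s/job/deploy/buildWithParameters?APP=%s&ENV_ID=%s\n' \
    "$JENKINS_URL" "$app" "$env_id"
}

# Two environments can run in parallel simply by triggering twice:
# curl -X POST -u "$USER:$API_TOKEN" "$(build_trigger_url payments-api it-01)"
# curl -X POST -u "$USER:$API_TOKEN" "$(build_trigger_url payments-api it-02)"
```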

In the end we had a Kubernetes environment with a Jenkins instance configured to
automatically deploy anything that was necessary, run tests and manage the Docker
images on its own. All of its configuration was persisted in Docker images or git
repositories, so it could be deployed on any private or public cloud in a very
short amount of time and work out of the box.