Continuous Integration and Continuous Deployment in the World of DevOps Methodology

In today's technology landscape, manual tasks are gradually becoming obsolete.

The industry expects development and deployment processes to move quickly. CI/CD (Continuous Integration and Continuous Deployment) plays a central role in automating every phase of the software development lifecycle.


The basic phases of CI/CD are commit, build, test, and deploy. With a CI/CD pipeline, delivering the product to the customer becomes faster and less risky. Here I will discuss the tools involved in each phase of the CI/CD pipeline.
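These four phases map naturally onto a declarative Jenkins pipeline. The sketch below is illustrative only: the repository URL, Maven commands, and deploy script are placeholders, not part of any real project.

```groovy
// Illustrative Jenkinsfile skeleton: one stage per CI/CD phase.
// Repository URL and build/deploy commands are placeholders.
pipeline {
    agent any
    stages {
        stage('Commit') {
            steps {
                git url: 'https://example.com/your-repo.git'   // pull the latest code
            }
        }
        stage('Build') {
            steps {
                sh 'mvn -B clean package'                      // compile and package
            }
        }
        stage('Test') {
            steps {
                sh 'mvn -B verify'                             // run the test suites
            }
        }
        stage('Deploy') {
            steps {
                sh './deploy.sh staging'                       // hypothetical deploy script
            }
        }
    }
}
```

Each later section of this post fills in one of these stages with concrete tools.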


Let’s consider a scenario to explain CI/CD in a better way. Imagine that you are building a web application that will be deployed on live web servers. For this project, you need developers who will write the code for the application. Each day, the code they write has to be stored somewhere. Tools like Git and SVN store code and track its history; such a tool is known as a version control system.


Introduction to tools used in phases of CI/CD Pipeline

We use various continuous integration tools like Jenkins, Bamboo, and many others available in the market. Because Jenkins is open source, it is broadly used and has a large number of plugins for integrating with other tools; a few important ones are the Git, Maven, Ant, and Docker plugins.


CI tools check for new code commits at a preset interval, or can be configured so that as soon as code is committed, it is pulled from the version control tool into the workspace for further processing. The CI tool then automatically builds and tests every change that is committed. We can define the entire job or task in Jenkins.
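In a Jenkinsfile, this polling behavior is a one-line trigger. A minimal sketch, assuming the job is configured to load its pipeline from SCM (the cron-style interval is just an example):

```groovy
// Sketch: poll the version control system for new commits every ~5 minutes.
// A push-triggered webhook is the common alternative to polling.
pipeline {
    agent any
    triggers {
        pollSCM('H/5 * * * *')    // check for new commits roughly every 5 minutes
    }
    stages {
        stage('Checkout') {
            steps {
                checkout scm      // pull the committed code into the workspace
            }
        }
    }
}
```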


Next, the code goes through the build phase, where tools like Maven, Gradle, etc. can be used. In this phase, the build tool compiles the code, runs unit tests, and packages the application in WAR/JAR format; JUnit generates the test report.
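A build stage along these lines might look as follows. This is a sketch assuming a standard Maven project layout; the artifact paths are the Maven defaults, not project-specific values.

```groovy
// Sketch of a Maven build stage: package the code and publish JUnit results.
// Assumes a standard Maven layout (target/ output, Surefire test reports).
stage('Build') {
    steps {
        sh 'mvn -B clean package'                     // compile, run unit tests, package as JAR/WAR
    }
    post {
        always {
            junit 'target/surefire-reports/*.xml'     // publish the JUnit test report in Jenkins
            archiveArtifacts artifacts: 'target/*.war', allowEmptyArchive: true
        }
    }
}
```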


Once the build phase is over, you move on to the testing phase. In this phase, additional testing tools like Selenium and SonarQube can be integrated with a CI tool like Jenkins for code quality checks, code coverage, integration testing, and user acceptance testing.
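As one example of such an integration, a SonarQube analysis stage could be sketched like this. It assumes the SonarQube Scanner plugin is installed and a server is configured in Jenkins under the (hypothetical) name 'MySonar'; Selenium tests would typically run separately as part of the integration-test suite.

```groovy
// Sketch: code-quality analysis with SonarQube from Jenkins.
// Assumes the SonarQube Scanner plugin and a server configured as 'MySonar'.
stage('Code Quality') {
    steps {
        withSonarQubeEnv('MySonar') {    // injects the server URL and auth token
            sh 'mvn -B sonar:sonar'      // run the analysis via Maven
        }
    }
}
```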


Next, we move to the deploy phase of the pipeline, which involves deploying the application to a staging server or production server, depending on the company's needs. There are various ways to deploy: running the application in a Docker container, a simple deployment to a web server on EC2, deploying with AWS Elastic Beanstalk, or using Amazon EC2 Container Service (ECS) for cluster-based applications, among many others. Tools like Chef, Puppet, or Ansible can also be used for deployment.
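For the Docker route, a deploy stage might build and push an image like this. The image name, registry URL, and credential ID are placeholders, and the sketch assumes the Docker Pipeline plugin is installed.

```groovy
// Sketch of a Docker-based deployment, assuming the Docker Pipeline plugin.
// Image name, registry URL, and the 'docker-creds' credential ID are placeholders.
stage('Deploy') {
    steps {
        script {
            def image = docker.build('mycompany/webapp:latest')          // build the image
            docker.withRegistry('https://registry.example.com', 'docker-creds') {
                image.push()                                             // push to the registry
            }
        }
    }
}
```

From there, a staging or production host (or an ECS service) pulls and runs the pushed image.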


Once the application is deployed and live on the production server, we can go ahead with continuous monitoring. There are many monitoring tools, like Nagios, Splunk, and AWS CloudWatch, that can be integrated with the CI tool. A monitoring tool detects system errors (low memory, an unreachable server, etc.) before they have a negative impact on your business productivity. It can automatically remediate problems when they are detected and can also alert the specific team or individual responsible for the issue.


Additionally, the pipeline can be configured so that if any phase fails, the CI tool notifies the development or testing team, for example by email, indicating which phase failed so that they can fix it. They then push the fix into the version control system, and the change goes back through the pipeline.
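In a Jenkinsfile, this notification can be wired up with a `post { failure { ... } }` block. A minimal sketch using Jenkins' built-in mail step, with a placeholder recipient address:

```groovy
// Sketch: email the team when any stage fails, using Jenkins' built-in mail step.
// The recipient address is a placeholder.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean package'
            }
        }
    }
    post {
        failure {
            mail to: 'dev-team@example.com',
                 subject: "Pipeline failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                 body: "A stage failed. Check the console log at ${env.BUILD_URL}"
        }
    }
}
```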

The diagram below shows a summary of the CI/CD pipeline:


Durba Banik


