
What Really is DevOps?

October 17, 2019 By Jason Silva

When I started my career as a Systems Administrator, I thought that I would be doing that for my whole career. A few years later, I asked myself, “If I were to progress, what would be my next step?” After a little research, I learned of a position called DevOps Engineer. Since I had been learning how to code in my off time, this seemed like the perfect next step in my career. After doing a lot more research into DevOps, I came to realize that there was a lot more to the position than I had previously understood.

So, what exactly is DevOps?

  • Is it a new and improved Systems Administrator position for the Cloud era?
  • Is it a culture in which Developers and Operation Engineers work in unison with shared responsibilities?
  • Is it a combination of the two?

The answer is not as concrete as one might think, and it will differ from person to person and from company to company. Some companies will say that DevOps engineers are just Operations engineers. If you Google “What is DevOps?”, a lot of companies will give you their own definition, and for the most part they all sound similar. For example, here is the definition of DevOps from AWS:

“DevOps is the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes. This speed enables organizations to better serve their customers and compete more effectively in the market.”

The term DevOps doesn’t pertain only to Operations Engineers, nor does it mean strictly infrastructure automation, although that is a big part of it. I believe DevOps is a culture and a position that bridges the gap between Developers and Operations Engineers. Much like test-driven development, operations metrics and tasks should be shifted left in the development cycle.

A byproduct of that shift is the application getting shipped quicker, with everyone taking responsibility instead of trying to pass it elsewhere. Another byproduct is better auditability of the infrastructure. A few examples of this would be: pull requests on Terraform / CloudFormation / ARM templates, developers taking part in the operations on-call rotation, and developers being embedded on an operations team or vice versa. Spotify is a well-known case of Operations Engineers being embedded into feature teams.

Ultimately, DevOps is not just one thing. It brings a whole “hodgepodge” of processes and tooling together to make a smoother and more enjoyable Software Development Lifecycle.

Pritunl Zero

September 22, 2019 By Jake Berkowsky

Pritunl is an open-source OpenVPN and IPsec solution that comes with a somewhat popular VPN client. Pritunl Zero fills in a few more gaps by providing zero-trust access to SSH and web services, similar to products such as Akamai EAA and Zscaler.

I installed an individual server using this guide. It was relatively easy, although I had to open a private browsing window to get past an initial HSTS error, and the default credentials mentioned in the documentation were not up to date (the solution is to run pritunl-zero default-password). From there, setting up an internal service to proxy took about five minutes. One thing I’d like to try out is the API for automatic registration of web services. EAA and Zscaler, for some reason, still require manual setup.

Zero also offers a way to authenticate for SSH. It uses an SSH certificate authority to sign a user’s public key; the user then uses that key to access other servers. This approach allows for authorization without the need for Zero to ever talk to those servers. I’m a big fan of using SSH certificate authorities and have used HashiCorp’s Vault in the past to accomplish this. For network segregation, Zero can automatically create fleets of SSH bastions to route connections to internal resources. Zero provides a CLI tool, pritunl-ssh, which takes care of the accompanying config on the client side.
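
Pritunl Zero handles the signing itself, but the underlying mechanics are plain OpenSSH. As a rough illustration (this is not Zero’s actual implementation; the paths, identity, and principal are made up), here is what CA-based signing looks like with ssh-keygen, wrapped in Python:

```python
import subprocess

# Sign a user's public key with an SSH certificate authority.
# The paths, identity, and principal below are hypothetical;
# Pritunl Zero performs the equivalent signing step server-side.
def sign_user_key(ca_key: str, user_pubkey: str, identity: str, principals: str) -> None:
    subprocess.run(
        [
            "ssh-keygen",
            "-s", ca_key,    # CA private key used to sign
            "-I", identity,  # certificate identity (shows up in audit logs)
            "-n", principals,  # user(s) the certificate is valid for
            "-V", "+1h",     # short-lived certificate: valid for one hour
            user_pubkey,     # user's public key; writes <name>-cert.pub next to it
        ],
        check=True,
    )

sign_user_key("ca_key", "id_ed25519.pub", "jake@example.com", "jake")
```

Target servers only need to trust the CA’s public key (for example via sshd’s TrustedUserCAKeys option), which is why Zero never has to talk to them.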

All in all, I’m cautiously optimistic. Zero-trust web-application proxies have long been one of my go-to solutions for deploying secure internal applications. Having a solid open-source option would be a great resource for companies that want the additional security but don’t want to purchase an enterprise license.

Maturity in DevOps

July 30, 2019 By Jake Berkowsky

As a consultant, I tend to work with a variety of clients and teams all across the product maturity spectrum.

Some are just starting; maybe they have an MVP, maybe they are still building it. Others have existed in their space for years. Typically, when I get called into projects, the product maturity is on one extreme of the spectrum. DevOps maturity, on the other hand, tends to follow a different distribution with most DevOps programs somewhere past just starting, but not quite mature.


In this not-quite-mature stage of DevOps, there is usually a sort of CI/CD pipeline, automated tests, linters, and an emphasis on automation. Logs, metrics, and backups are taken automatically and centralized. There may be basic dashboards and saved queries. Documentation may or may not be adequate for helping to debug problems. If there were a DevOps checklist, all the boxes would be checked. At this point, many teams stop building.


Perhaps there was never a plan to mature beyond that level. People love to say “premature optimization is the root of all evil”, and they have a point: if one of my clients didn’t have automated backups, the first thing I’d do is enable them, at a minimum by taking snapshots at the VM level. If they didn’t have any logging set up at all, the first step would be to enable it, and maybe push the logs to centralized storage. Once a team reaches that level of maturity, they have a working system. Backups don’t usually need to be restored that often, and a team can get by looking at the logs only when something needs to be debugged. When the team isn’t getting woken up in the middle of the night, there is less incentive to continue maturing.
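
To make that “bare minimum” concrete, here is a sketch of the first step using boto3; the instance ID and region are placeholders:

```python
import boto3

# Take point-in-time snapshots of every EBS volume attached to an
# instance -- the minimal VM-level backup described above.
# The instance ID and region are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

volumes = ec2.describe_volumes(
    Filters=[{"Name": "attachment.instance-id", "Values": ["i-0123456789abcdef0"]}]
)["Volumes"]

for volume in volumes:
    snapshot = ec2.create_snapshot(
        VolumeId=volume["VolumeId"],
        Description=f"nightly backup of {volume['VolumeId']}",
    )
    print(f"started snapshot {snapshot['SnapshotId']} for {volume['VolumeId']}")
```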


Another reason DevOps engineers may shy away from maturing their program is that after a certain level it becomes “less fun”. The rate of building slows down and the focus shifts to maintenance and metrics. Metrics are decidedly Not Cool; they’re what non-tech people and management types use to justify their paychecks. Still, once the product reaches a certain level and developers stop making large process changes, defining metrics and direction will help to continuously improve the program and keep it from getting out of date.


In a mature DevOps program, much more time is spent optimizing than implementing. Logs have long been centralized, and now most of the effort goes into looking for new indicators that something may be wrong. Linters may have more custom checks than out-of-the-box code formatters and style checks. Mature DevOps is about continuously building processes and responding to change by optimizing and improving.
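
As a toy example of a custom check that goes beyond stock formatters (the rule itself is just an illustration), here is a small AST-based lint in Python:

```python
import ast
import sys

# A toy custom lint check: flag bare `except:` clauses, which swallow
# every exception including KeyboardInterrupt. The specific rule is
# only an example of going past out-of-the-box style checks.
def find_bare_excepts(path: str) -> list[int]:
    tree = ast.parse(open(path).read(), filename=path)
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.ExceptHandler) and node.type is None
    ]

for path in sys.argv[1:]:
    for lineno in find_bare_excepts(path):
        print(f"{path}:{lineno}: bare except clause")
```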


Mature DevOps programs allow errors to be caught earlier and make them easier to debug. In an early stage DevOps program, an administrator may only get alerted if the entire application is down, but may not notice if a small component is broken. For example, as the program matures, the team may start to track the rough number of “404 page not found” responses returned, alerting when the metric crosses a certain threshold. Later, the team can get even more granular, perhaps reporting when a 404 is returned for a page that had previously not returned one.
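
A minimal sketch of the first of those two stages, counting 404s in an access log and alerting past a threshold; the log path, combined-log format, and threshold are all assumptions:

```python
# Count "404" responses in a web server access log and flag when the
# count crosses a threshold -- the granular alerting described above.
# The log path, combined-log format, and threshold are assumptions.
LOG_PATH = "/var/log/nginx/access.log"
THRESHOLD = 50

def count_404s(path: str) -> int:
    count = 0
    with open(path) as log:
        for line in log:
            parts = line.split('"')
            # In the common/combined log format, the status code is the
            # first field after the quoted request string.
            if len(parts) > 2 and parts[2].split()[0] == "404":
                count += 1
    return count

count = count_404s(LOG_PATH)
if count > THRESHOLD:
    print(f"ALERT: {count} 404 responses (threshold {THRESHOLD})")
```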


Mature DevOps helps reduce tech debt and improves efficiency. A basic DevOps program may only track overall page load time or memory usage. As a result, by the time the many small, individually insignificant regressions become noticeable, it is already infeasible to go back and identify and fix them. If code changes are reviewed for their performance impact on individual components, adverse changes can be identified earlier in the process, increasing the overall health of the application and extending the time before the inevitable “major refactor”.
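
One sketch of the idea: time an individual component against a stored baseline so that small regressions surface during review. The component, baseline file, and 10% tolerance here are hypothetical stand-ins:

```python
import json
import time

# Time one component against a stored baseline so a small regression
# is caught in review rather than discovered after it compounds.
# render_report, the baseline file, and the 10% tolerance are all
# hypothetical stand-ins for a real component and performance budget.
def render_report() -> None:
    time.sleep(0.01)  # placeholder for the component under test

def measure(func, runs: int = 20) -> float:
    start = time.perf_counter()
    for _ in range(runs):
        func()
    return (time.perf_counter() - start) / runs

baseline = json.load(open("perf_baseline.json"))["render_report"]
current = measure(render_report)

if current > baseline * 1.10:
    raise SystemExit(
        f"render_report regressed: {current:.4f}s vs baseline {baseline:.4f}s"
    )
```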


If DevOps is your thing, and you want the opportunity to build out advanced DevOps programs for different types of clients, RunAsCloud is hiring. Reach out to careers@runascloud.com to learn more! 

Continuous Integration and Continuous Deployment in the World of DevOps Methodology

July 30, 2019 By Durba Banik

As we all know, in this era of technology manual tasks are gradually becoming obsolete.

Everyone in the industry expects processes, development phases, and deployments to be expedited. CI/CD (Continuous Integration and Continuous Deployment) plays a central role in automating every phase of the software development lifecycle.


The basic phases of CI/CD are commit, build, test, and deploy. With the help of a CI/CD pipeline, delivering the product to a customer becomes faster and less risky. Here I will discuss the various tools involved in each phase of the CI/CD pipeline.


Let’s consider a scenario to explain CI/CD better. Imagine that you are going to build a web application that will be deployed on live web servers. For this project, you need developers who will write the code to build the application. Each day, the code they write has to be stored somewhere. Tools like Git and SVN store that code and track the history of every change; such a tool is known as a version control system.


Introduction to tools used in phases of CI/CD Pipeline

There are various continuous integration tools like Jenkins, Bamboo, and many others on the market. As Jenkins is an open-source tool, it is broadly used and has a large number of plugins for integrating with other tools. A few important ones are the Git, Maven, Ant, and Docker plugins.


A CI tool checks for new commits at a preset interval, or it can be set up so that a build is triggered as soon as code is committed. Either way, it pulls the new code from the version control tool and downloads it into the workspace for further processing. It then automatically builds and tests every change that is committed. We can define the entire job or task in Jenkins.
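
Here is a stripped-down sketch of the polling model (a real CI server does far more); the repository URL, branch, interval, and build command are placeholders:

```python
import subprocess
import time

# Poll a remote repository and trigger a build when HEAD changes --
# the simplest form of what a CI server does on a preset interval.
# The repo URL, branch, interval, and build command are placeholders.
REPO = "https://example.com/team/app.git"
BRANCH = "main"
INTERVAL = 60  # seconds between polls

def remote_head() -> str:
    out = subprocess.run(
        ["git", "ls-remote", REPO, BRANCH],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.split()[0]  # commit hash of the branch tip

last_seen = remote_head()
while True:
    time.sleep(INTERVAL)
    head = remote_head()
    if head != last_seen:
        print(f"new commit {head[:8]}, starting build")
        subprocess.run(["./build.sh"], check=True)  # placeholder build step
        last_seen = head
```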


Next, the code goes through the build phase, where tools like Maven or Gradle can be used. In this phase, the code is compiled, tested, and packaged into a WAR or JAR by the build tool, and JUnit generates the test report.
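
As a sketch of that step, here is a small wrapper that runs the Maven build and then summarizes the JUnit reports Surefire writes, assuming a standard Maven project layout:

```python
import glob
import subprocess
import xml.etree.ElementTree as ET

# Run the Maven build, then summarize the JUnit test reports that
# Surefire writes under target/surefire-reports. Assumes a standard
# Maven project layout.
subprocess.run(["mvn", "-B", "package"], check=True)

total = failures = errors = 0
for report in glob.glob("target/surefire-reports/TEST-*.xml"):
    suite = ET.parse(report).getroot()
    total += int(suite.get("tests", 0))
    failures += int(suite.get("failures", 0))
    errors += int(suite.get("errors", 0))

print(f"{total} tests, {failures} failures, {errors} errors")
```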


Once the build phase is over, you move on to the testing phase. In this phase, additional testing tools like Selenium and SonarQube can be integrated with a CI tool like Jenkins for code quality checks, code coverage, integration testing, and user acceptance testing.
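
A minimal example of the kind of browser-level smoke test a CI job might run in this phase; the staging URL and expected title are placeholders:

```python
from selenium import webdriver

# A minimal Selenium smoke test of the kind a CI job runs in the
# testing phase. The staging URL and expected title are placeholders.
options = webdriver.ChromeOptions()
options.add_argument("--headless")  # no display on a CI worker

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://staging.example.com/login")
    assert "Login" in driver.title, f"unexpected title: {driver.title}"
    print("smoke test passed")
finally:
    driver.quit()
```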


Next, we move to the deploy phase of the pipeline, which involves deploying the application to a staging or production server, as the company requires. There are various ways to deploy: running the application in a Docker container, a simple deployment to a web server on EC2, AWS Elastic Beanstalk, or Amazon Elastic Container Service (ECS) for cluster-based applications, among many others. Tools like Chef, Puppet, or Ansible can also be used for deployment.
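
As one sketch of the ECS path, here is boto3 pointing a service at a new task definition revision (the cluster, service, and task definition names are placeholders):

```python
import boto3

# Deploy by pointing an ECS service at a new task definition revision;
# ECS then rolls the containers over. The cluster, service, and task
# definition names are placeholders.
ecs = boto3.client("ecs", region_name="us-east-1")

response = ecs.update_service(
    cluster="app-cluster",
    service="web-service",
    taskDefinition="web-app:42",  # family:revision built earlier in the pipeline
)
print("deployment started:", response["service"]["deployments"][0]["id"])
```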


Once the application is deployed and live on the production server, we can move on to continuous monitoring. There are many monitoring tools, such as Nagios, Splunk, and AWS CloudWatch, that can be integrated with the CI tool. A good monitoring tool detects system errors (low memory, an unreachable server, etc.) before they have a negative impact on business productivity; it can trigger automatic fixes when problems are detected and can alert the responsible team or individual to any issue that occurs.
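
For example, here is a sketch of one such check using boto3 and CloudWatch, alarming on low available memory and notifying an SNS topic. The namespace, threshold, and topic ARN are placeholders, and memory metrics require the CloudWatch agent on the instance:

```python
import boto3

# Create a CloudWatch alarm that notifies an SNS topic when available
# memory drops too low -- one of the "low memory" checks mentioned
# above. Namespace, threshold, and topic ARN are placeholders; memory
# metrics require the CloudWatch agent on the instance.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="web-low-memory",
    Namespace="CWAgent",
    MetricName="mem_available_percent",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,               # evaluate over 5-minute windows
    EvaluationPeriods=2,      # two consecutive breaches before alarming
    Threshold=10.0,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```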


Additionally, the pipeline can be configured so that if any phase fails, the CI tool notifies the development or testing team, telling them which phase failed so that they can fix it. Once they push the fix to the version control system, the change goes back through the pipeline.
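
A minimal sketch of that notification step using Python’s standard library; the SMTP host and addresses are placeholders:

```python
import smtplib
from email.message import EmailMessage

# Email the team when a pipeline phase fails, naming the failed phase
# so they know where to look. SMTP host and addresses are placeholders.
def notify_failure(phase: str, detail: str) -> None:
    msg = EmailMessage()
    msg["Subject"] = f"Pipeline failed in {phase} phase"
    msg["From"] = "ci@example.com"
    msg["To"] = "dev-team@example.com"
    msg.set_content(f"The {phase} phase failed:\n\n{detail}")
    with smtplib.SMTP("smtp.example.com") as smtp:
        smtp.send_message(msg)

notify_failure("test", "3 JUnit failures in UserServiceTest")
```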

[Diagram: summary of the CI/CD pipeline]
