DevOps 101 – Part 3: Evolve

Fundamentals of DevOps Blog Series

In this third part of our series on twelve-factor DevOps, we’ll cover evolution and how it applies to your tools, techniques, and processes.  Evolution can be applied at any time, but it’s easiest to do it once you’ve unified your processes.

Factor 3: Evolve

Evolution is defined as “the gradual development of something, especially from a simple to a more complex form”. Just as plants and animals evolve to survive, your processes and tools must evolve for your business to grow. In the context of 12-factor DevOps, this implies steady, relentless improvement akin to the more classically business-oriented Kaizen. Not everything will change at the same time, and often the changes are incremental, but there should almost always be something changing.

The most successful DevOps implementations constantly monitor and test themselves, adjusting processes or updating tools to increase efficiency and improve functionality.  This is, unfortunately, one of the most contentious and headache-inducing aspects of DevOps.  The constant change means that a process may be replaced with a newer one after a single implementation.  More than with any other of the twelve factors, evolution risks creating a never-ending stream of technical debt that will have to be paid down.

Evolution must achieve one or more of the following:

  • Reduce time to delivery
  • Reduce failures
  • Add or enhance functionality
  • Improve maintainability

Changes do not need to be massive to improve the overall process. Adding failure notifications to alert the correct teams or individuals at each step of a process is a small change, as is implementing a shared library for a step that numerous processes have in common. Both are positive changes. The former reduces delivery time by ensuring the responsible team knows about a failure as soon as possible. The latter makes all of those processes easier to maintain going forward.
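
As a rough sketch, a shared failure-notification helper might look something like the Python below. The webhook URL and the make test command are placeholders, not a prescription; substitute whatever alerting integration and build steps your teams actually use.

    import json
    import subprocess
    import urllib.request

    # Hypothetical webhook endpoint; point this at your team's chat or paging integration.
    ALERT_WEBHOOK = "https://chat.example.com/hooks/build-alerts"


    def notify_failure(step_name: str, detail: str) -> None:
        """Post a failure message so the responsible team hears about it immediately."""
        payload = json.dumps(
            {"text": f"Pipeline step '{step_name}' failed: {detail}"}
        ).encode()
        request = urllib.request.Request(
            ALERT_WEBHOOK, data=payload, headers={"Content-Type": "application/json"}
        )
        urllib.request.urlopen(request)


    def run_step(step_name: str, command: list[str]) -> None:
        """Run one pipeline step, alerting and re-raising on failure."""
        try:
            subprocess.run(command, check=True)
        except subprocess.CalledProcessError as error:
            notify_failure(step_name, f"exit code {error.returncode}")
            raise


    if __name__ == "__main__":
        # Any process in the pipeline can reuse the same helper.
        run_step("unit tests", ["make", "test"])

Because every process calls the same helper, improving the notification (richer messages, different channels per team) happens in one place.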

Suppose you use “Foo” to compile, test, and deploy your code. While reading a blog post you discover “Bar”. Bar is fantastic at compiling and testing, and offers features you want that Foo doesn’t have, but it’s terrible at deploying. In that case it’s fine to use Bar to compile and test and Foo to deploy: even though your process is diverging, it’s simultaneously improving.
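
A sketch of what that split might look like, treating foo and bar as hypothetical command-line tools driven from a small Python script:

    import subprocess

    # "bar" and "foo" stand in for the hypothetical tools above;
    # each stage simply shells out to whichever tool does that job best.
    PIPELINE = [
        ("compile", ["bar", "build"]),
        ("test", ["bar", "test"]),
        ("deploy", ["foo", "deploy", "--env", "production"]),
    ]


    def run_pipeline() -> None:
        for stage, command in PIPELINE:
            print(f"--> {stage}: {' '.join(command)}")
            subprocess.run(command, check=True)  # stop the pipeline on the first failure


    if __name__ == "__main__":
        run_pipeline()

The pipeline stays a single, ordered process; only the tool behind each stage changes.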

Neutral changes are acceptable, but you should avoid regressions at all costs. Remember that the goal is to deliver value to your customers as efficiently as possible. Anything that slows the overall delivery down should be critically evaluated to ensure it delivers value in and of itself.

For example, inserting a vulnerability scan into your build process can add minutes to the pipeline’s execution time. If you’re trying to build and deploy as frequently as possible, this may feel counter-intuitive. The benefits of the scan, however, outweigh the additional time required: it catches vulnerabilities, highlights potential exploits, and ensures that as secure a product as possible is delivered.
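
As an illustration, inserting a scan stage between build and deploy might look like the sketch below. The scanner command is a placeholder for whatever tool you actually run; the key detail is that a failed scan blocks the deploy.

    import subprocess

    # Hypothetical commands; swap in your real build tool and scanner
    # (an image scanner, dependency audit, etc.) as appropriate.
    STAGES = [
        ("build", ["make", "build"]),
        ("vulnerability scan", ["scanner", "--fail-on", "high", "image:latest"]),
        ("deploy", ["make", "deploy"]),
    ]


    def run() -> None:
        for name, command in STAGES:
            print(f"--> {name}")
            # check=True aborts the pipeline, so a failed scan never reaches deploy.
            subprocess.run(command, check=True)


    if __name__ == "__main__":
        run()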

At the same time, you should avoid enacting change for its own sake. Switching from one tool or process to another must have a tangible, if not direct, benefit, as must updating a tool to a newer version. When looking for places to improve, weigh the level of effort against the payoff.

William Kokolis
