What hasn’t changed in DevOps:
What has changed in DevOps:
Therefore, a “Modern DevOps Manifesto” should be considered when starting or reinvigorating DevOps for your enterprise. It contains elements of what we already know, but they are force-multiplied by the maturation of cloud-native computing.

The Modern DevOps Manifesto

Everything is code

Code is the blueprint for applications. Source code is stored in a repo and has a pipeline that transforms it and lands it in its runtime environment. With the advent of cloud, containers, and Kubernetes adoption, configurations for applications, clusters, service bindings, and networks are also being expressed as code (i.e., YAML); configurations that were once applied through a CLI are now first-class citizens. Known as GitOps, this approach brings the benefits of pipelines, governance, tools, and automation to operations and to this new class of “code.” Welcome to the next step in Infrastructure as Code. When everything becomes code, everything can have its own pipeline, bringing multi-speed IT to a whole new level: a pipeline for applications, a pipeline for application configuration, a pipeline for cluster configuration, a pipeline for images, a pipeline for library dependencies. Each pipeline has its own speed, and they are all decoupled from each other. View the world in pipelines!
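To make the GitOps idea concrete, here is a minimal sketch of configuration as code, assuming Argo CD as the GitOps controller; the repo URL, paths, and names are illustrative, not prescriptive:

```yaml
# Hypothetical GitOps wiring: Argo CD watches a config repo and
# reconciles the cluster to whatever is committed on main.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: storefront-config          # illustrative name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/storefront-config.git  # assumed repo
    targetRevision: main
    path: overlays/production      # assumed layout (e.g., Kustomize overlays)
  destination:
    server: https://kubernetes.default.svc
    namespace: storefront
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from Git
      selfHeal: true   # revert manual drift back to the Git state
```

With this in place, a merged pull request is the deployment; the CLI becomes a read-only window into what Git has already declared.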
Establish Trusted Resources

Enterprise resources are used to assemble cloud applications. The heritage assets of the past (VM images, buildpacks, middleware releases, library dependencies) are evolving into images, cluster configurations, and policy definitions that are shared across multiple projects. These enterprise assets should have their own lifecycle, pipeline, and governance, and they should be trusted and easy to consume. A trusted asset should be managed in a repo with a clearly defined set of pipeline activities that harden, secure, and verify it according to enterprise standards and regulatory compliance. A trusted asset should carry a status indicating that it can be safely consumed. Once an asset earns trusted status (by making it through a pipeline), it should be published for consumption; this could be as simple as tagging an image in a registry. Trusted assets should be actively maintained and governed. (A sketch of such a pipeline follows below.)

Lean into Least Privilege

The Principle of Least Privilege (PoLP) states that systems, processes, and users should have access only to the resources necessary to complete their tasks. With everything as code and trusted assets identified, new roles and responsibilities start to emerge. Consider an image as a trusted asset: it is sourced from a Dockerfile (managed in a source code repo), and that Dockerfile goes through an automated pipeline that builds the image, executes rigorous scanning and testing, and ultimately pushes and tags the image as “trusted” in an enterprise private container registry. The role of Image Engineer might emerge as a persona that creates, curates, and manages the Dockerfiles fed into the image pipeline. Only Image Engineers would need “push” authority to the repo where Dockerfiles are managed. If Separation of Duties is a concern, the role of Image Engineer may be restricted to those who are not also in the role of Developer, to mitigate the risk of one person having too much influence over a runtime container. New personas can be defined for Cluster Engineers, Site Reliability Engineers, and so on, each with a clearly defined set of responsibilities and privileges (see the RBAC sketch below).
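For the trusted-asset flow described under “Establish Trusted Resources,” here is one possible pipeline, sketched as a GitHub Actions workflow; the registry host, the Trivy scanner, the tag convention, and a runner with Docker and Trivy available are all assumptions:

```yaml
# Hypothetical image-hardening pipeline: only an image that passes the
# vulnerability gate is published under the "trusted" tag.
name: trusted-image-pipeline
on:
  push:
    paths: ["dockerfiles/**"]    # assumed repo area curated by Image Engineers
jobs:
  build-scan-publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build candidate image
        run: docker build -t registry.example.com/base/java:candidate dockerfiles/java
      - name: Vulnerability gate
        run: |
          # Trivy exits non-zero on HIGH/CRITICAL findings, so an unsafe
          # image never reaches the publish step.
          trivy image --exit-code 1 --severity HIGH,CRITICAL \
            registry.example.com/base/java:candidate
      - name: Publish as trusted
        run: |
          docker tag registry.example.com/base/java:candidate \
                     registry.example.com/base/java:trusted
          docker push registry.example.com/base/java:trusted
```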
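And for the personas under “Lean into Least Privilege,” a minimal sketch using Kubernetes RBAC, with a hypothetical namespace and a group name assumed to come from the enterprise identity provider; Developers would receive an analogous read-only Role:

```yaml
# Hypothetical least-privilege split: only Cluster Engineers may change
# configuration in this namespace; everyone else at most reads it.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: config-editor
  namespace: platform-config       # assumed namespace
rules:
  - apiGroups: ["", "apps"]
    resources: ["configmaps", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cluster-engineers-edit
  namespace: platform-config
subjects:
  - kind: Group
    name: cluster-engineers        # assumed group from the enterprise IdP
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: config-editor
  apiGroup: rbac.authorization.k8s.io
```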
Everything is Observable

The mechanics of getting an idea to a running feature in production can be a long-running process. There are significant pipeline events to be collected for the express purpose of building pipeline metrics, calculating delivery measurements, correlating pipeline events to operational events, and establishing a forensic feature lifeline for auditors and IT security. Pipelines should be instrumented with event collection and organization, feeding an event data lake on which analytics and machine learning models can be built and tied together with problem, incident, and change management data on “Day 2.” The IBM AI Ladder starts with Collect and Organize, eventually leading to Analyze and Infuse of cognitive capabilities: in this case, AI for pipelines. Predicting the quality of a digital product before it exits the pipeline can preserve digital reputation and improve consumer satisfaction. (A sample pipeline event is sketched below.)

Expand the definition of “Everything”

Yesteryear, “everything” meant application code and database scripts. Those further along the maturity curve also put test cases, monitoring scripts, and infrastructure scripts for common tasks under source code control. Now amp it up: machine learning models, APIs, and even pipelines themselves are code. You will hear terms like ModelOps, API lifecycle management, or pipelines-as-code (PipeOps?), but don’t get distracted. That is just the steady march of progress, bringing increased velocity and quality to other parts of the IT ecosystem. DevOps for all!
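To make “Everything is Observable” concrete, here is one possible pipeline event, loosely following the CloudEvents envelope and shown as YAML for readability; the event type, source, and data fields are illustrative, not a defined schema:

```yaml
# Hypothetical event emitted when a pipeline stage completes, bound for
# the event data lake where delivery metrics are computed.
specversion: "1.0"
type: com.example.pipeline.stage.completed   # assumed event type
source: /pipelines/storefront/build          # assumed pipeline identifier
id: 9c2f6d1e-7b4a-4f0e-a1d2-3c5e8f9a0b1c
time: "2021-06-01T14:03:22Z"
data:
  pipeline: storefront-build
  stage: image-scan
  status: passed
  commit: 4f7c2ab                  # ties the event back to source code
  durationSeconds: 182             # raw material for pipeline metrics
  artifact: registry.example.com/storefront/api:4f7c2ab
```

Keyed by commit and artifact, events like this can later be joined to problem, incident, and change records on Day 2.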
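And because pipelines themselves are code, the pipeline definition can live in a repo like any other trusted asset. A minimal sketch, assuming Tekton and hypothetical task names:

```yaml
# Hypothetical pipeline-as-code: the pipeline is itself a versioned YAML
# asset with its own review history and rollback path.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: storefront-delivery
spec:
  params:
    - name: git-revision
      type: string
  tasks:
    - name: build
      taskRef:
        name: build-image          # assumed Task defined elsewhere
      params:
        - name: revision
          value: $(params.git-revision)
    - name: scan
      runAfter: ["build"]
      taskRef:
        name: scan-image           # assumed Task
    - name: promote
      runAfter: ["scan"]
      taskRef:
        name: gitops-promote       # assumed Task committing to the config repo
```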
The Modern DevOps Manifesto is a combination of the heritage and the modern state of delivery today. There will be more changes for DevOps; time does not stand still. We are seeing the emergence and maturation of AI, machine learning, edge, and quantum computing, and there will be permutations of DevOps for these domains as they continue to mature and emerge. How will your enterprise adopt the Modern DevOps Manifesto?