DevOps
Git-auto-commit Action
This GitHub Action automatically commits files that have been changed during a Workflow run and pushes the commit back to GitHub. This Action has been inspired and adapted from the auto-commit-Action of the Canadian Digital Service and this commit-Action by Eric Johnson.
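A minimal sketch of how such an Action is typically wired into a workflow. The stefanzweifel/git-auto-commit-action shown here matches the description above, but the formatting step, file paths, and commit message are illustrative assumptions:

```yaml
name: Auto-commit formatting changes
on: push

permissions:
  contents: write   # allow the workflow to push the commit back

jobs:
  format:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Hypothetical step that modifies files in the working tree
      - name: Run formatter
        run: npx prettier --write .

      # Commit and push whatever changed during this run
      - uses: stefanzweifel/git-auto-commit-action@v5
        with:
          commit_message: "chore: apply automatic formatting"
```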
Using SOPS and git hooks to share secrets
DevOps is a doctrine, not a framework. If you ask 10 people what DevOps is, you will get 10 different answers. But among those answers, automation and Infrastructure as Code would appear in most of them. Thanks to the tools available, we can now hand off those infrastructure configs and manual deployment commands to the computer and share them with everyone. However, what should we do with our secrets, like access keys and passwords? Should we share them with our team? Where should we put them? https://levelup.gitconnected.com/using-sops-and-git-hook-to-share-secrets-part-1-d1d4475a4b46
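A rough sketch of the idea the article covers, assuming SOPS is already configured with a key (via .sops.yaml, age, or GPG); the file paths and the hook heuristic are illustrative, not from the article:

```sh
# Encrypt a secrets file in place before committing it
# (key configuration is assumed to live in .sops.yaml)
sops --encrypt --in-place secrets/prod.yaml

# Decrypt it locally when the values are needed
sops --decrypt secrets/prod.yaml

# A simple pre-commit hook can refuse plaintext secrets (illustrative heuristic:
# SOPS-encrypted YAML carries a top-level "sops:" metadata block)
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
for f in $(git diff --cached --name-only -- 'secrets/*.yaml'); do
  grep -q '^sops:' "$f" || { echo "refusing to commit unencrypted $f"; exit 1; }
done
EOF
chmod +x .git/hooks/pre-commit
```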
The Modern DevOps Manifesto
What hasn’t changed in DevOps:
What has changed in DevOps:
Therefore, a “Modern DevOps Manifesto” should be considered when starting or re-invigorating DevOps for your enterprise. There are elements of what we already know; however, they are force-multiplied by the maturation of cloud-native.
The Modern DevOps Manifesto
Everything is code
Code is the blueprint for applications. Source code is stored in a repo and has a pipeline that transforms and lands source code in its runtime environment. With the advent of cloud, containers, and k8s adoption, configurations for applications, clusters, service bindings, and networks are also being expressed as code (i.e. YAML). Configurations applied through a CLI are a first-class citizen. Known as GitOps, we can now bring the benefits of pipelines, governance, tools, and automation to operations and this new class of “code.” Welcome to the next step in Infrastructure as Code. When everything becomes code, everything can have its own pipeline, bringing multi-speed IT to a whole new level: a pipeline for applications, a pipeline for application configuration, a pipeline for cluster configuration, a pipeline for images, a pipeline for lib dependencies. Each pipeline has its own speed, and they are all decoupled from each other. View the world in pipelines!
Establish Trusted Resources
There are enterprise resources that are used to assemble cloud applications. The heritage assets of the past (VM images, buildpacks, middleware releases, lib dependencies) are evolving to images, cluster configurations, and policy definitions that are shared across multiple projects. These enterprise assets should have their own lifecycle, pipeline, governance, and deployment lifecycle. These assets should be trusted and easily consumed. A trusted asset should be managed in a repo with a clearly defined set of pipeline activities that harden, secure, and verify according to enterprise standards and regulatory compliance. A trusted asset should have a status that indicates it can be safely consumed. Once an asset is awarded trusted status (by making it through a pipeline), it should be published for consumption (this could be as simple as tagging an image in a registry). Trusted assets should be actively maintained and governed.
Lean into Least Privilege
The Principle of Least Privilege (PoLP) states that systems, processes, and users only have access to resources that are necessary for completing their tasks. With everything as code and trusted assets identified, new roles and responsibilities start to emerge. An image could be considered a trusted asset; it is sourced from a Dockerfile (managed in a source code repo). That Dockerfile goes through an automated pipeline that builds an image and executes rigorous scanning and testing that ultimately pushes and tags the image as “trusted” in an enterprise private container registry. The role of an Image Engineer might emerge as a persona that creates, curates, and manages the Dockerfiles that are fed into the image pipeline. Only Image Engineers would need “push” authority to the repo where Dockerfiles are managed. If Separation of Duties is a concern, the role of Image Engineer may be restricted to those who are not in the role of Developer, to mitigate the risk of having one person have too much influence over a runtime container. New personas can be defined for Cluster Engineers, Site Reliability Engineers, and so on, each with a clearly defined set of responsibilities and privileges.
Everything is Observable
The mechanics of getting an idea to a running feature in production can be a long-running process.
There are significant pipeline events to be collected for the express purpose of building pipeline metrics, calculating delivery measurements, correlating pipeline events to operational events, and establishing a forensic feature lifeline for auditors and IT security. Pipelines should be instrumented with event collection and organization: an event data lake in which analytics and machine models could be built and tied together with problem, incident, and change management data on “Day 2.” The IBM AI Ladder starts with Collect and Organize, eventually leading to Analyze and Infuse of cognitive capabilities; in this case, AI for pipelines. Predicting the quality of a digital product before it exits the pipeline can preserve digital reputation and improve consumer satisfaction.
Expand the definition of “Everything”
Yesteryear, “everything” meant application code and database scripts. Those further along the maturity curve would also include test cases, monitoring scripts, and infrastructure scripts for common tasks, and put them under source code control. Now amp it up: Machine Learning models, APIs, and even pipelines themselves are code. You will hear terms like ModelOps, API lifecycle management, or the pipeline (PipeOps?), but don’t get distracted. That is just the steady march of progress and the desire to bring increased velocity and quality to other parts of the IT ecosystem. DevOps for all!
The Modern DevOps Manifesto is a combination of the heritage and modern state of delivery today. There will be more changes for DevOps; time does not stand still. We are seeing an emergence and maturation of AI, machine learning, edge, and quantum. There will be permutations for these domains that will continue to mature and emerge. How will your enterprise adopt the Modern DevOps Manifesto?
Protect the Docker daemon socket
By default, Docker runs through a non-networked UNIX socket. It can also optionally communicate using an HTTP socket. If you need Docker to be reachable through the network in a safe manner, you can enable TLS by specifying the tlsverify flag and pointing Docker’s tlscacert flag to a trusted CA certificate. In daemon mode, it only allows connections from clients authenticated by a certificate signed by that CA. In client mode, it only connects to servers with a certificate signed by that CA.
Create a CA, server and client keys with OpenSSL
First, on the Docker daemon’s host machine, generate CA private and public keys:
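For example, with OpenSSL (file names such as ca-key.pem and ca.pem are illustrative conventions, not requirements):

```sh
# Create a passphrase-protected CA private key, then a self-signed CA certificate
$ openssl genrsa -aes256 -out ca-key.pem 4096
$ openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem
```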
Now that you have a CA, you can create a server key and certificate signing request (CSR). Make sure that “Common Name” matches the hostname you use to connect to Docker:
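A sketch, with $HOST standing in for the daemon’s DNS name:

```sh
$ openssl genrsa -out server-key.pem 4096
$ openssl req -subj "/CN=$HOST" -sha256 -new -key server-key.pem -out server.csr
```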
Next, we’re going to sign the public key with our CA. Since TLS connections can be made through an IP address as well as a DNS name, the IP addresses need to be specified when creating the certificate. For example, to allow connections using the host’s DNS name and IP addresses:
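One way to express that in an extensions file (10.10.10.20 is a placeholder for the daemon host’s address):

```sh
$ echo subjectAltName = DNS:$HOST,IP:10.10.10.20,IP:127.0.0.1 >> extfile.cnf
```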
Set the Docker daemon key’s extended usage attributes to be used only for server authentication:
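For instance, in the same extensions file:

```sh
$ echo extendedKeyUsage = serverAuth >> extfile.cnf
```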
Now, generate the signed certificate:
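Roughly:

```sh
$ openssl x509 -req -days 365 -sha256 -in server.csr \
    -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
    -out server-cert.pem -extfile extfile.cnf
```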
Authorization plugins offer more fine-grained control to supplement authentication from mutual TLS. In addition to other information described in the above document, authorization plugins running on a Docker daemon receive the certificate information for connecting Docker clients. For client authentication, create a client key and certificate signing request:
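For example:

```sh
$ openssl genrsa -out key.pem 4096
$ openssl req -subj '/CN=client' -new -key key.pem -out client.csr
```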
To make the key suitable for client authentication, create a new extensions config file:
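A sketch:

```sh
$ echo extendedKeyUsage = clientAuth > extfile-client.cnf
```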
Now, generate the signed certificate:
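For instance:

```sh
$ openssl x509 -req -days 365 -sha256 -in client.csr \
    -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
    -out cert.pem -extfile extfile-client.cnf
```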
After generating cert.pem and server-cert.pem, you can safely remove the two certificate signing requests and extensions config files:
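For example:

```sh
$ rm -v client.csr server.csr extfile.cnf extfile-client.cnf
```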
With a default umask of 022, your secret keys are world-readable and writable for you and your group. To protect your keys from accidental damage, remove their write permissions. To make them only readable by you, change file modes as follows:
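For instance:

```sh
$ chmod -v 0400 ca-key.pem key.pem server-key.pem
```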
Certificates can be world-readable, but you might want to remove write access to prevent accidental damage:
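For instance:

```sh
$ chmod -v 0444 ca.pem server-cert.pem cert.pem
```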
Now you can make the Docker daemon only accept connections from clients providing a certificate trusted by your CA:
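A sketch of the daemon invocation (2376 is the conventional TLS port for Docker):

```sh
$ dockerd \
    --tlsverify \
    --tlscacert=ca.pem \
    --tlscert=server-cert.pem \
    --tlskey=server-key.pem \
    -H=0.0.0.0:2376
```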
To connect to Docker and validate its certificate, provide your client keys, certificates and trusted CA:
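For example:

```sh
$ docker --tlsverify \
    --tlscacert=ca.pem \
    --tlscert=cert.pem \
    --tlskey=key.pem \
    -H=$HOST:2376 version
```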
Tyk API Gateway
Tyk is a lightweight, open-source API Gateway and Management Platform that enables you to control who accesses your API, when they access it, and how they access it. Tyk will also record detailed analytics on how your users are interacting with your API and when things go wrong.
What is an API Gateway?
An API Gateway sits in front of your application(s) and manages the heavy lifting of authorization, access control and throughput limiting to your services. Ideally, it should mean that you can focus on creating services instead of implementing management infrastructure. For example, if you have written a really awesome web service that provides geolocation data for all the cats in NYC, and you want to make it public, integrating an API gateway is a faster, more secure route than writing your own authorization middleware.
Key Features of Tyk
Tyk offers powerful, yet lightweight features that allow fine-grained control over your API ecosystem.
Tyk is written in Go, which makes it fast and easy to set up. Its only dependencies are a Mongo database (for analytics) and Redis, though it can be deployed without either (not recommended).
Docker Socket Proxy
What?
This is a security-enhanced proxy for the Docker socket.
Why?
Giving access to your Docker socket could mean giving root access to your host, or even to your whole swarm, but some services require hooking into that socket to react to events, etc. Using this proxy lets you block anything you consider those services should not do.
How?
We use the official Alpine-based HAProxy image with a small configuration file. It blocks access to the Docker socket API according to the environment variables you set. It returns an HTTP 403 Forbidden status for requests to any part of the API that you have not explicitly allowed.
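A minimal sketch of running such a proxy and pointing a client at it; the image name tecnativa/docker-socket-proxy and the CONTAINERS variable follow that project’s conventions and should be treated as assumptions here:

```sh
# Run the proxy, exposing it only on localhost; allow read-only access to /containers
$ docker run -d --name docker-proxy \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 127.0.0.1:2375:2375 \
    -e CONTAINERS=1 \
    tecnativa/docker-socket-proxy

# Point a client at the proxy instead of the raw socket
$ DOCKER_HOST=tcp://127.0.0.1:2375 docker ps
```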
Docker Hardening Standard
✅ The Center for Internet Security (CIS) puts out documents detailing security best practices, recommendations, and actionable steps to achieve a hardened baseline. The best part: they're free.
✅ Better yet, docker-bench-security is an automated checker based on the CIS benchmarks.

```sh
# recommended
$ docker run \
    -it \
    --net host \
    --pid host \
    --userns host \
    --cap-add audit_control \
    -e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST \
    -v /var/lib:/var/lib \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /usr/lib/systemd:/usr/lib/systemd \
    -v /etc:/etc \
    --label docker_bench_security \
    docker/docker-bench-security
```
Traefik Docker Container Routing
A reverse proxy/load balancer that's easy, dynamic, automatic, fast, full-featured, open source, production-proven, provides metrics, and integrates with every major cluster technology... No wonder it's so popular! It integrates easily with Portainer to dynamically create a URL for Docker containers as they are deployed.
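A rough sketch of the label-based routing idea using Traefik’s Docker provider; the hostname, image tags, and flags here are illustrative assumptions:

```sh
# Start Traefik with the Docker provider enabled
$ docker run -d --name traefik \
    -p 80:80 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    traefik:v2.11 \
    --providers.docker=true \
    --providers.docker.exposedbydefault=false \
    --entrypoints.web.address=:80

# Any container with matching labels gets a route created for it automatically
$ docker run -d --name whoami \
    --label "traefik.enable=true" \
    --label "traefik.http.routers.whoami.rule=Host(\`whoami.localhost\`)" \
    --label "traefik.http.routers.whoami.entrypoints=web" \
    traefik/whoami
```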
Spinnaker - open source multi-cloud continuous delivery platform
Ceph Distributed Storage
Ceph is a free-software storage platform that implements object storage on a single distributed computer cluster and provides interfaces for object-, block- and file-level storage. Ceph aims primarily for completely distributed operation without a single point of failure, scalability to the exabyte level, and free availability.
Goss - Quick and Easy server validation
What is Goss?
Goss is a YAML-based serverspec alternative tool for validating a server’s configuration. It eases the process of writing tests by allowing the user to generate tests from the current system state. Once the test suite is written, it can be executed, waited on, or served as a health endpoint.
Why use Goss?
Generated goss.yaml:

```sh
$ cat goss.yaml
port:
  tcp:22:
    listening: true
    ip:
    - 0.0.0.0
  tcp6:22:
    listening: true
    ip:
    - '::'
service:
  sshd:
    enabled: true
    running: true
user:
  sshd:
    exists: true
    uid: 74
    gid: 74
    groups:
    - sshd
    home: /var/empty/sshd
    shell: /sbin/nologin
group:
  sshd:
    exists: true
    gid: 74
process:
  sshd:
    running: true
```

Now that we have a test suite, we can:
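For instance, a sketch of the typical commands; the autoadd step that would generate a file like the one above is shown for context:

```sh
# Generate tests from the current state of the sshd service, user, and port
$ goss autoadd sshd

# Run the suite once and report results
$ goss validate

# Or serve the results as an HTTP health endpoint for monitoring
$ goss serve
```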