DevOps

Git-auto-commit Action

posted Sep 20, 2020, 2:54 PM by Chris G   [ updated Sep 20, 2020, 2:54 PM ]


Git-auto-commit Action

This GitHub Action automatically commits files which have been changed during a Workflow run and pushes the commit back to GitHub.
The default committer is "GitHub Actions <actions@github.com>", and the default author of the commit is "Your GitHub Username <github_username@users.noreply.github.com>".
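
A minimal usage sketch (assuming the Marketplace listing corresponds to stefanzweifel/git-auto-commit-action; the version pin and the step contents are illustrative):

name: auto-commit-example
on: push
jobs:
  commit-changes:
    runs-on: ubuntu-latest
    steps:
      # check out the repository so there is a workspace to commit back to
      - uses: actions/checkout@v2
      # ... earlier steps that modify files in the workspace go here ...
      # commit and push any files changed during the run
      - uses: stefanzweifel/git-auto-commit-action@v4
        with:
          commit_message: Apply automatic changes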

This Action has been inspired and adapted from the auto-commit-Action of the Canadian Digital Service and this commit-Action by Eric Johnson.



https://github.com/marketplace/actions/git-auto-commit

Using SOPS and git hooks to share secrets

posted Aug 9, 2020, 10:17 AM by Chris G   [ updated Aug 9, 2020, 10:17 AM ]

DevOps drives everything into code (including secrets)

DevOps is a doctrine, not a framework. If you ask 10 people what DevOps is, you will get 10 different answers, but automation and infrastructure as code will feature in most of them. Thanks to the tools available, we can now hand those infrastructure configs and manual deployment commands off to the computer and share them with everyone. However, what should we do with our secrets, like access keys and passwords? Should we share them with our team? Where should we put them?
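
As a rough sketch of the SOPS workflow the article builds on (the key fingerprint and file names are placeholders), secrets are committed only in encrypted form and decrypted locally when needed:

# encrypt a secrets file with a PGP key before committing it
$ sops --encrypt --pgp <your-key-fingerprint> secrets.yaml > secrets.enc.yaml

# decrypt it locally when the plaintext values are needed
$ sops --decrypt secrets.enc.yaml > secrets.yaml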


https://levelup.gitconnected.com/using-sops-and-git-hook-to-share-secrets-part-1-d1d4475a4b46

The Modern DevOps Manifesto

posted Jun 1, 2020, 5:39 PM by Chris G   [ updated Jun 1, 2020, 5:40 PM ]

The Modern DevOps Manifesto

What hasn’t changed in DevOps:

  • Increasing velocity and quality of software delivery
  • Bringing Development and Operations together
  • Eliminating the “throw it over the fence” behavior
  • Being accountable for a product from ideation through “Day 2” management
  • Compatibility with Design Thinking, Agile and Lean

What has changed in DevOps:

  • Expanding the stakeholders to include more than Development and Operations (think, Security, Auditors, Infrastructure Engineering) and thus expanding DevOps to infrastructure and enterprise assets
  • The Cloud. While the cloud is older than DevOps — companies are accelerating cloud adoption, strangling monoliths, and recasting delivery for cloud-native applications
  • The emergence and maturation of containers and Kubernetes, which standardize the “Cloud Operating System” and allow greater portability across providers and on-premises
  • Automate everything. Have a developer mindset toward testing, production deployment, and operations, and avoid all manual tasks

Therefore, a “Modern DevOps Manifesto” should be considered when starting or re-invigorating DevOps for your enterprise. There are elements of what we already know, but they are force-multiplied by the maturation of cloud-native.


The Modern DevOps Manifesto

  1. Everything is code — Infrastructure, configuration, actions, and changes to production — can all be code. When everything is code, everything needs DevOps.
  2. Establish “trusted” resources — Enterprise assets such as images, templates, policies, manifests, and configurations that codify standards should be governed (with a pipeline)
  3. Lean into Least Privilege — New roles are emerging: Cluster Engineer, Image Engineer, Site Reliability Engineer…define roles with just enough access to the “trusted” resources they need to get their job done, mitigating risk and limiting exposure.
  4. Everything is observable — Lay the foundation for AI for Pipelines by collecting and organizing data from an instrumented pipeline.
  5. Expand your definition of “everything” — DevOps is not just for application code. DevOps can apply to machine learning models (MLOps or ModelOps), integrations (API lifecycle), infrastructure and configuration (GitOps), and other domains. Expand your stakeholders to include Security and auditors…the next evolution of breaking down silos.

Everything is code

Code is the blueprint for applications. Source code is stored in a repo and has a pipeline that transforms and lands source code in its runtime environment. With the advent of cloud, containers, and k8s adoption, configurations for applications, clusters, service bindings, networks, are also being expressed as code (i.e. YAML). Configurations applied through a CLI are a first-class citizen. Known as GitOps, we can now bring benefits of pipelines, governance, tools, and automation to operations and this new class of “code.” Welcome to the next step in Infrastructure as Code. When everything becomes code, everything can have its own pipeline, bringing multi-speed IT to a whole new level. A pipeline for applications, a pipeline for application configuration, a pipeline for cluster configuration, a pipeline for images, a pipeline for lib dependencies. Each pipeline has its own speed, and they are all decoupled from each other. View the world in pipelines!
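
As a small illustrative example (the application and registry names are hypothetical), a piece of runtime configuration can live in its own Git repository and flow through its own pipeline, just like source code:

# deployment.yaml - cluster configuration managed in a GitOps repo
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: app
          image: registry.example.com/example-app:1.4.2   # placeholder image
          ports:
            - containerPort: 8080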

Establish Trusted Resources

There are enterprise resources that are used to assemble cloud applications. The heritage assets of the past (VM images, buildpacks, middleware releases, lib dependencies) are evolving into images, cluster configurations, and policy definitions that are shared across multiple projects. These enterprise assets should have their own lifecycle, pipeline, governance, and deployment process. They should be trusted and easy to consume. A trusted asset should be managed in a repo with a clearly defined set of pipeline activities that harden, secure, and verify according to enterprise standards and regulatory compliance. A trusted asset should have a status that indicates it can be safely consumed. Once an asset is awarded trusted status (by making it through a pipeline), it should be published for consumption (this could be as simple as tagging an image in a registry). Trusted assets should be actively maintained and governed.
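
A hypothetical sketch of that last step, promoting an image to trusted status by re-tagging it in a private registry once it has cleared the pipeline (registry and image names are placeholders):

# pull the candidate image that just passed hardening and scanning
$ docker pull registry.example.com/base/node:14-candidate

# re-tag it to signal trusted status and publish it for consumption
$ docker tag registry.example.com/base/node:14-candidate registry.example.com/base/node:14-trusted
$ docker push registry.example.com/base/node:14-trusted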

Lean into Least Privilege

The Principle of Least Privilege (PoLP) states that systems, processes, and users only have access to the resources necessary for completing their tasks. With everything as code and trusted assets identified, new roles and responsibilities start to emerge. An image could be considered a trusted asset; it is sourced from a Dockerfile (managed in a source code repo). That Dockerfile goes through an automated pipeline that builds an image, executes rigorous scanning and testing, and ultimately pushes and tags the image as “trusted” in an enterprise private container registry. The role of an Image Engineer might emerge as a persona that creates, curates, and manages the Dockerfiles that are fed into the image pipeline. Only Image Engineers would need “push” authority to the repo where Dockerfiles are managed. If Separation of Duties is a concern, the role of Image Engineer may be restricted to those who are not in the role of Developer, to mitigate the risk of one person having too much influence over a runtime container. New personas can be defined for Cluster Engineers, Site Reliability Engineers, and so on, each with a clearly defined set of responsibilities and privileges.

Everything is Observable

The mechanics of getting an idea to a running feature in production can be a long-running process. There are significant pipeline events to be collected for the express purpose of building pipeline metrics, calculating delivery measurements, correlating pipeline events to operational events, and establishing a forensic feature lifeline for auditors and IT security. Pipelines should be instrumented with event collection and organization, feeding an event data lake in which analytics and machine learning models can be built and tied together with problem, incident, and change management data on ”Day 2.” The IBM AI Ladder starts with Collect and Organize, eventually leading to Analyze and Infuse of cognitive capabilities, in this case, AI for pipelines. Predicting the quality of a digital product before it exits the pipeline can preserve digital reputation and improve consumer satisfaction.

Expand the definition of “Everything”

Yesteryear, “everything” meant application code and database scripts. Those further along the maturity curve would also include test cases, monitoring scripts, and infrastructure scripts for common tasks, and put them under source code control. Now amp it up. Machine learning models, APIs, and even pipelines themselves are code. You will hear terms like ModelOps, API lifecycle management, or the pipeline (PipeOps?), but don’t get distracted. That is just the steady march of progress and the desire to bring increased velocity and quality to other parts of the IT ecosystem. DevOps for all!


The Modern DevOps Manifesto is a combination of the heritage and the modern state of delivery today. There will be more changes for DevOps; time does not stand still. We are seeing the emergence and maturation of AI, machine learning, edge, and quantum. There will be permutations for these domains that will continue to mature and emerge.

How will your enterprise adopt the Modern DevOps Manifesto?
These are exactly the kind of problems we tackle with clients in the IBM Garage, where DevOps is a fundamental part of how we bring business value to life. Schedule a no-charge visit with the IBM Garage to see how you can co-create with us. Do these ideas and concepts around DevOps resonate with you and your firm’s transformation? Let me know with your comments below.


Protect the Docker daemon socket

posted Aug 28, 2019, 7:25 AM by Chris G   [ updated Aug 28, 2019, 7:46 AM ]

Protect the Docker daemon socket


By default, Docker runs through a non-networked UNIX socket. It can also optionally communicate using an HTTP socket.

If you need Docker to be reachable through the network in a safe manner, you can enable TLS by specifying the tlsverify flag and pointing Docker’s tlscacert flag to a trusted CA certificate.

In the daemon mode, it only allows connections from clients authenticated by a certificate signed by that CA. In the client mode, it only connects to servers with a certificate signed by that CA.

Advanced topic

Using TLS and managing a CA is an advanced topic. Please familiarize yourself with OpenSSL, x509, and TLS before using it in production.

Create a CA, server and client keys with OpenSSL

Note: Replace all instances of $HOST in the following example with the DNS name of your Docker daemon’s host.

First, on the Docker daemon’s host machine, generate CA private and public keys:

$ openssl genrsa -aes256 -out ca-key.pem 4096
Generating RSA private key, 4096 bit long modulus
............................................................................................................................................................................................++
........++
e is 65537 (0x10001)
Enter pass phrase for ca-key.pem:
Verifying - Enter pass phrase for ca-key.pem:

$ openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem
Enter pass phrase for ca-key.pem:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:
State or Province Name (full name) [Some-State]:Queensland
Locality Name (eg, city) []:Brisbane
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Docker Inc
Organizational Unit Name (eg, section) []:Sales
Common Name (e.g. server FQDN or YOUR name) []:$HOST
Email Address []:Sven@home.org.au

Now that you have a CA, you can create a server key and certificate signing request (CSR). Make sure that “Common Name” matches the hostname you use to connect to Docker:

Note: Replace all instances of $HOST in the following example with the DNS name of your Docker daemon’s host.

$ openssl genrsa -out server-key.pem 4096
Generating RSA private key, 4096 bit long modulus
.....................................................................++
.................................................................................................++
e is 65537 (0x10001)

$ openssl req -subj "/CN=$HOST" -sha256 -new -key server-key.pem -out server.csr

Next, we’re going to sign the public key with our CA:

Since TLS connections can be made through IP address as well as DNS name, the IP addresses need to be specified when creating the certificate. For example, to allow connections using 10.10.10.20 and 127.0.0.1:

$ echo subjectAltName = DNS:$HOST,IP:10.10.10.20,IP:127.0.0.1 >> extfile.cnf

Set the Docker daemon key’s extended usage attributes to be used only for server authentication:

$ echo extendedKeyUsage = serverAuth >> extfile.cnf

Now, generate the signed certificate:

$ openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out server-cert.pem -extfile extfile.cnf
Signature ok
subject=/CN=your.host.com
Getting CA Private Key
Enter pass phrase for ca-key.pem:

Authorization plugins offer more fine-grained control to supplement authentication from mutual TLS. In addition to other request information, authorization plugins running on a Docker daemon receive the certificate information for connecting Docker clients.

For client authentication, create a client key and certificate signing request:

Note: For simplicity of the next couple of steps, you may perform this step on the Docker daemon’s host machine as well.

$ openssl genrsa -out key.pem 4096
Generating RSA private key, 4096 bit long modulus
.........................................................++
................++
e is 65537 (0x10001)

$ openssl req -subj '/CN=client' -new -key key.pem -out client.csr

To make the key suitable for client authentication, create a new extensions config file:

$ echo extendedKeyUsage = clientAuth > extfile-client.cnf

Now, generate the signed certificate:

$ openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out cert.pem -extfile extfile-client.cnf
Signature ok
subject=/CN=client
Getting CA Private Key
Enter pass phrase for ca-key.pem:

After generating cert.pem and server-cert.pem you can safely remove the two certificate signing requests and extensions config files:

$ rm -v client.csr server.csr extfile.cnf extfile-client.cnf

With a default umask of 022, your secret keys are world-readable and writable for you and your group.

To protect your keys from accidental damage, remove their write permissions. To make them only readable by you, change file modes as follows:

$ chmod -v 0400 ca-key.pem key.pem server-key.pem

Certificates can be world-readable, but you might want to remove write access to prevent accidental damage:

$ chmod -v 0444 ca.pem server-cert.pem cert.pem

Now you can make the Docker daemon only accept connections from clients providing a certificate trusted by your CA:

$ dockerd --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem \
  -H=0.0.0.0:2376

To connect to Docker and validate its certificate, provide your client keys, certificates and trusted CA:

Run it on the client machine

This step should be run on your Docker client machine. As such, you need to copy your CA certificate, your server certificate, and your client certificate to that machine.

Note: Replace all instances of $HOST in the following example with the DNS name of your Docker daemon’s host.

$ docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
  -H=$HOST:2376 version

Note: Docker over TLS should run on TCP port 2376.

Warning: As shown in the example above, you don’t need to run the docker client with sudo or the docker group when you use certificate authentication. That means anyone with the keys can give any instructions to your Docker daemon, giving them root access to the machine hosting the daemon. Guard these keys as you would a root password!
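
To avoid passing the TLS flags on every call, the client can also pick the certificates up from ~/.docker and a couple of environment variables; a sketch based on the same Docker documentation:

$ mkdir -pv ~/.docker
$ cp -v ca.pem cert.pem key.pem ~/.docker
$ export DOCKER_HOST=tcp://$HOST:2376 DOCKER_TLS_VERIFY=1

# subsequent docker commands now use TLS without extra flags
$ docker ps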

from https://docs.docker.com/engine/security/https/



Securing a shared Docker socket using a Golang reverse-proxy (1/4)

Tyk API Gateway

Tyk is a lightweight, open-source API Gateway and Management Platform that enables you to control who accesses your API, when they access it, and how they access it. Tyk will also record detailed analytics on how your users are interacting with your API and when things go wrong.

What is an API Gateway?

An API Gateway sits in front of your application(s) and manages the heavy lifting of authorization, access control and throughput limiting to your services. Ideally, it should mean that you can focus on creating services instead of implementing management infrastructure. For example, if you have written a really awesome web service that provides geolocation data for all the cats in NYC, and you want to make it public, integrating an API gateway is a faster, more secure route than writing your own authorization middleware.

Key Features of Tyk

Tyk offers powerful, yet lightweight features that allow fine-grained control over your API ecosystem.

  • RESTful API - Full programmatic access to the internals makes it easy to manage your API users, keys and API configuration from within your systems
  • Multiple access protocols - Out of the box, Tyk supports Token-based, HMAC Signed, Basic Auth and Keyless access methods
  • Rate Limiting - Easily rate limit your API users; rate limiting is granular and can be applied on a per-key basis
  • Quotas - Enforce usage quotas on users to manage capacity or charge for tiered access
  • Granular Access Control - Grant API access on a version-by-version basis, grant keys access to multiple APIs or just a single version
  • Key Expiry - Control how long keys are valid for
  • API Versioning - API versions can be easily set and deprecated at a specific time and date
  • Blacklist/Whitelist/Ignored endpoint access - Enforce strict security models on a version-by-version basis to your access points
  • Analytics logging - Record detailed usage data on who is using your APIs (raw data only)
  • Webhooks - Trigger webhooks against events such as Quota Violations and Authentication failures
  • IP Whitelisting - Block access to non-trusted IP addresses for more secure interactions
  • Zero downtime restarts - Tyk configurations can be altered dynamically and the service restarted without affecting any active requests

Tyk is written in Go, which makes it fast and easy to set up. Its only dependencies are a Mongo database (for analytics) and Redis, though it can be deployed without either (not recommended).

Docker Socket Proxy


What?

This is a security-enhanced proxy for the Docker Socket.

Why?

Giving access to your Docker socket could mean giving root access to your host, or even to your whole swarm, but some services require hooking into that socket to react to events, etc. Using this proxy lets you block anything those services should not be allowed to do.

How?

We use the official Alpine-based HAProxy image with a small configuration file.

It blocks access to the Docker socket API according to the environment variables you set, and returns an HTTP 403 Forbidden status for those dangerous requests that should never happen.
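
A minimal sketch, assuming the tecnativa/docker-socket-proxy image (the port binding and the permitted API sections are illustrative):

# expose only the read-only container endpoints of the Docker API,
# and only on the loopback interface
$ docker run -d --name docker-socket-proxy \
    -v /var/run/docker.sock:/var/run/docker.sock:ro \
    -e CONTAINERS=1 \
    -p 127.0.0.1:2375:2375 \
    tecnativa/docker-socket-proxy

# consumers (e.g. a monitoring agent) then point at tcp://127.0.0.1:2375
# instead of mounting the real socket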

Docker Hardening Standard

posted Aug 28, 2019, 7:20 AM by Chris G   [ updated Aug 28, 2019, 7:21 AM ]


Docker Hardening Standard

✅ The Center for Internet Security (CIS) puts out documents detailing security best-practices, recommendations, and actionable steps to achieve a hardened baseline. The best part: they're free.

✅ Better yet, docker-bench-security is an automated checker based on the CIS benchmarks.



# recommended
$ docker run \
    -it \
    --net host \
    --pid host \
    --userns host \
    --cap-add audit_control \
    -e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST \
    -v /var/lib:/var/lib \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /usr/lib/systemd:/usr/lib/systemd \
    -v /etc:/etc --label docker_bench_security \
    docker/docker-bench-security

Traefik Docker Container Routing

posted Mar 25, 2019, 7:16 AM by Chris G   [ updated Mar 25, 2019, 7:17 AM ]

https://traefik.io/

The Cloud Native Edge Router

A reverse proxy/load balancer that's easy, dynamic, automatic, fast, full-featured, open source, production-proven, provides metrics and integrates with every major cluster technology... No wonder it's so popular!


Integrates easily with Portainer to dynamically create a URL for Docker containers as they are deployed.
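
A rough sketch of how that routing works with the Docker provider, using Traefik 1.x-style labels (the hostname and the whoami image are placeholders):

# label a container so Traefik picks it up and routes a hostname to it
$ docker run -d \
    --label traefik.enable=true \
    --label traefik.frontend.rule=Host:whoami.example.com \
    --label traefik.port=80 \
    containous/whoami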

Portainer Docker management

posted Mar 25, 2019, 7:12 AM by Chris G   [ updated Mar 25, 2019, 7:13 AM ]

Portainer


MAKING DOCKER MANAGEMENT EASY.

Build and manage your Docker environments with ease today. Portainer provides a simple and lightweight web UI and a set of API services for deploying and managing Docker containers.
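
A minimal sketch for standing up Portainer against the local Docker engine (image tag and port reflect the classic portainer/portainer distribution of the time; adjust as needed):

# persistent volume for Portainer's internal data
$ docker volume create portainer_data

# run the web UI on port 9000 against the local Docker socket
$ docker run -d -p 9000:9000 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v portainer_data:/data \
    portainer/portainer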

Spinnaker - open source multi-cloud continuous delivery platform

posted May 12, 2018, 9:00 AM by Chris G   [ updated May 12, 2018, 9:02 AM ]

Continuous Delivery for Enterprise

Fast, safe, repeatable deployments

Spinnaker is an open source, multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence.

Created at Netflix, it has been battle-tested in production by hundreds of teams over millions of deployments. It combines a powerful and flexible pipeline management system with integrations to the major cloud providers.


Multi-Cloud

Deploy across multiple cloud providers including AWS EC2, Kubernetes, Google Compute Engine, Google Kubernetes Engine, Google App Engine, Microsoft Azure, and OpenStack, with Oracle Bare Metal and DC/OS coming soon.


Ceph Distributed Storage

posted Jan 16, 2018, 4:30 PM by Chris G   [ updated Jan 16, 2018, 4:31 PM ]

Ceph is a free-software storage platform that implements object storage on a single distributed computer cluster and provides interfaces for object-, block-, and file-level storage. Ceph aims primarily for completely distributed operation without a single point of failure, scalability to the exabyte level, and free availability.

Goss - Quick and Easy server validation

posted Nov 17, 2017, 7:46 AM by Chris G   [ updated Nov 17, 2017, 7:47 AM ]

Goss - Quick and Easy server validation


What is Goss?

Goss is a YAML-based serverspec alternative tool for validating a server’s configuration. It eases the process of writing tests by allowing the user to generate tests from the current system state. Once the test suite is written, it can be executed, waited on, or served as a health endpoint.

Why use Goss?

  • Goss is EASY! - Goss in 45 seconds
  • Goss is FAST! - small-medium test suites are near instantaneous, see benchmarks
  • Goss is SMALL! - <10MB single self-contained binary

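The suite below can be generated from the current state of a running host; as a sketch (assuming sshd is installed and running on the target), something like:

$ goss autoadd sshd
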
Generated goss.yaml:

$ cat goss.yaml
port:
  tcp:22:
    listening: true
    ip:
    - 0.0.0.0
  tcp6:22:
    listening: true
    ip:
    - '::'
service:
  sshd:
    enabled: true
    running: true
user:
  sshd:
    exists: true
    uid: 74
    gid: 74
    groups:
    - sshd
    home: /var/empty/sshd
    shell: /sbin/nologin
group:
  sshd:
    exists: true
    gid: 74
process:
  sshd:
    running: true

Now that we have a test suite, we can:

  • Run it once:

$ goss validate
...............

Total Duration: 0.021s # <- yeah, it's that fast..
Count: 15, Failed: 0
