System Tools


AWS cooks up Extensions API for Lambda serverless platform: Useful for monitoring, alerting

posted Oct 17, 2020, 7:55 PM by Chris G   [ updated Oct 17, 2020, 7:56 PM ]

Cloud computing behemoth Amazon Web Services has pushed out an Extensions API for its Lambda serverless platform that lets developers write custom code to handle lifecycle events – such as when the environment starts, invokes functions, and shuts down.

AWS Lambda runs functions on demand. It works by firing up an execution environment when a function is called, with a choice of runtimes including various versions of Java, Node.js, Python, .NET, and Ruby, or a custom runtime. The environment stays running while there are frequent function invocations, and shuts itself down if not required for a period.

The Extensions API allows developers to write code for the three phases of the Lambda lifecycle: the init phase, when the environment starts up; the invoke phase, when functions run; and the shutdown phase, when the environment closes down.

Extensions can run either internally on the execution runtime, for purposes such as instrumenting code, or externally as companion processes, for purposes such as fetching secrets and caching them in the execution environment.
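
To give a feel for the flow of an external extension, here is a rough sketch as a bash script. Hedged: the endpoint paths follow the 2020-01-01 API version string AWS documented at launch, and everything else here is illustrative. The extension registers during the init phase, then blocks until Lambda hands it the next invoke or shutdown event.

#!/bin/bash
set -euo pipefail
API="http://${AWS_LAMBDA_RUNTIME_API}/2020-01-01/extension"

# Init phase: register for INVOKE and SHUTDOWN events.
HEADERS=$(mktemp)
curl -sS -D "$HEADERS" -o /dev/null -X POST "$API/register" \
  -H "Lambda-Extension-Name: $(basename "$0")" \
  -d '{"events": ["INVOKE", "SHUTDOWN"]}'
EXT_ID=$(grep -i lambda-extension-identifier "$HEADERS" | cut -d: -f2 | tr -d ' \r')

# Event loop: each call blocks until Lambda delivers the next lifecycle event.
while true; do
  EVENT=$(curl -sS "$API/event/next" -H "Lambda-Extension-Identifier: ${EXT_ID}")
  case "$EVENT" in
    *SHUTDOWN*) exit 0 ;;                  # shutdown phase: clean up and exit
    *) echo "invoke event: ${EVENT}" ;;
  esac
done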

Lambda customer Square, a provider of eCommerce tools, has described how it used the new API to write an extension in Go that improves function startup time by fetching secrets before the runtime starts, reporting around a 30-40 percent reduction in cold start time.

Lambda extensions can run as parallel processes to the code on the runtime itself

Extensions are ideal for monitoring function execution on Lambda, and the usual suspects – companies like AppDynamics, DataDog, New Relic and Splunk, which provide monitoring and alerting services – have been quick to use them to integrate with their tools. The newly published API opens up ways for developers to optimise and monitor Lambda deployments using custom code. Extensions are deployed using Lambda layers, a way of packaging function dependencies. The pricing model is the same as for Lambda itself, based on a combination of the number of requests served and the compute time consumed.
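
Concretely, shipping an extension looks something like the following sketch (the names and account ID are illustrative; external extensions are packaged under an extensions/ directory inside the layer zip):

aws lambda publish-layer-version --layer-name my-extension \
  --zip-file fileb://extension.zip
aws lambda update-function-configuration --function-name my-function \
  --layers arn:aws:lambda:us-east-1:123456789012:layer:my-extension:1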

Separately, AWS has also previewed CloudWatch Lambda Insights, CloudWatch being its own monitoring service. A multi-function view "provides visibility into issues such as memory leaks or performance changes caused by new function versions". CloudWatch users can enable Lambda Insights with a single click in the AWS console, where it is called Enhanced Monitoring, or via other tools such as the command-line interface (CLI).
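
Via the CLI, enabling it appears to boil down to attaching the AWS-published LambdaInsightsExtension layer to the function – a hedged sketch with placeholder values (look up the region-specific layer ARN in the CloudWatch docs):

aws lambda update-function-configuration --function-name my-function \
  --layers "arn:aws:lambda:<region>:<aws-account>:layer:LambdaInsightsExtension:<version>"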

The Extensions API is another piece in making Lambda more manageable and complete. Monitoring provider Thundra, another company taking advantage of the new feature, remarked that the "Extensions API will help companies that complain about the limitations of serverless overcome those challenges."

Serverless is the "best abstraction for deploying software", according to some experts, with Lambda the most popular option, though Microsoft has its equivalent in Azure Functions and Google has Cloud Functions. ®



Unlock Ubuntu using your face

posted Sep 20, 2020, 3:14 PM by Chris G   [ updated Sep 20, 2020, 3:14 PM ]

While it’s been proven a fair few times that a well-crafted photo can get you into most facial-recognition systems, for a personal PC there is something rather nice about a glance at the webcam unlocking your login session or lock screen, or granting sudo access.

Using Howdy, this is done with very little manual tweaking of things like PAM.

This will work on most modern Linux systems, and the project’s GitHub page points you in the right direction for most distros, as sketched below.
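
On Ubuntu, for example, setup is roughly this (a sketch based on the project’s README – the PPA name and commands may change, so check there first):

sudo add-apt-repository ppa:boltgolt/howdy
sudo apt update
sudo apt install howdy    # the installer wires up the PAM module for you
sudo howdy add            # enrol a face model from your webcam/IR camera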

HTTPS Client Certificate Authentication with Sidecar

posted Jul 4, 2020, 12:24 PM by Chris G   [ updated Jul 4, 2020, 12:25 PM ]

From: https://medium.com/@zhimin.wen/https-client-certificate-authentication-with-sidecar-9b07d82a6389


This paper is a continuation of our exploration of enabling HTTPS for an app that doesn’t implement HTTPS itself. (The first paper can be reached here.) Here we will enable client certificate authentication for a non-HTTPS app using the sidecar pattern.

When client certificate authentication is turned on, the client must present a valid certificate signed by the CA when making the HTTPS connection. Otherwise, the connection is rejected.

In the last part of the paper, we examine Prometheus in IBM Cloud Private, which uses the same HTTPS sidecar pattern.

Steps to setup client certificate authentication

First, we enable client certificate authentication by adding the following lines to the nginx.conf file we built in the first paper.

...
server {
    listen 443 ssl;
    server_name localhost;
    ssl_certificate /app/cert/hello-server.pem;
    ssl_certificate_key /app/cert/hello-server-key.pem;

    ssl_client_certificate /app/cert/hello-server-ca.pem;
    ssl_verify_client on;
    ssl_protocols TLSv1.2;
...

When ssl_verify_client is set to on, ssl_client_certificate needs to point to the CA cert that signed the server and client certs. If a client doesn’t present a cert signed by this CA, the HTTPS connection will be rejected.

Secondly, we create the K8s secret with all the certs required,

kubectl create secret generic hello-sidecar-nginx-certs --from-file=hello-server-cert=./hello-server.pem --from-file=hello-server-key=./hello-server-key.pem --from-file=hello-server-ca-cert=./myca.pem

Then update the K8s deployment file to mount the certs and the Nginx config into the sidecar container.

...
spec:
  containers:
  - name: hello
    image: zhiminwen/hello:v1
    imagePullPolicy: IfNotPresent
    env:
    - name: LISTENING_PORT
      value: "8080"
  - name: tls-sidecar
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: secret-volume
      mountPath: /app/cert
    - name: config-volume
      mountPath: /etc/nginx/nginx.conf
      subPath: nginx.conf
  volumes:
  - name: secret-volume
    secret:
      secretName: hello-sidecar-nginx-certs
      items:
      - key: hello-server-cert
        path: hello-server.pem
      - key: hello-server-key
        path: hello-server-key.pem
      - key: hello-server-ca-cert
        path: hello-server-ca.pem
  - name: config-volume
    configMap:
      name: hello-sidecar-nginx-conf

Apply the updated yaml file, and the application is now protected by HTTPS client certificate authentication.
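
For example, assuming the manifest was saved as hello-sidecar.yaml (the file name is illustrative):

kubectl apply -f hello-sidecar.yaml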

Positive and Negative Tests

First, test without a valid cert,

curl -k https://192.168.64.244:31463/date
<html>
<head><title>400 No required SSL certificate was sent</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<center>No required SSL certificate was sent</center>
<hr><center>nginx/1.15.7</center>
</body>
</html>

To test with a valid cert, let’s first generate a cert signed by the CA.

Create a JSON file as below and save it as “clientRequest.json”:

{
  "CN": "client-for-hello-server",
  "hosts": [
    ""
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  }
}

Generate the client cert with the client profile (as defined in the first paper).

cd certs
cfssl gencert -ca=myca.pem -ca-key=myca-key.pem -config=ca-config.json -profile=client -hostname="127.0.0.1" clientRequest.json | cfssljson -bare hello-client

You will have

  • hello-client.pem, the signed client certificate
  • hello-client-key.pem, the private key
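
As a quick sanity check, you can confirm that the new cert chains back to the CA (assuming you are still in the certs directory):

openssl verify -CAfile myca.pem hello-client.pem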

Test with these keys,

curl -k --cert certs/hello-client.pem --key certs/hello-client-key.pem https://192.168.64.244:31463/date
time now: 03:36:36

At this point, we have implemented HTTPS with client certificate authentication for the non-HTTPS application using the K8s sidecar pattern.

ICP Prometheus

By default, Prometheus doesn’t provide any HTTPS/TLS capability. IBM Cloud Private (ICP) uses the sidecar technique to enable HTTPS/TLS and client certificate authentication.

Below is part of the output of kubectl -n kube-system describe pods monitoring-prometheus-74c6d846d7-plb2n.

The sidecar container, router, in the pod:

router:
    Container ID:   docker://04b9191d822ddb7f14e063e264afaeb2299d6d846777d230b17ec7404f92bade
    Image:          devcluster.icp:8500/ibmcom/icp-management-ingress:2.2.2
    Image ID:       docker-pullable://devcluster.icp:8500/ibmcom/icp-management-ingress@sha256:c6e8be6e465e69b0d9e045a78a42ec22ef9e79d1886dfce691b9e7f1e9738d6a
    Port:           8080/TCP
    Host Port:      0/TCP
    Command:
      /opt/ibm/router/entry/entrypoint.sh
    State:          Running
      Started:      Wed, 28 Nov 2018 15:16:22 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /opt/ibm/router/caCerts from monitoring-ca-certs (rw)
      /opt/ibm/router/certs from monitoring-certs (rw)
      /opt/ibm/router/conf from router-config (rw)
      /opt/ibm/router/entry from router-entry (rw)
      /opt/ibm/router/lua-scripts from lua-scripts-config-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tvb2h (ro)

The nginx.conf can be found by running kubectl -n kube-system describe cm monitoring-prometheus-router-nginx-config

An excerpt of the server block is listed below:

server {
    listen 8443 ssl default_server;
    ssl_certificate server.crt;
    ssl_certificate_key server.key;
    ssl_client_certificate /opt/ibm/router/caCerts/tls.crt;
    ssl_verify_client on;
    ssl_protocols TLSv1.2;
    # Modulo ChaCha20 cipher.
    ssl_ciphers EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:!EECDH+3DES:!RSA+3DES:!MD5;
    ssl_prefer_server_ciphers on;
...

Because of the ssl_verify_client setting, any client that needs to contact Prometheus must therefore use a certificate signed by the CA cert.

The location blocks, which proxy the traffic to Prometheus, are shown below:

location /federate {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_pass http://127.0.0.1:9090/prometheus/federate;
}
location /api/v1/series {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    if ($arg_match[] = "helm_release_info") {
        content_by_lua 'rewrite.write_release_response()';
    }
    rewrite_by_lua 'rewrite.rewrite_query()';
    proxy_pass http://127.0.0.1:9090/prometheus/api/v1/series;
}
location / {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    rewrite_by_lua 'rewrite.rewrite_query()';
    proxy_pass http://127.0.0.1:9090/prometheus/;
}

The API call to Prometheus will be redirected to the Prometheus container in the same pod.

Conclusion

These two papers explore the sidecar pattern for HTTPS and client certificate authentication. With this knowledge, it is easier to understand the ICP Prometheus setup and to extend Prometheus functionality further.

InnerSource Commons

posted Jun 20, 2020, 1:37 PM by Chris G   [ updated Jun 20, 2020, 1:38 PM ]

What is the InnerSource Commons?

The InnerSource Commons (ISC) is a growing community of practitioners with the goal of creating and sharing knowledge about InnerSource: the use of open source best practices for software development within the confines of an organization. Founded in 2015, the InnerSource Commons is now supporting and connecting over seventy companies, academic institutions, and government agencies.

The InnerSource Commons supports practitioners and those who want to learn about inner source through a broad array of activities. It provides learning paths on how to get started with inner source, curates known best practices in the form of patterns, facilitates discussion of the inner source values and principles that will lead to an inner source manifesto, and organizes the leading practitioner conference dedicated to inner source – the twice-yearly InnerSource Commons Summit.

To get started, simply join the growing ISC community via our Slack channel and introduce yourself.

What is InnerSource?

InnerSource takes the lessons learned from developing open source software and applies them to the way companies develop software internally. As developers have become accustomed to working on world class open source software, there is a strong desire to bring those practices back inside the firewall and apply them to software that companies may be reluctant to release. For companies building mostly closed source software, InnerSource can be a great tool to help break down silos, encourage internal collaboration, accelerate new engineer on-boarding, and identify opportunities to contribute software back to the open source world.

Introduction

“Inspired by the spread of open source software throughout the areas of operating systems, cloud computing, JavaScript frameworks, and elsewhere, a number of companies are mimicking the practices of the powerful open source movement to create an internal company collaboration under the rubric InnerSource. In these pages you’ll read about the experience of the leading Internet commerce facilitator PayPal, and see how inner source can benefit engineers, management, and marketing/PR departments.

“To understand the appeal of InnerSource project management, consider what has made open source software development so successful:

  • Programmers share their work with a wide audience, instead of just with a manager or team. In most open source projects, anyone in the world is free to view the code, comment on it, learn new skills by examining it, and submit changes that they think will improve it or customize it to their needs.
  • New code repositories (branches) based on the project can be made freely, so that sites with unanticipated uses for the code can adapt it. There are usually rules and technical support for re-merging different branches into the original master branch.
  • People at large geographical distances, at separate times, can work on the same code or contribute different files of code to the same project.
  • Communication tends to be written and posted to public sites instead of shared informally by word of mouth, which provides a history of the project as well as learning opportunities for new project members.
  • Writing unit tests becomes a key programming task. A “unit test” is a small test that checks for a particular, isolated behavior such as rejecting incorrect input or taking the proper branch under certain conditions. In open source and inner source, testing is done constantly as changes are checked in, to protect against failures during production runs.

“InnerSource differs from classic open source by remaining within the view and control of a single organization. The “openness” of the project extends across many teams within the organization. This allows the organization to embed differentiating trade secrets into the code without fear that they will be revealed to outsiders, while benefitting from the creativity and diverse perspectives contributed by people throughout the organization. Often, the organization chooses to share parts of an InnerSource project with the public, effectively turning them into open source. When the technologies and management practices of open source are used internally, moving the project into a public arena becomes much easier.”

Oram, A. (2015) Getting Started With InnerSource. San Francisco: O’Reilly Media. Get your free copy at http://www.oreilly.com/programming/free/getting-started-with-innersource.csp

Dual Boot is Dead: Windows and Linux are now One.

posted Jun 20, 2020, 1:07 PM by Chris G   [ updated Jun 20, 2020, 1:07 PM ]


Turn your Windows machine into a developer workstation with WSL 2.


I started building a machine learning workstation: a great CPU, lots of RAM, and a competent GPU, among other things. My OS of choice for almost anything was Ubuntu, except that I needed Microsoft Office for proposal writing. Office online is just not there yet and, let’s face it, LibreOffice is a disaster. So the solution was to dual boot Ubuntu and Windows 10. The freedom you experience moving from Apple to Ubuntu is unparalleled, and the options you have building your own PC are almost infinite.

Dual boot was the answer for a long time. A million context switches later, WSL arrived, and I started moving a portion of my workflow to Windows. But there were still many things missing. WSL 2, however, looks like a game-changer. In this story, I will show you how to move your development workflow to Windows 10 and WSL 2, what its new features are, and what to expect in the near future.


What is WSL 2?

WSL 2 is a new version of the architecture that powers WSL. It comes with several changes that dictate how Linux distributions interact with Windows.

With this release, you get increased file system performance and full system call compatibility. You can choose to run your Linux distribution as either WSL 1 or WSL 2, and you can switch between those versions at any time. WSL 2 is a major overhaul of the underlying architecture, using virtualization technology and a Linux kernel to enable its new features, but Microsoft handles the nitty-gritty details so you can focus on what matters.

Installation

Microsoft promises a smooth installation experience for WSL 2 in the near future, along with the ability to update the Linux kernel via Windows updates. For now, the installation process is a bit more involved, but nothing scary.
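
As a hedged sketch of the manual route at the time of writing (commands per Microsoft's documentation; run from an elevated PowerShell on Windows 10 build 2004 or later):

dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
# Reboot, install the Linux kernel update package from Microsoft, then:
wsl --set-default-version 2
wsl --set-version Ubuntu 2    # convert an existing distro (list names with: wsl -l -v)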



https://towardsdatascience.com/dual-boot-is-dead-windows-and-linux-are-now-one-27555902a128

The C4 model for visualising software architecture

posted Jun 3, 2020, 6:04 PM by Chris G   [ updated Jun 3, 2020, 6:05 PM ]


The C4 model for visualising software architecture

Context, Containers, Components and Code

Ask somebody in the building industry to visually communicate the architecture of a building and you'll be presented with site plans, floor plans, elevation views, cross-section views and detail drawings. In contrast, ask a software developer to communicate the software architecture of a software system using diagrams and you'll likely get a confused mess of boxes and lines ... inconsistent notation (colour coding, shapes, line styles, etc), ambiguous naming, unlabelled relationships, generic terminology, missing technology choices, mixed abstractions, etc.

As an industry, we do have the Unified Modeling Language (UML), ArchiMate and SysML, but asking whether these provide an effective way to communicate software architecture is often irrelevant because many teams have already thrown them out in favour of much simpler "boxes and lines" diagrams. Abandoning these modelling languages is one thing but, perhaps in the race for agility, many software development teams have lost the ability to communicate visually.

Maps of your code

The C4 model was created as a way to help software development teams describe and communicate software architecture, both during up-front design sessions and when retrospectively documenting an existing codebase. It's a way to create maps of your code, at various levels of detail, in the same way you would use something like Google Maps to zoom in and out of an area you are interested in.

C4-PlantUML

Container diagram for Internet Banking System

C4-PlantUML combines the benefits of PlantUML and the C4 model, providing a simple way of describing and communicating software architectures – especially during up-front design sessions – with an intuitive language, using open-source and platform-independent tools.

C4-PlantUML includes macros, stereotypes, and other goodies (like VSCode Snippets) for creating C4 diagrams with PlantUML.
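
As a hedged illustration (the element names are invented, and the !include URL should be checked against the C4-PlantUML README), a container diagram like the one above might be described and rendered like so:

cat > banking.puml <<'EOF'
@startuml
!include https://raw.githubusercontent.com/plantuml-stdlib/C4-PlantUML/master/C4_Container.puml

Person(customer, "Customer", "A customer of the bank")
System_Boundary(bank, "Internet Banking System") {
  Container(web, "Web Application", "Java, Spring MVC", "Delivers the banking UI")
  Container(api, "API Application", "Java, Spring MVC", "Provides banking functionality via JSON/HTTPS")
  ContainerDb(db, "Database", "Oracle", "Stores accounts and transactions")
}

Rel(customer, web, "Uses", "HTTPS")
Rel(web, api, "Makes API calls to", "JSON/HTTPS")
Rel(api, db, "Reads from and writes to", "JDBC")
@enduml
EOF
plantuml banking.puml    # renders banking.png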




Kubernetes From Scratch

posted May 5, 2020, 1:52 PM by Chris G   [ updated May 5, 2020, 1:53 PM ]

Kubernetes From Scratch (Part 2)

Kubernetes without Minikube or MicroK8s


In the first article in this series, “Kubernetes from Scratch,” I discussed a minimal Kubernetes system. Now I’d like to build on that success by making it a more complete system.

If you get Kubernetes from a cloud provider, things like storage and Ingress are most likely provided. The core Kubernetes system doesn’t provide things like Ingress, as that’s something that should integrate closely with the cloud system it’s running on.

To follow along, you should have read “Kubernetes from Scratch” and built up the system described. The system we built is four nodes running in VMs on a bare-metal server. As long as you have a similar setup, you should be able to follow along with minor adjustments. The cluster nodes are named kube1, kube2, kube3, and kube4. The kube1 node is the master, and the rest are workers. The main host is called beast and is running Ubuntu 20.04, and the VMs are running Ubuntu 18.04.

Also required for the second half of this article is a storage server we built in my article “Build Your Own In-Home Cloud Storage.” That server is running Ubuntu 20.04 on bare metal and has GlusterFS installed.

https://medium.com/better-programming/kubernetes-from-scratch-part-2-e30b48f7ca6b



Managing your Stateful Workloads in Kubernetes

posted May 5, 2020, 1:48 PM by Chris G   [ updated May 5, 2020, 1:49 PM ]

Managing your Stateful Workloads in Kubernetes

Introduction

Kubernetes, as we know, is currently the most popular container orchestration tool, used to scale, deploy, and manage containerised applications. In its early days, Kubernetes was mostly used to run web-based stateless services.

However, if you ever wanted to run stateful services like a database, you either had to run them in virtual machines (VMs) or as a cloud service. But with the rise of the Kubernetes-based hybrid cloud, many users now want to deploy stateful workloads on top of Kubernetes clusters as well.


Stateless and Stateful Workloads

The Kubernetes sweet spot is running stateless services and applications, which can be scaled horizontally. By keeping state out of applications, Kubernetes can seamlessly add, remove, restart, and delete pods to keep your services healthy and scalable. Developing a stateless application is, without question, the easiest way to ensure that your app can scale with Kubernetes.

A key point to keep in mind is that statefulness requires persistent storage. An application can only be stateful if it has a place to store information about its state, and that information must be available to read back on demand later.
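
As a minimal illustration, a PersistentVolumeClaim is the usual way to give a pod storage that outlives any individual container. A sketch, assuming your cluster has a default StorageClass:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF

A stateful pod then mounts the claim via a volume, and the data survives pod restarts and rescheduling.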



QRCode Monkey

posted Apr 13, 2020, 9:24 AM by Chris G   [ updated Apr 13, 2020, 9:25 AM ]


https://www.qrcode-monkey.com/

THE 100% FREE QR CODE GENERATOR


The Free QR Code Generator for High-Quality QR Codes

QRCode Monkey is one of the most popular free online QR code generators, with millions of QR codes already created. The high resolution of the QR codes and the powerful design options make it one of the best free QR code generators on the web, and it can be used for commercial and print purposes.




ngrok

posted Aug 28, 2019, 7:14 AM by Chris G


ngrok provides secure introspectable tunnels to localhost – a webhook development and debugging tool.
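
For example, to expose a web server running locally on port 8080 (assuming ngrok is installed and on your PATH):

ngrok http 8080

ngrok prints a public URL that tunnels to localhost:8080, and its local inspection UI lets you examine and replay the requests that come through.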
