System Tools
AWS cooks up Extensions API for Lambda serverless platform: Useful for monitoring, alerting
Cloud computing behemoth Amazon Web Services has pushed out an Extensions API for its Lambda serverless platform that lets developers write custom code to handle lifecycle events – such as when the environment starts, invokes functions and shuts down.

AWS Lambda runs functions on demand. It works by firing up an execution environment when a function is called, with a choice of runtimes including various versions of Java, Node.js, Python, .NET, and Ruby, or a custom runtime. The environment stays running while there are frequent function invocations, and shuts itself down if not required for a period.

The Extensions API allows developers to write code for the three phases of the Lambda lifecycle: the init phase, when the environment starts up; the invoke phase, when functions run; and the shutdown phase, when the environment closes down. Extensions can run either internally on the execution runtime, for purposes such as instrumenting code, or externally as companion processes, for purposes such as fetching secrets and caching them in the execution environment.

Lambda customer Square, a provider of eCommerce tools, has described how it used the new API to write an extension in Go that improves function startup time by fetching secrets before the runtime starts, and reported around a 30-40 percent reduction in cold start time. Extensions are ideal for monitoring function execution on Lambda, and the usual suspects – companies like AppDynamics, Datadog, New Relic and Splunk, which provide monitoring and alerting services – have been quick to use them to integrate with their tools. The newly published API opens up ways for developers to optimise and monitor Lambda deployments using custom code.

Extensions are deployed using Lambda layers, a way of packaging function dependencies. The pricing model is the same as for Lambda itself, based on a combination of the number of requests served and the compute time consumed.

Separately, AWS has also previewed CloudWatch Lambda Insights, CloudWatch being its own monitoring service. A multi-function view "provides visibility into issues such as memory leaks or performance changes caused by new function versions". CloudWatch users can enable Lambda Insights with a single click in the AWS console, where it is called Enhanced Monitoring, or via other tools such as the command-line interface (CLI).

The Extensions API is another piece in making Lambda more manageable and complete. Monitoring provider Thundra, another company taking advantage of the new feature, remarked that the "Extensions API will help companies that complain about the limitations of serverless overcome those challenges." Serverless is the "best abstraction for deploying software", according to some experts, with Lambda the most popular option, though Microsoft has its equivalent in Azure Functions and Google has Cloud Functions.
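For a feel of how an external Lambda extension hooks into the lifecycle described above, here is a rough shell sketch. It assumes the 2020-01-01 version of the Extensions API and uses a made-up extension name; a real extension would ship as an executable inside a Lambda layer's extensions/ directory.

    # Minimal sketch of an external extension's event loop, run as a companion process.
    # "my-extension" is a placeholder; it must match the extension's file name in the layer.
    REGISTER_URL="http://${AWS_LAMBDA_RUNTIME_API}/2020-01-01/extension/register"
    NEXT_URL="http://${AWS_LAMBDA_RUNTIME_API}/2020-01-01/extension/event/next"

    # Register for INVOKE and SHUTDOWN events; the response header carries the
    # extension identifier that must accompany every later call.
    EXT_ID=$(curl -s -D - -o /dev/null -X POST "$REGISTER_URL" \
      -H "Lambda-Extension-Name: my-extension" \
      -d '{"events": ["INVOKE", "SHUTDOWN"]}' \
      | awk 'tolower($1) == "lambda-extension-identifier:" {print $2}' | tr -d '\r')

    # Block on the next event; each response describes an invoke or a shutdown.
    while true; do
      EVENT=$(curl -s "$NEXT_URL" -H "Lambda-Extension-Identifier: $EXT_ID")
      echo "received event: $EVENT"
      case "$EVENT" in *SHUTDOWN*) exit 0 ;; esac
    done

A secret-caching extension like Square's would do its fetching before entering the event loop, so the secrets are already local by the time the runtime starts handling invocations.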
Unlock Ubuntu using your face
While it's been proven a fair few times that a well-crafted photo can get you into most facial-recognition systems, for a personal PC there is something kind of nice about looking at the webcam to unlock your login session or lock screen, or to authorise sudo access. Howdy does this with very little manual tweaking of things like PAM. It works on most modern Linux systems, and the project's GitHub page points you in the right direction for most distros.
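As a rough guide, installing it on Ubuntu looks something like the following (assuming the author's PPA is still the recommended route; package names, the IR camera device and the exact PAM wiring vary by distro and laptop):

    # Add the Howdy PPA and install the package (Ubuntu/Debian-style systems).
    sudo add-apt-repository ppa:boltgolt/howdy
    sudo apt update
    sudo apt install howdy

    # Enrol a face model for the current user; the installer normally wires up PAM
    # for sudo and the login screen, so little manual pam.d editing is needed.
    sudo howdy add

    # Optional: open Howdy's config to point it at the right (IR) webcam if detection fails.
    sudo howdy config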
HTTPS Client Certificate Authentication with Sidecar
From: https://medium.com/@zhimin.wen/https-client-certificate-authentication-with-sidecar-9b07d82a6389

This paper continues the exploration of enabling HTTPS for an app that doesn't implement HTTPS itself. (The first paper can be reached here.) Here we will enable client certificate authentication for a non-HTTPS app using the sidecar pattern. When client certificate authentication is turned on, the client's HTTPS connection must present a valid cert signed by the CA; otherwise, the connection will be rejected. In the last part of the paper, we examine Prometheus in IBM Cloud Private, which uses the same HTTPS sidecar pattern.

Steps to set up client certificate authentication

First, we enable client certificate authentication by adding the following lines in the nginx.conf file, as in the first paper. ... When ssl_verify_client is set to on, ssl_client_certificate needs to be set to the CA cert that was used to sign the server and client certs. If a client doesn't use a cert signed by this CA, the HTTPS connection will be rejected.

Secondly, we create the K8s secret with all the certs required:

    kubectl create secret generic hello-sidecar-nginx-certs --from-file=hello-server-cert=./hello-server.pem --from-file=hello-server-key=./hello-server-key.pem --from-file=hello-server-ca-cert=./myca.pem

Then update the K8s deployment file to mount the CA into the Nginx container. ... Apply the updated yaml file. Now the application has HTTPS client certificate authentication enabled.

Positive and Negative Test

Test without a valid cert:

    curl -k https://192.168.64.244:31463/date

To test with a valid cert, let's generate a cert signed by the CA first. Create a JSON file as below and save it as “clientRequest.json”:

    {

Generate the client cert with the client profile (as defined in the first paper):

    cd certs
    cfssl gencert -ca=myca.pem -ca-key=myca-key.pem -config=ca-config.json -profile=client -hostname="127.0.0.1" clientRequest.json | cfssljson -bare hello-client

You will have hello-client.pem, hello-client-key.pem, and hello-client.csr generated.
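Before hitting the service, it can be worth sanity-checking that the freshly generated client cert really chains back to the same CA referenced by ssl_client_certificate (a quick local check, assuming the file names used above):

    # Confirm the client cert was signed by the CA configured in the sidecar.
    openssl verify -CAfile myca.pem hello-client.pem

    # Inspect the subject and issuer if the verification fails.
    openssl x509 -in hello-client.pem -noout -subject -issuer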
Test with these keys:

    curl -k --cert certs/hello-client.pem --key certs/hello-client-key.pem https://192.168.64.244:31463/date

At this point, we have implemented HTTPS with client cert authentication for the non-HTTPS application using the K8s sidecar pattern.

ICP Prometheus

By default, Prometheus doesn't provide any HTTPS/TLS capability. IBM Cloud Private (ICP) uses the sidecar technique to enable HTTPS/TLS and client certificate authentication. Attached is part of the result of inspecting the Prometheus pod; the sidecar container, router, appears in the pod definition. The nginx.conf can be found by running … The server block excerpt is listed below:

    server {

Because of the ssl_verify_client setting, a client that needs to contact Prometheus must therefore use a certificate signed by the CA cert. The location block, which redirects traffic to Prometheus, is shown below:

    location /federate {

The API call to Prometheus will be redirected to the Prometheus container in the same pod.

Conclusion

These two papers explore the sidecar pattern for HTTPS and client certificate authentication. With this knowledge, it is easier to understand the ICP Prometheus setup and to further extend Prometheus functionality.
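As a practical aside for readers following along: one way to dump the router sidecar's effective Nginx configuration is to exec into it. The namespace and pod name below are assumptions; adjust them to your ICP installation.

    # Find the Prometheus pod (kube-system is an assumption for ICP's monitoring stack).
    kubectl -n kube-system get pods | grep prometheus

    # Dump the full Nginx configuration from the router sidecar container.
    kubectl -n kube-system exec <prometheus-pod-name> -c router -- nginx -T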
InnerSource Commons
What is the InnerSource Commons?

The InnerSource Commons (ISC) is a growing community of practitioners with the goal of creating and sharing knowledge about InnerSource: the use of open source best practices for software development within the confines of an organization. Founded in 2015, the InnerSource Commons now supports and connects over seventy companies, academic institutions, and government agencies.

The InnerSource Commons supports practitioners and those who want to learn about inner source through a broad array of activities. It provides learning paths on how to get started with inner source, curates known best practices in the form of patterns, facilitates discussion on the inner source values and principles that will lead to an inner source manifesto, and organizes the leading practitioner conference dedicated to inner source - the twice-yearly InnerSource Commons Summit. To get started, simply join the growing ISC community via our Slack channel and introduce yourself.

What is InnerSource?

InnerSource takes the lessons learned from developing open source software and applies them to the way companies develop software internally. As developers have become accustomed to working on world-class open source software, there is a strong desire to bring those practices back inside the firewall and apply them to software that companies may be reluctant to release. For companies building mostly closed source software, InnerSource can be a great tool to help break down silos, encourage internal collaboration, accelerate new engineer on-boarding, and identify opportunities to contribute software back to the open source world.

Introduction

“Inspired by the spread of open source software throughout the areas of operating systems, cloud computing, JavaScript frameworks, and elsewhere, a number of companies are mimicking the practices of the powerful open source movement to create an internal company collaboration under the rubric InnerSource. In these pages you’ll read about the experience of the leading Internet commerce facilitator PayPal, and see how inner source can benefit engineers, management, and marketing/PR departments.

“To understand the appeal of InnerSource project management, consider what has made open source software development so successful:
“InnerSource differs from classic open source by remaining within the view and control of a single organization. The “openness” of the project extends across many teams within the organization. This allows the organization to embed differentiating trade secrets into the code without fear that they will be revealed to outsiders, while benefitting from the creativity and diverse perspectives contributed by people throughout the organization. Often, the organization chooses to share parts of an InnerSource project with the public, effectively turning them into open source. When the technologies and management practices of open source are used internally, moving the project into a public arena becomes much easier.”

Oram, A. (2015) Getting Started With InnerSource. San Francisco: O’Reilly Media. Get your free copy at http://www.oreilly.com/programming/free/getting-started-with-innersource.csp
Dual Boot is Dead: Windows and Linux are now One.
I started building a machine learning workstation: a great CPU, lots of RAM, and a competent GPU, among other things. My OS of choice for almost anything was Ubuntu, except that I needed Microsoft Office for proposal writing. Office Online is just not there yet and, let's face it, LibreOffice is a disaster. So, the solution was to dual boot Ubuntu and Windows 10. The freedom you experience moving from Apple to Ubuntu is unparalleled, and the options you have building your own PC are almost infinite.

Dual boot was the answer for a long time. A million context switches later, WSL came. Thus, I started moving a portion of my workflow to Windows. But still, there were many things missing. However, WSL 2 seems to be a game-changer. In this story, I will show you how to move your development workflow to Windows 10 and WSL 2, its new features, and what to expect in the near future.

What is WSL 2

WSL 2 is the new version of the architecture in WSL. This version comes with several changes that dictate how Linux distributions interact with Windows. With this release, you get increased file system performance and full system call compatibility. Of course, you can choose to run your Linux distribution as either WSL 1 or WSL 2, and, moreover, you can switch between those versions at any time. WSL 2 is a major overhaul of the underlying architecture and uses virtualization technology and a Linux kernel to enable its new features. But Microsoft handles the nitty-gritty details so you can focus on what matters.

Installation

Microsoft promises a smooth installation experience in the near future for WSL 2 and the ability to update the Linux kernel via Windows updates. For now, the installation process is a bit more involved but nothing scary.
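For reference, the manual route at the time of writing (run from an elevated PowerShell prompt on a recent Windows 10 build; exact steps may change as Microsoft streamlines the experience) boils down to enabling two Windows features and flipping the default WSL version:

    # Enable WSL and the virtual machine platform it needs, then reboot.
    dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
    dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart

    # Make WSL 2 the default for newly installed distributions ...
    wsl --set-default-version 2

    # ... or convert an existing one (e.g. Ubuntu) and check the result.
    wsl --set-version Ubuntu 2
    wsl --list --verbose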
The C4 model for visualising software architecture
Ask somebody in the building industry to visually communicate the architecture of a building and you'll be presented with site plans, floor plans, elevation views, cross-section views and detail drawings. In contrast, ask a software developer to communicate the software architecture of a software system using diagrams and you'll likely get a confused mess of boxes and lines ... inconsistent notation (colour coding, shapes, line styles, etc), ambiguous naming, unlabelled relationships, generic terminology, missing technology choices, mixed abstractions, etc.

As an industry, we do have the Unified Modeling Language (UML), ArchiMate and SysML, but asking whether these provide an effective way to communicate software architecture is often irrelevant because many teams have already thrown them out in favour of much simpler "boxes and lines" diagrams. Abandoning these modelling languages is one thing but, perhaps in the race for agility, many software development teams have lost the ability to communicate visually.

Maps of your code

The C4 model was created as a way to help software development teams describe and communicate software architecture, both during up-front design sessions and when retrospectively documenting an existing codebase. It's a way to create maps of your code, at various levels of detail, in the same way you would use something like Google Maps to zoom in and out of an area you are interested in.

C4-PlantUML

C4-PlantUML combines the benefits of PlantUML and the C4 model, providing a simple way of describing and communicating software architectures - especially during up-front design sessions - with an intuitive language using open source and platform-independent tools. C4-PlantUML includes macros, stereotypes, and other goodies (like VSCode Snippets) for creating C4 diagrams with PlantUML.
Kubernetes From Scratch
Kubernetes without Minikube or MicroK8s

In my earlier article in this series, “Kubernetes from Scratch,” I discussed a minimal Kubernetes system. Now I’d like to add to that success by making it a more complete system. If you get Kubernetes from a cloud provider, things like storage and Ingress are most likely provided. The core Kubernetes system doesn’t provide things like Ingress, as that’s something that should integrate closely with the cloud system it’s running on.

To follow along, you should have read “Kubernetes from Scratch” and built up the system described. The system we built is four nodes running in VMs on a bare-metal server. As long as you have a similar setup, you should be able to follow along with minor adjustments. The cluster nodes are named … Also required for the second half of this article is a storage server we built in my article “Build Your Own In-Home Cloud Storage.” That server is running Ubuntu 20.04 on bare metal and has GlusterFS installed.

https://medium.com/better-programming/kubernetes-from-scratch-part-2-e30b48f7ca6b
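Since a bare-metal cluster like this ships with no Ingress controller at all, one common way to add one - my own sketch rather than the article's exact steps, assuming Helm 3 is installed - is the upstream ingress-nginx chart exposed via NodePort:

    # Add the upstream ingress-nginx chart repository and install the controller.
    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo update

    # On bare metal there is no cloud load balancer, so expose the controller via NodePort.
    helm install ingress-nginx ingress-nginx/ingress-nginx \
      --namespace ingress-nginx --create-namespace \
      --set controller.service.type=NodePort

    # Confirm the controller pod is running and note the assigned node ports.
    kubectl -n ingress-nginx get pods,svc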
Managing your Stateful Workloads in Kubernetes
QRCode Monkey