How to Deliver a Successful Technical Presentation: From Zero to Hero

Wednesday, 12 October, 2022

Introduction

I had the chance to talk about Predictive Autoscaling Patterns with Kubernetes at the Container Days 22 Conference in September of 2022. I delivered the talk with a former colleague in Hamburg, Germany, and it was an outstanding experience! The entire process of delivering the talk began when the Call for Papers opened back in March 2022. My colleague and I worked together, playing with the technology, building a better understanding of the components and preparing the labs.

In this article, I will discuss my experiences, lessons learned and suggestions for providing a successful technical presentation. 

My Experiences

As a Cloud Consultant in a previous role, I attended events such as CNCF KubeCon and the Open Source Infra Summit. I also helped with workshops, serving as booth staff, performing demos and introducing the product to attendees. Public speaking was something that always piqued my interest, but I didn’t know where to start.

One of my previous duties was to provide technical expertise to customers and help sales organizations identify potential solutions and create workshops to work with the customers. Doing this gave me a unique opportunity to introduce myself to the process of speaking; I found it interesting and a great source of self-reflection.

Developing communication skills is not something you can do just by taking a training course or listening to others speak. I consider rehearsal mandatory, as I learn something new every time. However, the best way to develop communication skills is to deliver content.

How to Select the Right Topic 

Selecting the right topic for a speech is one of the first things you should consider. The topic should be a mix of something you are comfortable with and something you have enough technical background knowledge of; it does not need to be work-related, just something you find interesting and want to discuss. 

I delivered a talk with a former colleague, Roberto Carratalá, who works for a competitor. Right now, many of the most-used technologies (Kubernetes and its SIGs, programming languages, KubeVirt and many others) are open source projects that no single company controls. Talking about these technologies can open new windows to selecting an agnostic topic you and your co-speaker can discuss. Don’t let company differences get in the way of delivering a great talk.

In our case, we decided to move forward with the Vertical Pod Autoscaler (VPA) and our architecture around it. We used examples and created use cases to showcase it. It is important to narrow the concept down to real use cases so the audience can relate them to their own situations and adapt them for their customers.

VPA is a vendor-agnostic technology that can be used with any vendor’s distribution with minimal changes. Consider talking about a technology like this, which can then be applied to vendor-specific products.

Whether you are an Engineer, Project Manager, Architect, Consultant or hold a non-technical role, we are all involved in IT. Within your area of specialization, you can talk about your experiences, what you learned, how you approached the work or even the challenges you faced along the way.

From “How to contribute to an Open Source project” to “How to write eBPF programs with Golang,” each topic will draw a different audience.

Here are some ideas: 

  • Have you recently had a good experience with a tool or project and want to share your experiences? 
  • Did you overcome a downtime situation with your customer? What a good experience to share! 
  • Business challenges and how you faced them. 
  • Are you a maintainer or contributor to a project? Take your chance and generate some hype among developers about your project. 

The bottom line is to not underestimate yourself and share your experiences; we all grow when we share! 

Practice Makes Perfect

In my experience, taking the time to practice and record yourself is important. Every time I reviewed my own recording, I found opportunities for improvement. Rehearse your delivery!

I had to understand that there is no “perfect word” to use; there is no better way to explain yourself than when you feel comfortable speaking about the topic. Use language you are comfortable with, and the audience will appreciate your understanding. 

Repeat your talk, stand up and try to feel comfortable while you’re speaking. Become familiar with the sound of your voice and the flow of the content. Once you feel comfortable enough, deliver the talk to your partner, your family or even close friends. It’s a wonderful opportunity to get initial feedback in a friendly environment, and it helped me greatly.

The Audience 

Talking to hundreds or even thousands of attendees is a great challenge but can be frightening. Try to remember that all these people are there because they’re interested in the content you created. They are not expecting to become experts after the talk, nor do they want or expect you to fail. Don’t be afraid to find ‘your’ space on the stage so that you feel more comfortable. Always tell the audience that you’re excited to be at the event and looking forward to sharing your knowledge and experience with them. Speak freely, and remember to have fun while you do! 

Own the content; a speech is not a script. Don’t expect to remember every word that you wrote because it will feel very wooden. Try to riff on your content – evolve it every time it’s delivered, sharpening the emphasis of certain sections or dropping in a bit of humor along the way.  Make sure each time you give the speech it’s a unique experience. 

The Conference 

The time had come: I had overcome the lack of self-confidence and all the doubts, and it was time to polish up the final details before giving the speech.

First, I found it useful to familiarize myself with the speaking room. If you are not told to stay in the same place (like a lectern or a marked spot on the stage), spend some time walking around the room, looking at the empty chairs, imagining yourself delivering the speech, and breathe slowly and deeply to reduce any anxiety that you feel. 

While delivering a talk is not 100% a conversation, attempt to talk to the audience; don’t focus on the first few rows and forget about the rest of the auditorium. Look at different parts of the audience when you are talking, make eye contact with them and ask questions. If possible, try to make it interactive. 

The last part of the speech usually consists of a question-and-answer section. One of the most common fears is around “what if they ask something I don’t know?” Remember that no one expects you to know everything, so don’t be afraid to recognize you don’t know something. Some questions can be tricky or too long to answer; just calm down and point to the right resources where they can find the answers from the source directly. 

We got many questions, which surprised me and proved that the audience was interested. It was fun to answer them and interact with the audience.

Don’t rush; talk through the content and take time to breathe while you are speaking. Remind yourself that you wrote the content, you own the content and nobody was forced to attend your talk; they attended freely because your content is worth it!

Conclusion 

Overall, my speaking experiences were outstanding! I delivered mine with my former colleague and friend Roberto Carratalá, and we both really enjoyed the experience. We received good feedback, including some improvements to consider for our future speeches. 

I will submit to the next call for papers, whether it is standalone or co-speaking. So get out there and get speaking!

Meet Epinio: The Application Development Engine for Kubernetes

Tuesday, 4 October, 2022

Epinio is a Kubernetes-powered application development engine. Adding Epinio to your cluster creates your own platform-as-a-service (PaaS) solution in which you can deploy apps without setting up infrastructure yourself.

Epinio abstracts away the complexity of Kubernetes so you can get back to writing code. Apps are launched by pushing their source directly to the platform, eliminating complex CD pipelines and Kubernetes YAML files. You move directly to a live instance of your system that’s accessible at a URL.

This tutorial will show you how to install Epinio and deploy a simple application.

Prerequisites

You’ll need an existing Kubernetes cluster to use Epinio. You can start a local cluster with a tool like K3s, minikube or Rancher Desktop, or use a managed service such as Azure Kubernetes Service (AKS) or Google Kubernetes Engine (GKE).

You must have the following tools to follow along with this guide:

  • Helm
  • kubectl

Install them if they’re missing from your system. You don’t need these to use Epinio day to day, but they are required for the initial installation procedure.

The steps in this guide have been tested with K3s v1.24 (Kubernetes v1.24) and minikube v1.26 (Kubernetes v1.24) on a Linux host. Additional steps may be required to run Epinio in other environments.

What Is Epinio?

Epinio is an application platform that offers a simplified development experience by using Kubernetes to automatically build and deploy your apps. It’s like having your own PaaS solution that runs in a Kubernetes cluster you can control.

Using Epinio to run your apps lets you focus on the logic of your business functions instead of tediously configuring containers and Kubernetes objects. Epinio will automatically work out which programming languages you use, build an appropriate image with a Paketo Buildpack and launch your containers inside your Kubernetes cluster. You can optionally use your own image if you’ve already got one available.

Developer experience (DX) is a hot topic because good tools reduce stress, improve productivity and encourage engineers to concentrate on their strengths without being distracted by low-level components. A simpler app deployment experience frees up developers to work on impactful changes. It also promotes experimentation by allowing new app instances to be rapidly launched in staging and test environments.

Epinio Tames Developer Workflows

Epinio is purpose-built to enhance development workflows by handling deployment for you. It’s quick to set up, simple to use and suitable for all environments from your own laptop to your production cloud. New apps can be deployed by running a single command, removing the hours of work required if you were to construct container images and deployment pipelines from scratch.

While Epinio does a lot of work for you, it’s also flexible in how apps run. You’re not locked into the platform, unlike other PaaS solutions. Because Epinio runs within your own Kubernetes cluster, operators can interact directly with Kubernetes to monitor running apps, optimize cluster performance and act on problems. Epinio is a developer-oriented layer that imbues Kubernetes with greater ease of use.

The platform is compatible with most Kubernetes environments. It’s edge-friendly and capable of running with 2 vCPUs and 4 GB of RAM. Epinio currently supports Kubernetes versions 1.20 to 1.23 and is tested with K3s, k3d, minikube and Rancher Desktop.

How Does Epinio Work?

Epinio wraps several Kubernetes components in higher-level abstractions that allow you to push code straight to the platform. Your Epinio installation inspects your source, selects an appropriate buildpack and creates Kubernetes objects to deploy your app.

The deployment process is fully automated and handled entirely by Epinio. You don’t need to understand containers or Kubernetes to launch your app. Pushing up new code sets off a sequence of actions that allows you to access the project at a public URL.

Epinio first compresses your source and uploads the archive to a MinIO object storage server that runs in your cluster. It then “stages” your application by matching its components to a Paketo Buildpack. This process produces a container image that can be used with Kubernetes.

Once Epinio is installed in your cluster, you can interact with it using the CLI. Epinio also comes with a web UI for managing your applications.

Installing Epinio

Epinio is usually installed with its official Helm chart. This bundles everything needed to run the system, although there are still a few prerequisites.

Before deploying Epinio, you must have an ingress controller available in your cluster. NGINX and Traefik provide two popular options. Ingresses let you expose your applications using URLs instead of raw hostnames and ports. Epinio requires your apps to be deployed with a URL, so it won’t work without an ingress controller. New deployments automatically generate a URL, but you can manually assign one instead. Most popular single-node Kubernetes distributions such as K3s, minikube and Rancher Desktop come with one either built-in or as a bundled add-on.

You can manually install the Traefik ingress controller if you need to by running the following commands:

$ helm repo add traefik https://helm.traefik.io/traefik
$ helm repo update
$ helm install traefik --create-namespace --namespace traefik traefik/traefik

You can skip this step if you’re following along using minikube or K3s.

Preparing K3s

Epinio on K3s doesn’t have any special prerequisites. You’ll need to know your machine’s IP address, though—use it instead of 192.168.49.2 in the following examples.
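If you’re unsure of the address, one quick way to list the host’s IP addresses on most Linux distributions is:

$ hostname -I

Pick the address of the interface your cluster traffic uses.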

Preparing minikube

Install the official minikube ingress add-on before you try to run Epinio:

$ minikube addons enable ingress

You should also double-check your minikube IP address with minikube ip:

$ minikube ip
192.168.49.2

Use this IP address instead of 192.168.49.2 in the following examples.

Installing Epinio on K3s or minikube

Epinio needs cert-manager so it can automatically acquire TLS certificates for your apps. You can install cert-manager using its own Helm chart:

$ helm repo add jetstack https://charts.jetstack.io
$ helm repo update
$ helm install cert-manager --create-namespace --namespace cert-manager jetstack/cert-manager --set installCRDs=true

All other components are included with Epinio’s Helm chart. Before you continue, set up a domain to use with Epinio. It needs to be a wildcard where all subdomains resolve back to the IP address of your ingress controller or load balancer. You can use a service such as sslip.io to set up a magic domain that fulfills this requirement while running Epinio locally. sslip.io runs a DNS service that resolves to the IP address given in the hostname used for the query. For instance, any request to *.192.168.49.2.sslip.io will resolve to 192.168.49.2.
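You can check that the wildcard resolution works before going any further. Assuming the dig utility is installed, any subdomain should resolve to the embedded IP address:

$ dig +short epinio.192.168.49.2.sslip.io
192.168.49.2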

Next, run the following commands to add Epinio to your cluster. Change the value of global.domain if you’ve set up a real domain name:

$ helm repo add epinio https://epinio.github.io/helm-charts
$ helm install epinio --create-namespace --namespace epinio epinio/epinio --set global.domain=192.168.49.2.sslip.io

You should get an output similar to the following. It provides information about the Helm chart deployment and some getting started instructions from Epinio.

NAME: epinio
LAST DEPLOYED: Fri Aug 19 17:56:37 2022
NAMESPACE: epinio
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
To interact with your Epinio installation download the latest epinio binary from https://github.com/epinio/epinio/releases/latest.

Login to the cluster with any of these:

    `epinio login -u admin https://epinio.192.168.49.2.sslip.io`
    `epinio login -u epinio https://epinio.192.168.49.2.sslip.io`

or go to the dashboard at: https://epinio.192.168.49.2.sslip.io

If you didn't specify a password, the default one is `password`.

For more information about Epinio, feel free to check out https://epinio.io/ and https://docs.epinio.io/.

Epinio is now installed and ready to use. If you hit a problem and Epinio doesn’t start, refer to the documentation to check any specific steps required for compatibility with your Kubernetes distribution.
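While you troubleshoot, it’s worth watching the Epinio and cert-manager pods come up; everything should eventually reach the Running state:

$ kubectl get pods -n epinio
$ kubectl get pods -n cert-manager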

Installing the CLI

Install the Epinio CLI from the project’s GitHub releases page. It’s available as a self-contained binary for Linux, Mac and Windows. Download the appropriate binary and move it into a location on your PATH:

$ wget https://github.com/epinio/epinio/releases/latest/download/epinio-linux-x86_64
$ sudo mv epinio-linux-x86_64 /usr/local/bin/epinio
$ sudo chmod +x /usr/local/bin/epinio

Try running the epinio version command to check the installation:

$ epinio version
Epinio Version: v1.1.0
Go Version: go1.18.3

Next, you can connect the CLI to the Epinio installation running in your cluster.

Connecting the CLI to Epinio

Login instructions are shown in the Helm output displayed after you install Epinio. The Epinio API server is exposed at epinio.<global.domain>. The default user credentials are admin and password. Run the following command in your terminal to connect your CLI to Epinio, assuming you used 192.168.49.2.sslip.io as your global domain:

$ epinio login -u admin https://epinio.192.168.49.2.sslip.io

You’ll be prompted to trust the self-signed certificate generated by your Kubernetes ingress controller if you’re using a magic domain without setting up SSL. Press the Y key at the prompt to continue:

Logging in to Epinio in the CLI

You should see a green Login successful message that confirms the CLI is ready to use.

Accessing the Web UI

The Epinio web UI is accessed by visiting your global domain in your browser. The login credentials match the CLI, defaulting to admin and password. You’ll see a browser certificate warning and a prompt to continue when you’re using an untrusted SSL certificate.

Epinio web UI

Once logged in, you can view your deployed applications, interactively create a new one using a form and manage templates for quickly launching new app instances. The UI replicates most of the functionality available in the CLI.

Creating a Simple App

Now you’re ready to start your first Epinio app from a directory containing your source. You don’t have to create a container image or run any external tools.

You can use the following Node.js code if you need something simple to deploy. Save it to a file called index.js inside a new directory. It runs an Express web server that responds to incoming HTTP requests with a simple message:

const express = require('express')
const app = express()
const port = 8080;

app.get('/', (req, res) => {
  res.send('This application is served by Epinio!')
})

app.listen(port, () => {
  console.log(`Epinio application is listening on port ${port}`)
});

Next, use npm to install Express as a dependency in your project:

$ npm install express

The Epinio CLI has a push command that deploys the contents of your working directory to your Kubernetes cluster. The only required argument is a name for your app.

$ epinio push -n epinio-demo

Press the Enter key at the prompt to confirm your deployment. Your terminal will fill with output as Epinio logs what’s happening behind the scenes. It first uploads your source to its internal MinIO object storage server, then acquires the right Paketo Buildpack to create your application’s container image. The final step adds the Kubernetes deployment, service and ingress resources to run the app.
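If you’re curious about what was created on your behalf, you can inspect the generated Kubernetes objects directly. Epinio deploys apps into the workspace namespace by default, as the CLI output later in this guide shows:

$ kubectl get deployments,services,ingresses -n workspace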

Deploying an application with Epinio

Wait until the green App is online message appears in your terminal, then visit the displayed URL in your browser to see your live application:

App is online

If everything has worked correctly, you’ll see This application is served by Epinio! when using the source code provided above.

Application running in Epinio

Managing Deployed Apps

App updates are deployed by repeating the epinio push command:

$ epinio push -n epinio-demo

You can retrieve a list of deployed apps with the Epinio CLI:

$ epinio app list
Namespace: workspace

✔️  Epinio Applications:
|        NAME         |            CREATED            | STATUS |                     ROUTES                     | CONFIGURATIONS | STATUS DETAILS |
|---------------------|-------------------------------|--------|------------------------------------------------|----------------|----------------|
| epinio-demo         | 2022-08-23 19:26:38 +0100 BST | 1/1    | epinio-demo-a279f.192.168.49.2.sslip.io         |                |                |

The app logs command provides access to the logs written by your app’s standard output and error streams:

$ epinio app logs epinio-demo

🚢  Streaming application logs
Namespace: workspace
Application: epinio-demo
🕞  [repinio-demo-057d58004dbf05e7fb7516a0c911017766184db8-6d9fflt2w] repinio-demo-057d58004dbf05e7fb7516a0c911017766184db8 Epinio application is listening on port 8080

Scale your application with more instances using the app update command:

$ epinio app update epinio-demo --instances 3
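To confirm the change, run epinio app list again; the STATUS column shows ready replicas, so it should read 3/3 once the extra instances are up:

$ epinio app list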

You can delete an app with app delete. This will completely remove the deployment from your cluster, rendering it inaccessible. Epinio won’t touch the local source code on your machine.

$ epinio app delete epinio-demo

You can perform all these operations within the web UI as well.

Conclusion

Epinio makes application development in Kubernetes simple because you can go from code to a live URL in one step. Running a single command gives you a live deployment that runs in your own Kubernetes cluster. It lets developers launch applications without surmounting the Kubernetes learning curve, while operators can continue using their familiar management tools and processes.

Epinio can be used anywhere you’re working, whether on your own workstation or as a production environment in the cloud. Local setup is quick and easy with zero configuration, letting you concentrate on your code. The platform uses Paketo Buildpacks to discover your source, so it’s language and framework-agnostic.

Epinio is one of the many offerings from SUSE, which provides open source technologies for Linux, cloud computing and containers. Epinio is SUSE’s solution to support developers building apps on Kubernetes, sitting alongside products like Rancher Desktop that simplify Kubernetes cluster setup. Install and try Epinio in under five minutes so you can push app deployments straight from your source.

How to Explain Zero Trust to Your Tech Leadership: Gartner Report

Wednesday, 24 August, 2022

Does it seem like everyone’s talking about Zero Trust? Maybe you know everything there is to know about Zero Trust, especially Zero Trust for container security. But if your Zero Trust initiatives are being met with brick walls or blank stares, maybe you need some help from Gartner®. And they’ve got just the thing to help you explain the value of Zero Trust to your leadership: it’s called Quick Answer: How to Explain Zero Trust to Technology Executives.

So What is Zero Trust?

According to authors Charlie Winckless and Neil MacDonald from Gartner, “Zero Trust is a misnomer; it does not mean ‘no trust’ but zero implicit trust and use of risk-appropriate, explicit trust. To obtain funding and support for Zero Trust initiatives, security and risk management leaders must be able to explain the benefits to their technical executive leaders.”

Explaining Zero Trust to Technology Executives

This Quick Answer starts by introducing the concept of Zero Trust so that you can do the same.  According to the authors, “Zero Trust is a mindset (or paradigm) that defines key security initiatives. A Zero Trust mindset extends beyond networking and can be applied to multiple aspects of enterprise systems. It is not solely purchased as a product or set of products.” Furthermore,

”Zero Trust involves systematically removing implicit trust in IT infrastructures.”

The report also helps you explain the business value of Zero Trust to your leadership. For example, “Zero trust forms a guiding principle for security architectures that improve security posture and increase cyber-resiliency,” write Winckless and MacDonald.

Next Steps to Learn about Zero Trust Container Security

Get this report and learn more about Zero Trust, how it can bring greater security to your container infrastructure and how you can explain the need for Zero Trust to your leadership team.

For even more on Zero Trust, read our new book, Zero Trust Container Security for Dummies.

Cloud Modernization Best Practices

Monday, 8 August, 2022

Cloud services have revolutionized the technical industry, and services and tools of all kinds have been created to help organizations migrate to the cloud and become more scalable in the process. This migration is often referred to as cloud modernization.

To successfully implement cloud modernization, you must adapt your existing processes for future feature releases. This could mean adjusting your continuous integration/continuous delivery (CI/CD) pipeline and its technical implementation, updating or redesigning your release approval process (e.g., from manual to automated approvals), or making other changes to your software development lifecycle.

In this article, you’ll learn some best practices and tips for successfully modernizing your cloud deployments.

Best practices for cloud modernization

The following are a few best practices that you should consider when modernizing your cloud deployments.

Split your app into microservices where possible

Most existing applications deployed on-premises were developed and deployed with a monolithic architecture in mind. In this context, monolithic architecture means that the application is single-tiered and has no modularity. This makes it hard to bring new versions into a production environment because any change in the code can influence every part of the application. Often, this leads to a lot of additional and, at times, manual testing.

Monolithic applications often do not scale horizontally and can cause various problems, including complex development, tight coupling, slow application starts due to application size, and reduced reliability.

To address the challenges that a monolithic architecture presents, you should consider splitting your monolith into microservices. This means that your application is split into different, loosely coupled services that each serve a single purpose.

All of these services are independent solutions, but they are meant to work together to contribute to a larger system at scale. This increases reliability because one failing service does not take down the whole application with it. You also gain the freedom to scale each component of your application without affecting the others. On the development side, since each component is independent, you can split the work among your team and develop multiple components in parallel to ensure faster delivery.

For example, the Lyft engineering team managed to quickly grow from a handful of different services to hundreds of services while keeping their developer productivity up. As part of this process, they included automated acceptance testing as part of their pipeline to production.

Isolate apps away from the underlying infrastructure

In many older applications and workloads, engineers built scripts or pieces of code that were tightly coupled to the infrastructure they were deployed on. They wrote scripts that referenced specific folders or required predefined libraries to be available in the environment in which the scripts were executed. Often, this was due to required configurations on the hardware infrastructure or the operating system, or due to dependencies on certain packages the application needed.

Most cloud providers refer to this as a shared responsibility model. In this model, the cloud provider or service provider takes responsibility for the parts of the services being used, and the service user takes responsibility for protecting and securing the data for any services or infrastructure they use. The interaction between the services or applications deployed on the infrastructure is well-defined through APIs or integration points. This means that the more you move away from managing and relying on the underlying infrastructure, the easier it becomes for you to replace it later. For instance, if required, you only need to adjust the APIs or integration points that connect your application to the underlying infrastructure.

To isolate your apps, you can containerize them, which bakes your application into a repeatable and reproducible container. To further separate your apps from the underlying infrastructure, you can move toward serverless-first development, which includes a serverless architecture. You will be required to re-architect your existing applications to be able to execute on AWS Lambda or Azure Functions or adopt other serverless technologies or services.
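As a rough sketch of the containerization step mentioned above, packaging a small Node.js service might look like the following; the file names and base image are placeholders rather than a recommendation:

$ cat > Dockerfile <<'EOF'
# Self-contained image for a small Node.js service
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
CMD ["node", "index.js"]
EOF
$ docker build -t my-service:1.0 .
$ docker run -p 8080:8080 my-service:1.0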

While going serverless is recommended in some cases, such as simple CRUD operations or applications with high scaling demands, it’s not a requirement for successful cloud modernization.

Pay attention to your app security

As you begin to incorporate cloud modernization, you’ll need to ensure that any deliverables you ship to your clients are secure and follow a shift-left process. This process lets you quickly provide feedback to your developers by incorporating security checks and guardrails early in your development lifecycle (eg running static code analysis directly after a commit to a feature branch). And to keep things secure at all times during the development cycle, it’s best to set up continuous runtime checks for your workloads. This will ensure that you actively catch future issues in your infrastructure and workloads.

Quickly delivering features, functionality, or bug fixes to customers gives you and your organization more responsibility in ensuring automated verifications in each stage of the software development lifecycle (SDLC). This means that in each stage of the delivery chain, you will need to ensure that the delivered application and customer experience are secure; otherwise, you could expose your organization to data breaches that can cause reputational risk.

Making your deliverables secure includes ensuring that any personally identifiable information is encrypted in transit and at rest. However, it also requires that you ensure your application does not have open security risks. This can be achieved by running static code analysis tools like SonarQube or Checkmarx.

In this blog post, you can read more about the importance of application security in your cloud modernization journey.

Use infrastructure as code and configuration as code

Infrastructure as code (IaC) is an important part of your cloud modernization journey. For instance, if you want to be able to provision infrastructure (i.e., the required hardware, network and databases) in a repeatable way, IaC empowers you to apply existing software development practices (such as pull requests and code reviews) to infrastructure changes. Using IaC also gives you immutable infrastructure, which prevents accidentally introducing risk while making changes to existing infrastructure.

Configuration drift is a prominent issue with making ad hoc changes to an infrastructure. If you make any manual changes to your infrastructure and forget to update the configuration, you might end up with an infrastructure that doesn’t match its own configuration. Using IaC enforces that you make changes to the infrastructure only by updating the configuration code, which helps maintain consistency and a reliable record of changes.

All the major cloud providers have their own definition language for IaC, such as AWS CloudFormation, Google Cloud Platform (GCP) and Microsoft Azure.

Ensuring that you can deploy and redeploy your application or workload in a repeatable manner will empower your teams further because you can deploy the infrastructure in additional regions or target markets without changing your application. If you don’t want to use any of the major cloud providers’ offerings to avoid vendor lock-in, other IaC alternatives include Terraform and Pulumi. These tools offer capabilities to deploy infrastructure into different cloud providers from a single codebase.
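Whichever tool you choose, the day-to-day workflow is similar. With Terraform, for example, every change flows through a reviewable plan before it touches your infrastructure (a minimal sketch, assuming a Terraform project already exists in the current directory):

$ terraform init
$ terraform plan -out=tfplan   # review the proposed changes
$ terraform apply tfplan       # apply exactly the plan that was reviewed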

Another way of writing IaC is the AWS Cloud Development Kit (CDK), which has unique capabilities that make it a good choice for writing IaC while driving cultural change within your organization. For instance, AWS CDK lets you write automated unit tests for your IaC. From a cultural perspective, it allows developers to write IaC in their preferred programming language, which means developers can be part of a DevOps team without needing to learn a new language. AWS CDK can be used to quickly develop and deploy infrastructure on AWS, while cdk8s targets Kubernetes and the Cloud Development Kit for Terraform (CDKTF) targets Terraform.

After adopting IaC, it’s also recommended to manage all your configuration as code (CaC). With CaC, you can put the same guardrails (i.e., pull requests) around configuration changes that are required for any code change in a production environment.

Pay attention to resource usage

It’s common for new entrants to the cloud to miss out on tracking their resource consumption while they’re in the process of migrating. Some organizations start with roughly 20 percent more resources than they need, while others forget to set up restricted access to prevent overuse. This is why tracking the resource usage of your new cloud infrastructure from day one is very important.

There are a couple of things you can do about this. The first, very high-level solution is to set budget alerts so that you’re notified when your resources start to cost more than they should in a given time period. The next step is to go a level deeper and set up cost consolidation for each resource used in the cloud. This will help you understand which resources are responsible for overrunning your budget.

The final and very effective solution is to track and audit the usage of all resources in your cloud. This will give you a direct answer as to why a certain resource overshot its expected budget and might even point you towards the root cause and probable solutions for the issue.

Culture and process recommendations for cloud modernization

How cloud modernization impacts your organization’s culture and processes often goes unnoticed. If you really want to implement cloud modernization, you need a drastic mindset change from every engineer in your organization.

Modernize SDLC processes

Oftentimes, organizations with a more traditional, non-cloud delivery model follow a checklist-based approach for their SDLC. During your cloud modernization journey, existing SDLC processes will need to be enhanced to be able to cope with the faster delivery of new application versions to the production environment. Verifications that are manual today will need to be automated to ensure faster response times. In addition, client feedback needs to flow faster through the organization to be quickly incorporated into software deliverables. Different tools, such as SecureStack and SUSE Manager, can help automate and improve efficiency in your SDLC, as they take away the burden of manually managing rules and policies.

Drive cultural change toward blameless conversations

As your cloud journey continues to evolve and you need to deliver new features faster or quickly fix bugs as they arise, this higher change frequency and higher usage of applications will lead to more incidents and cause disruptions. To avoid attrition and arguments within the DevOps team, it’s important to create a culture of blameless communication. Blameless conversations are the foundation of a healthy DevOps culture.

One way you can do this is by running blameless post-mortems. A blameless post-mortem is usually set up after a negative experience within an organization. In the post-mortem, which is usually run as a meeting, everyone explains his or her view on what happened in a non-accusing, objective way. If you facilitate a blameless post-mortem, you need to emphasize that there is no intention of blaming or attacking anyone during the discussion.

Track key performance metrics

Google’s annual State of DevOps report uses four key metrics to measure DevOps performance: deploy frequency, lead time for changes, time to restore service, and change fail rate. While this article doesn’t focus specifically on DevOps, tracking these four metrics is also beneficial for your cloud modernization journey because it allows you to compare yourself with other industry leaders. Any improvement of key performance indicators (KPIs) will motivate your teams and ensure you reach your goals.

One of the key things you can measure is the duration of your modernization project. The project’s duration will directly impact the project’s cost, which is another important metric to pay attention to in your cloud modernization journey.

Ultimately, different companies will prioritize different KPIs depending on their goals. The most important thing is to pick metrics that are meaningful to you. For instance, a software-as-a-service (SaaS) business hosting a rapidly growing consumer website will need to track the time it takes to deliver a new feature (from commit to production). However, this metric isn’t meant for a traditional bank that only updates its software once a year.

You should review your chosen metrics regularly. Are they still in line with your current goals? If not, it’s time to adapt.

Conclusion

Migrating your company to the cloud requires changing the entirety of your applications or workloads. But it doesn’t stop there. In order to effectively implement cloud modernization, you need to adjust your existing operations, software delivery process, and organizational culture.

In this roundup, you learned about some best practices that can help you in your cloud modernization journey. By isolating your applications from the underlying infrastructure, you gain flexibility and the ability to shift your workloads easily between different cloud providers. You also learned how implementing a modern SDLC process can help your organization protect your customers’ data and avoid the reputational damage caused by security breaches.

SUSE supports enterprises of all sizes on their cloud modernization journey through their Premium Technical Advisory Services. If you’re looking to restructure your existing solutions and accelerate your business, SUSE’s cloud native transformation approach can help you avoid common pitfalls and accelerate your business transformation.

Learn more in the SUSE & Rancher Community. We offer free classes on Kubernetes, Rancher, and more to support you on your cloud native learning path.

Managing Your Hyperconverged Network with Harvester

Friday, 22 July, 2022

Hyperconverged infrastructure (HCI) is a data center architecture that uses software to provide a scalable, efficient, cost-effective way to deploy and manage resources. HCI virtualizes and combines storage, computing, and networking into a single system that can be easily scaled up or down as required.

A hyperconverged network, the networking component of the HCI stack, helps simplify network management for your IT infrastructure and reduce costs by virtualizing your network. Of the three, network virtualization is the most complicated because you need to virtualize the physical controllers and switches while partitioning the network to provide the isolation and bandwidth that storage and compute require. HCI allows organizations to simplify their IT infrastructure through a single control plane while reducing costs and setup time.

This article will dive deeper into HCI with a new tool from SUSE called Harvester. By using Kubernetes’ Container Network Interface (CNI) mechanisms, Harvester enables you to better manage the network in an HCI. You’ll learn the key features of Harvester and how to use it with your infrastructure.

Why you should use Harvester

The data center market offers plenty of proprietary virtualization platforms, but few solutions are both open source and enterprise-grade. Harvester fills that gap. The HCI solution built on Kubernetes has garnered about 2,200 GitHub stars as of this article.

In addition to traditional virtual machines (VMs), Harvester supports containerized environments, bridging the gap between legacy and cloud native IT. Harvester allows enterprises to replicate HCI instances across remote locations while managing these resources through a single pane.

Following are several reasons why Harvester could be ideal for your organization.

Open source solution

Most HCI solutions are proprietary, requiring complicated licenses, high fees and support plans to implement across your data centers. Harvester is a free, open source solution with no license fees or vendor lock-in, and it supports environments ranging from core to edge infrastructure. You can also submit a feature request or issue on the GitHub repository, and engineers review the suggestions, unlike proprietary software that often updates too slowly for market demands and only offers support for existing versions.

There is an active community that helps you adopt Harvester and offers troubleshooting help. If needed, you can buy a support plan to receive round-the-clock assistance from SUSE support engineers.

Rancher integration

Rancher is an open source platform from SUSE that allows organizations to run containers in clusters while simplifying operations and providing security features. Harvester and Rancher, developed by the same engineering team, work together to manage VMs and Kubernetes clusters across environments in a single pane.

Importing an existing Harvester installation is as easy as clicking a few buttons on the Rancher virtualization management page. The tight integration enables you to use authentication and role-based access control for multitenancy support across Rancher and Harvester.

This integration also allows for multicluster management and load balancing of persistent storage resources in both VM and container environments. You can deploy workloads to existing VMs and containers on edge environments to take advantage of edge processing and data analytics.

Lightweight architecture

Harvester was built with the ethos and design principles of the Cloud Native Computing Foundation (CNCF), so it’s lightweight with a small footprint. Despite that, it’s powerful enough to orchestrate VMs and support edge and core use cases.

The three main components of Harvester are:

  • Kubernetes: Used as the Harvester base to produce an enterprise-grade HCI.
  • Longhorn: Provides distributed block storage for your HCI needs.
  • KubeVirt: Provides a VM management kit on top of Kubernetes for your virtualization needs.

The best part is that you don’t need experience in these technologies to use Harvester.

What Harvester offers

As an HCI solution, Harvester is powerful and easy to use, with a web-based dashboard for managing your infrastructure. It offers a comprehensive set of features, including the following:

VM lifecycle management

If you’re creating Windows or Linux VMs on the host, Harvester supports cloud-init, which allows you to assign a startup script to a VM instance that runs when the VM boots up.

The custom cloud-init startup scripts can contain custom user data or network configuration and are inserted into a VM instance using a temporary disk. Using the QEMU guest agent, you can dynamically inject SSH keys into your VM through the dashboard via cloud-init.
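For reference, a minimal cloud-init user data snippet of the kind you might supply when creating a VM could look like the following; the SSH key and package list are placeholders:

#cloud-config
ssh_authorized_keys:
  - ssh-ed25519 AAAA...example user@example.com   # placeholder key
packages:
  - qemu-guest-agent
runcmd:
  - systemctl enable --now qemu-guest-agent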

Destroying and creating a VM is a click away with a clearly defined UI.

VM live migration support

VMs inside Harvester are created on hosts or bare-metal infrastructure. One of the essential tasks in any infrastructure is reducing downtime and increasing availability. Harvester offers a high-availability solution with VM live migration.

If you want to move a VM to Host 1 while you perform maintenance on Host 2, you only need to click migrate. After the migration, the VM’s memory pages and disk blocks are transferred to the new host.

Supported VM backup and restore

Backing up a VM allows you to restore it to a previous state if something goes wrong. This backup is crucial if you’re running a business or other critical application on the machine; otherwise, you could lose data or necessary workflow time if the machine goes down.

Harvester allows you to easily back up your machines in Amazon Simple Storage Service (Amazon S3) or network-attached storage (NAS) devices. After configuring your backup target, click Take Backup on the virtual machine page. You can use the backup to replace or restore a failed VM or create a new machine on a different cluster.

Network interface controllers

Harvester offers a CNI plug-in to connect network providers and configuration management networks. There are two network interface controllers available, and you can choose either or both, depending on your needs.

Management network

This is the default networking method for a VM, using the eth0 interface. The network is configured with the Canal CNI plug-in. A VM on this network may change IP address after a reboot, and it can only be reached from within the cluster nodes because there’s no DHCP server.

Secondary network

The secondary network controller uses the Multus and bridge CNI plug-ins to implement its customized Layer 2 bridge VLAN. VMs are connected to the host network via a Linux bridge and are assigned IPv4 addresses.

VMs with IPv4 addresses can be accessed from both internal and external networks through the physical switch.
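Harvester sets this up for you through its UI, but under the hood the attachment is the kind of object Multus expresses as a NetworkAttachmentDefinition. A generic (not Harvester-specific) bridge example might look roughly like this, with the bridge name and VLAN ID as placeholders:

$ kubectl apply -f - <<'EOF'
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan100
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "bridge",
    "bridge": "br0",
    "vlan": 100,
    "ipam": {}
    }'
EOF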

When to use Harvester

There are multiple use cases for Harvester. The following are some examples:

Host management

Harvester dashboards support viewing infrastructure nodes from the host page. Harvester’s HCI capabilities make features like live migration possible, and Kubernetes provides fault tolerance to keep your workloads running on other nodes if one node goes down.

VM management

Harvester offers flexible VM management, with the ability to create Windows or Linux VMs easily and quickly. You can mount volumes to your VM if needed and switch between the administration and a secondary network, according to your strategy.

As noted above, live migration, backups, and cloud-init help manage VM infrastructure.

Monitoring

Harvester has built-in monitoring integration with Prometheus and Grafana, which are installed automatically during setup. You can observe CPU, memory and storage metrics, as well as more detailed metrics such as CPU utilization, load average, network I/O and traffic. Metrics are available at both the host level and the individual VM level.

These stats help ensure your cluster is healthy and provide valuable details when troubleshooting your hosts or machines. You can also pop out the Grafana dashboard for more detailed metrics.

Conclusion

Harvester is the HCI solution you need to manage and improve your hyperconverged infrastructure. The open source tool provides storage, networking and compute in a single pane of glass that’s scalable, reliable and easy to use.

Harvester is the latest innovation brought to you by SUSE. This open source leader provides enterprise Linux solutions, such as Rancher and K3s, designed to help organizations more easily achieve digital transformation.

Get started

For more on Harvester or to get started, check the official documentation.

A Path to Legacy Application Modernization Through Kubernetes

Wednesday, 6 July, 2022

Legacy applications may have multiple services bundled into the same deployment unit without a logical grouping. They’re challenging to maintain because changes to one part of the application require changing other tightly coupled parts, making it harder to add or modify features. Scaling such applications is also tricky because it requires adding more hardware instances connected to load balancers. This takes a lot of manual effort and is prone to errors.

Modernizing a legacy application requires you to visualize the architecture from a brand-new perspective, redesigning it to support horizontal scaling, high availability and code maintainability. This article explains how to modernize legacy applications using Kubernetes as the foundation and suggests three tools to make the process easier.

Using Kubernetes to modernize legacy applications

A legacy application can only meet a modern-day application’s scalability and availability requirements if it’s redesigned as a collection of lightweight, independent services.

Another critical part of modern application architecture is the infrastructure. Adding more server resources to scale individual services can lead to a large overhead that you can’t automate, which is where containers can help. Containers are self-contained, lightweight packages that include everything needed for a service to run. Combine this with a cluster of hardware instances, and you have an infrastructure platform where you can deploy and scale the application runtime environment independently.

Kubernetes can create a scalable and highly available infrastructure platform using container clusters. Moving legacy applications from physical or virtual machines to Kubernetes-hosted containers offers many advantages, including the flexibility to use on-premises and multi-cloud environments, automated container scheduling and load balancing, self-healing capability, and easy scalability.

Organizations generally adopt one of two approaches to deploy legacy applications on Kubernetes: using virtual machines and redesigning the application.

Using virtual machines

A monolithic application’s code and dependencies are embedded in a virtual machine (VM) so that images of the VM can run on Kubernetes. Frameworks like Rancher provide a one-click solution to run applications this way. The disadvantage is that the monolith remains unchanged, which doesn’t achieve the fundamental principle of using lightweight container images. It is also possible to run parts of the application in VMs and containerize the less complex ones. This hybrid approach helps break down the monolith to some extent without a huge refactoring effort. Tools like Harvester can help manage the integration in this hybrid approach.

Redesigning the application

Redesigning a monolithic application to support container-based deployment is a challenging task that involves separating the application’s modules and recreating them as stateless and stateful services. Containers, by nature, are stateless and require additional mechanisms to handle the storage of state information. It’s common to use the distributed storage of the container orchestration cluster or third-party services for such persistence.
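In Kubernetes terms, that persistence typically surfaces as a PersistentVolumeClaim that the stateful service mounts. A minimal sketch is shown below; the claim name and size are illustrative, and the storage class depends on your cluster:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: orders-db-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
EOF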

Organizations are more likely to adopt the first approach when the legacy application needs to move to a Kubernetes-based solution as soon as possible. This way, they can have a Kubernetes-based solution running quickly with less business impact and then slowly move to a completely redesigned application. Although Kubernetes migration has its challenges, some tools can simplify this process. The following are three such solutions.

Rancher

Rancher provides a complete container management platform for Kubernetes, giving you the tools to successfully run Kubernetes anywhere. It’s designed to simplify the operational challenges of running multiple Kubernetes clusters across different infrastructure environments. Rancher provides developers with a complete Kubernetes environment, irrespective of the backend, including centralized authentication, access control and observability features:

  • Unified UI: Most organizations have multiple Kubernetes clusters. DevOps engineers can sometimes face challenges when manually provisioning, managing, monitoring and securing thousands of cluster nodes while establishing compliance. Rancher lets engineers manage all these clusters from a single dashboard.
  • Multi-environment deployment: Rancher helps you create Kubernetes clusters across multiple infrastructure environments like on-premises data centers, public clouds and edge locations without needing to know the nuances of each environment.
  • App catalog: The Rancher app catalog offers different application templates. You can easily roll out complex application stacks on top of Kubernetes with the click of a button. One example is Longhorn, a distributed storage mechanism to help store state information.
  • Security policies and role-based access control: Rancher provides a centralized authentication mechanism and role-based access control (RBAC) for all managed clusters. You can also create pod-level security policies.
  • Monitoring and alerts: Rancher offers cluster monitoring facilities and the ability to generate alerts based on specific conditions. It can help transport Kubernetes logs to external aggregators.

Harvester

Harvester is an open source, hyperconverged infrastructure solution. It combines KubeVirt, a virtual machine add-on, and Longhorn, a cloud native, distributed block storage add-on along with many other cloud native open source frameworks. Additionally, Harvester is built on Kubernetes itself.

Harvester offers the following benefits to your Kubernetes cluster:

  • Support for VM workloads: Harvester enables you to run VM workloads on Kubernetes. Running monolithic applications this way helps you quickly migrate your legacy applications without the need for complex cluster configurations.
  • Cost-effective storage: Harvester uses directly connected storage drives instead of external SANs or cloud-based block storage. This helps significantly reduce costs.
  • Monitoring features: Harvester comes with Prometheus, an open source monitoring solution supporting time series data. Additionally, Grafana, an interactive visualization platform, is a built-in integration of Harvester. This means that users can see VM or Kubernetes cluster metrics from the Harvester UI.
  • Rancher integration: Harvester comes integrated with Rancher by default, so you can manage multiple Harvester clusters from the Rancher management UI. It also integrates with Rancher’s centralized authentication and RBAC.

Longhorn

Longhorn is a distributed cloud storage solution for Kubernetes. It’s an open source, cloud native project originally developed by Rancher Labs, and it integrates with the Kubernetes persistent volume API. It helps organizations use a low-cost persistent storage mechanism for saving container state information without relying on cloud-based object storage or expensive storage arrays. Since it’s deployed on Kubernetes, Longhorn can be used with any storage infrastructure.

Longhorn offers the following advantages:

  • High availability: Longhorn’s microservice-based architecture and lightweight nature make it a highly available service. Its storage engine only needs to manage a single volume, dramatically simplifying the design of storage controllers. If there’s a crash, only the volume served by that engine is affected. The Longhorn engine is lightweight enough to support as many as 10,000 instances.
  • Incremental snapshots and backups: Longhorn’s UI allows engineers to create scheduled jobs for automatic snapshots and backups. It’s possible to execute these jobs even when a volume is detached. There’s also an adequate provision to prevent existing data from being overwritten by new data.
  • Ease of use: Longhorn comes with an intuitive dashboard that provides information about volume status, available storage and node status. The UI also helps configure nodes, set up backups and change operational settings.
  • Ease of deployment: Setting up and deploying Longhorn requires just a single click from the Rancher marketplace. It’s a simple process even from the command-line interface, because it involves running only a few commands (see the sketch after this list). Longhorn is implemented as a container storage interface (CSI) plug-in.
  • Disaster recovery: Longhorn supports creating disaster recovery (DR) volumes in separate Kubernetes clusters. When the primary cluster fails, it can fail over to the DR volume. Engineers can configure recovery time and point objectives when setting up that volume.
  • Security: Longhorn supports data encryption at rest and in motion. It uses Kubernetes secret storage for storing the encryption keys. By default, backups of encrypted volumes are also encrypted.
  • Cost-effectiveness: Being open source and easily maintainable, Longhorn provides a cost-effective alternative to the cloud or other proprietary services.
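To give a sense of the command-line path mentioned above, installing Longhorn with its Helm chart looks roughly like this (namespace and release name follow the project’s documented defaults):

$ helm repo add longhorn https://charts.longhorn.io
$ helm repo update
$ helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace
$ kubectl -n longhorn-system get pods   # wait until everything is Running

Once the pods are up, a longhorn storage class is available for persistent volume claims.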

Conclusion

Modernizing legacy applications often involves converting them to containerized microservice-based architecture. Kubernetes provides an excellent solution for such scenarios, with its highly scalable and available container clusters.

The journey to Kubernetes-hosted, microservice-based architecture has its challenges. As you saw in this article, solutions are available to make this journey simpler.

SUSE is a pioneer in value-added tools for the Kubernetes ecosystem. SUSE Rancher is a powerful Kubernetes cluster management solution. Longhorn provides a storage add-on for Kubernetes and Harvester is the next generation of open source hyperconverged infrastructure solutions designed for modern cloud native environments.

Customizing your Application with Epinio

Thursday, 26 May, 2022

One of the best things about Kubernetes is just how absurdly flexible it is.

You, as an admin, can shape what gets deployed into whatever is best for your business, whether that’s a basic webapp with just a deployment, a service and an ingress, or something that needs all sorts of features, with sidecars and network policies wrapping the serverless service-mesh platform of the day. The power is there.

This flexibility has long been missing from the PaaS-style approach, since you tend to be locked into whatever opinions the platform builder held at the time.

With some of the features in recent releases of Epinio, we’ve found a middle ground!

You can now give your developers the ease of use, short learning curve and speed they want, while you retain the ability to shape what applications should look like in your environments!

This takes the shape of two features: custom Application Template(s), and a Service Marketplace. Let’s dive into what these are and how to use them.

Application Templates in Epinio

The way a deployment works with Epinio is that the developer pushes their code to the platform, where it’s cached in an object store, built into an image using buildpacks and pushed to a registry; Epinio then generates a values.yaml and deploys the application to the local cluster using Helm.

In past releases, all of this was configurable except the deployment itself. You could use your own object storage, buildpacks and registry but were locked into a basic application that was just a deployment, service and ingress.

We’ve introduced a new custom resource in Epinio that allows for a platform admin to set up a list of application templates that can be selected by the application developer for their application!

With this feature, you could offer a few different styles of applications for your developers to choose from while still keeping the developer’s life easy and allowing for governance of what gets deployed. For example, you could have a secure chart, a chart with an open-telemetry sidecar, a chart that deploys Akri configurations, a chart that deploys to Knative or whatever you can dream of!

So how do I set up an application template in Epinio?

Good question, I’m glad you asked (big grin)

The first thing you need is a helm chart! (Feel free to copy from our default template found here: https://github.com/epinio/helm-charts/blob/main/chart/application)

Since Epinio generates the values.yaml during the push, your chart will be deployed with values that look similar to:

epinio:
  tlsIssuer: epinio-ca
  appName: placeholder
  replicaCount: 1
  stageID: 999
  imageURL: myregistry.local/apps/epinio-app
  username: user
  routes:
  - domain: epinio-app.local
    id: epinio-app.local
    path: /
  env:
  - name: env-name
    value: env-value
  configurations:
  - config-name
  start: null

Note: The ability for developers to pass in additional values is being added soon: https://github.com/epinio/epinio/issues/1252
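
For example, a deployment template inside your custom chart might consume these generated values along the following lines. This is a minimal sketch, not the full default template, and the label keys are illustrative:

# templates/deployment.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.epinio.appName }}
spec:
  replicas: {{ .Values.epinio.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ .Values.epinio.appName }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ .Values.epinio.appName }}
    spec:
      containers:
        - name: {{ .Values.epinio.appName }}
          image: {{ .Values.epinio.imageURL }}
          env:
            {{- range .Values.epinio.env }}
            - name: {{ .name }}
              value: {{ .value | quote }}
            {{- end }}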

Once you have all your customizations done, build the chart into a .tgz file using helm package and host it somewhere accessible from the cluster (potentially in the cluster itself using Epinio?).
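
In practice, packaging and publishing could look something like this; the chart directory and URL are placeholders:

# Package the chart directory into a versioned .tgz archive
helm package ./my-custom-chart

# Optionally generate an index so the files can be served as a simple Helm repository
helm repo index . --url https://example.com/charts

# Host the resulting files on any web server reachable from the cluster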

With the chart published, you can now expose it to your developers by adding this CRD to the namespace that Epinio is installed in (“epinio” if you followed the docs):

apiVersion: application.epinio.io/v1
kind: AppChart
metadata:
  name: my-custom-chart
  namespace: epinio
spec:
  description: Epinio chart with everything I need
  helmChart: https://example.com/file.tgz
  shortDescription: Custom Application Template with Epinio

With this done, the developer can see which templates are available using:

epinio app chart list

Then when they push their code, they can pick the chart they want with:

epinio push --name myapp --app-chart my-custom-chart

Service Marketplace in Epinio

The other huge piece of the puzzle is the service marketplace functionality. We went around in circles for a bit on which services to enable or which set of operators to support. In the end, we decided that Helm was a good way to give choice.

Similar to the Application Templates, the Service Marketplace offerings are controlled via CRDs in the epinio namespace so you can easily add your own.

By default, we include charts for Redis, Postgres, MySQL and RabbitMQ for use in developer environments.

To add a helm chart into the marketplace, create a new Epinio Service object that looks like:

apiVersion: application.epinio.io/v1
kind: Service
metadata:
  name: my-new-service
  namespace: epinio
spec:
  appVersion: 0.0.1
  chart: custom-service
  chartVersion: 0.0.1
  description: |
    This is a custom chart to demo services.
  helmRepo:
    name: custom-chart
    url: https://charts.bitnami.com/bitnami
  name: custom-chart
  shortDescription: A custom service 
  values: |-
    exampleValue: {}

Since we are using Helm, you can put any Kubernetes objects in the marketplace. This is super helpful, as it means we can seamlessly tie into other operator-based solutions (such as Crossplane or KubeDB, but I’ll leave that as an exercise for the reader).

We are working toward a 1.0.0 release that includes a better onboarding flow, improved security, bringing the UI to parity with the CLI and improving documentation of common workflows. You can track our progress in the milestone: https://github.com/epinio/epinio/milestone/6

Give Epinio a try at https://epinio.io/

Deploying K3s with Ansible

Monday, 16 May, 2022

There are many different ways to run a Kubernetes cluster, from setting everything up manually to using a lightweight distribution like K3s. K3s is a Kubernetes distribution built for IoT and edge computing and is excellent for running on low-powered devices like Raspberry Pis. However, you aren’t limited to running it on low-powered hardware; it can be used for anything from a Homelab up to a Production cluster. Installing and configuring multinode clusters can be tedious, though, which is where Ansible comes in.

Ansible is an IT automation platform that allows you to utilize “playbooks” to manage the state of remote machines. It’s commonly used for managing configurations, deployments, and general automation across fleets of servers.

In this article, you will see how to set up some virtual machines (VMs) and then use Ansible to install and configure a multinode K3s cluster on these VMs.

What exactly is Ansible?

Essentially, Ansible allows you to configure tasks that tell it what the system’s desired state should be; then Ansible will leverage modules that tell it how to shift the system toward that desired state. For example, the following instruction uses the ansible.builtin.file module to tell Ansible that /etc/some_directory should be a directory:

- name: Create a directory if it does not exist
  ansible.builtin.file:
    path: /etc/some_directory
    state: directory
    mode: '0755'

If this is already the system’s state (i.e., the directory exists), this task is skipped. If the system’s state does not match this described state, the module contains logic that allows Ansible to rectify this difference (in this case, by creating the directory).
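
To actually run a task like this, you would typically wrap it in a playbook and point it at an inventory group. Here is a minimal, hypothetical sketch; the group name myservers is just an example:

# playbook.yml: a minimal playbook wrapping the task above
- hosts: myservers        # hypothetical inventory group
  become: true            # escalate privileges, since /etc is root-owned
  tasks:
    - name: Create a directory if it does not exist
      ansible.builtin.file:
        path: /etc/some_directory
        state: directory
        mode: '0755'

You would then run it with ansible-playbook playbook.yml -i <inventory>, much like the K3s playbook later in this article.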

Another key benefit of Ansible is that it carries out all of these operations via the Secure Shell Protocol (SSH), meaning you don’t need to install agent software on the remote targets. The only special software required is Ansible, running on one central device that manipulates the remote targets. If you wish to learn more about Ansible, the official documentation is quite extensive.

Deploying a K3s cluster with Ansible

Let’s get started with the tutorial! Before we jump in, there are a few prerequisites you’ll need to install or set up:

  • A hypervisor—software used to run VMs. If you do not have a preferred hypervisor, the following are solid choices:
    • Hyper-V is included in some Windows 10 and 11 installations and offers a great user experience.
    • VirtualBox is a good basic cross-platform choice.
    • Proxmox VE is an open source data center-grade virtualization platform.
  • Ansible is an automation platform from Red Hat and the tool you will use to automate the K3s deployment.
  • A text editor of choice
    • VS Code is a good option if you don’t already have a preference.

Deploying node VMs

To truly appreciate the power of Ansible, it is best to see it in action with multiple nodes. You will need to create some virtual machines (VMs) running Ubuntu Server to do this. You can get the Ubuntu Server 20.04 ISO from the official site. If you are unsure which option is best for you, pick option 2 for a manual download.

Download Ubuntu image

You will be able to use this ISO for all of your node VMs. Once the download is complete, provision some VMs using your hypervisor of choice. You will need at least two or three to get the full effect. The primary goal of using multiple VMs is to see how you can deploy different configurations to machines depending on the role you intend for them to fill. To this end, one “primary” node and one or two “replica” nodes will be more than adequate.

If you are not familiar with hypervisors and how to deploy VMs, know that the process varies from tool to tool, but the overall workflow is often quite similar; the official documentation for each of the hypervisors mentioned above walks through the details.

In terms of resource allocation for each VM, it will vary depending on the resources you have available on your host machine. Generally, for an exercise like this, the following specifications will be adequate:

  • CPU: one or two cores
  • RAM: 1GB or 2GB
  • HDD: 10GB

This tutorial will show you the VM creation process using VirtualBox since it is free and cross-platform. However, feel free to use whichever hypervisor you are most comfortable with—once the VMs are set up and online, the choice of hypervisor does not matter any further.

After installing VirtualBox, you’ll be presented with a welcome screen. To create a new VM, click New in the top right of the toolbar:

VirtualBox welcome screen

Doing so will open a new window that will prompt you to start the VM creation process by naming your VM. Name the first VM “k3s-primary”, and set its type as Linux and its version as Ubuntu (64-bit). Next, you will be prompted to allocate memory to the VM. Bear in mind that you will need to run two or three VMs, so the amount you can give will largely depend on your host machine’s specifications. If you can afford to allocate 1GB or 2GB of RAM per VM, that will be sufficient.

After you allocate memory, VirtualBox will prompt you to configure the virtual hard disk. You can generally click next and continue through each of these screens, leaving the defaults as they are. You may wish to change the size of the virtual hard disk; 10GB of disk space should be enough, so if VirtualBox tries to allocate more than this, you can safely reduce it to 10GB. Once you have navigated through all of these steps and created your VM, select your new VM from the list and click on Settings. Navigate to the Network tab and change the Attached to value to Bridged Adapter. Doing this ensures that your VM will have internet access and be accessible on your local network, which is important for Ansible to work correctly. After changing this setting, click OK to save it.

VM network settings

Once you are back on the main screen, select your VM and click Start. You will be prompted to select a start-up disk. Click on the folder icon next to the Empty selection:

Empty start-up disk

This will take you to the Optical Disk Selector. Click Add, and then navigate to the Ubuntu ISO file you downloaded and select it. Once it is selected, click Choose to confirm it:

Select optical disk

Next, click Start on the start-up disk dialog, and the VM should boot, taking you into the Ubuntu installation process. This process is relatively straightforward, and you can accept the defaults for most things. When you reach the Profile setup screen, make sure you do the following:

  • Give all the servers the same username, such as “ubuntu”, and the same password. This is important to make sure the Ansible playbook runs smoothly later.
  • Make sure that each server has a different name. If more than one machine has the same name, it will cause problems later. Suggested names are as follows:
    • k3s-primary
    • k3s-replica-1
    • k3s-replica-2

Ubuntu profile setup

The next screen is also important. The Ubuntu Server installation process lets you import SSH public keys from a GitHub profile, allowing you to connect via SSH to your newly created VM with your existing SSH key. To take advantage of this, make sure you add an SSH key to GitHub before completing this step. You can find instructions for doing so here. This is highly recommended, as although Ansible can connect to your VMs via SSH using a password, doing so requires extra configuration not covered in this tutorial. It is also generally good to use SSH keys rather than passwords for security reasons.

Ubuntu SSH setup

After this, there are a few more screens, and then the installer will finally download some updates before prompting you to reboot. Once you reboot your VM, it can be considered ready for the next part of the tutorial.

However, note that you now need to repeat these steps one or two more times to create your replica nodes and install Ubuntu Server on them.

Once you have all of your VMs created and set up, you can start automating your K3s installation with Ansible.

Installing K3s with Ansible

The easiest way to get started with K3s and Ansible is with the official playbook created by the K3s.io team. To begin, open your terminal and make a new directory to work in. Next, run the following command to clone the k3s-ansible playbook:

git clone https://github.com/k3s-io/k3s-ansible

This will create a new directory named k3s-ansible that will, in turn, contain some other files and directories. One of these directories is the inventory/ directory, which contains a sample that you can clone and modify to let Ansible know about your VMs. To do this, run the following command from within the k3s-ansible/ directory:

cp -R inventory/sample inventory/my-cluster

Next, you will need to edit inventory/my-cluster/hosts.ini to reflect the details of your node VMs correctly. Open this file and edit it so that the contents are as follows (where placeholders surrounded by angled brackets <> need to be substituted for an appropriate value):

[master]
<k3s-primary ip address>

[node]
<k3s-replica-1 ip address>
<k3s-replica-2 ip address (if you made this VM)>

[k3s_cluster:children]
master
node

You will also need to edit inventory/my-cluster/group_vars/all.yml. Specifically, the ansible_user value needs to be updated to reflect the username you set up for your VMs previously (ubuntu, if you are following along with the tutorial). After this change, the file should look something like this:

---
k3s_version: v1.22.3+k3s1
ansible_user: ubuntu
systemd_dir: /etc/systemd/system
master_ip: "{{ hostvars[groups['master'][0]]['ansible_host'] | default(groups['master'][0]) }}"
extra_server_args: ''
extra_agent_args: ''

Now you are almost ready to run the playbook, but there is one more thing to be aware of. Ubuntu asked if you wanted to import SSH keys from GitHub during the VM installation process. If you did this, you should be able to SSH into the node VMs using the SSH key present on the device you are working on. Still, it is likely that each time you do so, you will be prompted for your SSH key passphrase, which can be pretty disruptive while running a playbook against multiple remote machines. To see this in action, run the following command:

ssh ubuntu@<k3s-primary ip address>

You will likely get a message like Enter passphrase for key '/Users/<username>/.ssh/id_rsa':, which will appear every time you use this key, including when running Ansible. To avoid this prompt, you can run ssh-add, which will ask for your passphrase once and add the identity to your authentication agent. This means that Ansible won’t need to prompt you for the passphrase multiple times. If you are not comfortable leaving this identity in the authentication agent, you can run ssh-add -D after you are done with the tutorial to remove it again.
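
In practice, the flow looks something like this; the optional ping check uses the k3s_cluster group defined in the hosts.ini above to confirm that Ansible can reach every node before you run the full playbook:

# Load your key into the agent (you'll be prompted for the passphrase once)
ssh-add

# Optional sanity check: ask Ansible to "ping" every host in the inventory
ansible k3s_cluster -i inventory/my-cluster/hosts.ini -m ping

# When you're completely done with the tutorial, remove the identity again
ssh-add -D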

Once you have added your SSH key’s passphrase, you can run the following command from the k3s-ansible/ directory to run the playbook:

ansible-playbook site.yml -i inventory/my-cluster/hosts.ini -K

Note that the -K flag here will cause Ansible to prompt you for the become password, which is the password of the ubuntu user on the VM. This will be used so that Ansible can execute commands as sudo when needed.

After running the above command, Ansible will now play through the tasks it needs to run to set up your cluster. When it is done, you should see some output like this:

playbook completed

If you see this output, you should be able to SSH into your k3s-primary VM and verify that the nodes are correctly registered. To do this, first run ssh ubuntu@<k3s-primary ip address>. Then, once you are connected, run the following commands:

sudo kubectl version

This should show you the version of both the kubectl client and the underlying Kubernetes server. If you see these version numbers, it is a good sign, as it shows that the client can communicate with the API:

Kubectl version

Next, run the following command to see all the nodes in your cluster:

sudo kubectl get nodes

If all is well, you should see all of your VMs represented in this output:

Kubectl get nodes

Finally, to run a simple workload on your new cluster, you can run the following command:

sudo kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/simple-pod.yaml

This will create a new simple pod on your cluster. You can then inspect this newly created pod to see which node it is running on, like so:

sudo kubectl get pods -o wide

Specifying the output format with -o wide ensures that you will see some additional information, such as which node it is running on:

Kubectl get pods

You may have noticed that the kubectl commands above are prefixed with sudo. This isn’t usually necessary, but when following the K3s.io installation instructions, you can often run into a scenario where sudo is required. If you prefer to avoid using sudo to run your kubectl commands, there is a good resource here on how to get around this issue.
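
If you do want to avoid sudo, one common approach (shown here as a sketch, run on the k3s-primary node) is to copy the kubeconfig that K3s generates into your own user's home directory and point kubectl at it:

# K3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml (readable by root only)
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown "$(id -u):$(id -g)" ~/.kube/config
export KUBECONFIG=~/.kube/config

# kubectl should now work without sudo
kubectl get nodes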

In summary

In this tutorial, you’ve seen how to set up multiple virtual machines and then configure them into a single Kubernetes cluster using K3s and Ansible via the official K3s.io playbook. Ansible is a powerful IT automation tool that can save you a lot of time when it comes to provisioning and setting up infrastructure, and K3s is the perfect use case to demonstrate this, as manually configuring a multinode cluster can be pretty time-consuming. K3s is just one of the offerings from the team at SUSE, who specialize in business-critical Linux applications, enterprise container management, and solutions for edge computing.

Get started with K3s

Take K3s for a spin!

Stupid Simple Service Mesh: What, When, Why

Monday, 18 April, 2022

Microservices-based applications have become very popular recently, and with the rise of microservices, the concept of a Service Mesh has become a hot topic as well. Unfortunately, there are only a few articles about this concept and most of them are hard to digest.

In this blog, we will try to demystify the concept of Service Mesh using “Stupid Simple” explanations, diagrams, and examples to make this concept more transparent and accessible for everyone. In the first article, we will talk about the basic building blocks of a Service Mesh and we will implement a sample application to have a practical example of each theoretical concept. In the next articles, based on this sample app, we will touch on more advanced topics, like Service Mesh in Kubernetes, and we will talk about some more advanced Service Mesh implementations like Istio, Linkerd, etc.

To understand the concept of Service Mesh, the first step is to understand what problems it solves and how it solves them.

Software architecture has evolved a lot in a short time, from classical monolithic architecture to microservices. Although many praise microservice architecture as the holy grail of software development, it introduces some serious challenges.

Overview of the sample application

For one, a microservices-based architecture means that we have a distributed system. Every distributed system has challenges such as transparency, security, scalability, troubleshooting, and identifying the root cause of issues. In a monolithic system, we can find the root cause of a failure by tracing. But in a microservice-based system, each service can be written in different languages, so tracing is no trivial task. Another challenge is service-to-service communication. Instead of focusing on business logic, developers need to take care of service discovery, handle connection errors, detect latency, implement retry logic, etc. Applying SOLID principles on the architecture level means that these kinds of network problems should be abstracted away and not mixed with the business logic. This is why we need Service Mesh.

Ingress Controller vs. API Gateway vs. Service Mesh

As I mentioned above, we need to apply SOLID principles on an architectural level. For this, it is important to set the boundaries between Ingress Controller, API Gateway, and Service Mesh and understand each one’s role and responsibility.

On a stupid simple and oversimplified level, these are the responsibilities of each concept:

  1. Ingress Controller: allows a single IP and port to access all services from the cluster, so its main responsibilities are path mapping, routing and simple load balancing, like a reverse proxy
  2. API Gateway: aggregates and abstracts away APIs; other responsibilities are rate-limiting, authentication, security, tracing, etc. In a microservices-based application, you need a way to distribute the requests to different services, gather the responses from multiple/all microservices, and then prepare the final response to be sent to the caller. This is what an API Gateway is meant to do. It is responsible for client-to-service communication, north-south traffic.
  3. Service Mesh: responsible for service-to-service communication, east-west traffic. We’ll dig more into the concept of Service Mesh in the next section.

Service Mesh and API Gateway have overlapping functionalities, such as rate-limiting, security, service discovery, tracing, etc. but they work on different levels and solve different problems. Service Mesh is responsible for the flow of requests between services. API Gateway is responsible for the flow of requests between the client and the services, aggregating multiple services and creating and sending the final response to the client.

The main responsibility of an API gateway is to accept traffic from outside your network and distribute it internally, while the main responsibility of a service mesh is to route and manage traffic within your network. They are complementary concepts and a well-defined microservices-based system should combine them to ensure application uptime and resiliency while ensuring that your applications are easily consumable.

What Does a Service Mesh Solve?

As an oversimplified and stupid simple definition, a Service Mesh is an abstraction layer that hides away and separates networking-related logic from business logic, so that developers can focus only on implementing business logic. We implement this abstraction using a proxy that sits in front of the service and takes care of all the network-related problems. This allows the service to focus on what is really important: the business logic. In a microservice-based architecture, we have multiple services, and each service has a proxy. Together, these proxies form the Service Mesh.

As best practices suggest, the proxy and the service should be in separate containers, so each container has a single responsibility. In the world of Kubernetes, the proxy container is implemented as a sidecar: each service has a sidecar containing the proxy, so a single Pod contains two containers, the service and the sidecar. Another implementation is to use one proxy for multiple pods; in this case, the proxy can be implemented as a DaemonSet. The most common solution is using sidecars. Personally, I prefer sidecars over DaemonSets because they keep the logic of the proxy as simple as possible.
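
To make this concrete, here is a minimal, hypothetical sketch of a Pod running a service alongside an Envoy sidecar. The image names, ports and ConfigMap are placeholders, and real Service Mesh solutions usually inject the sidecar for you:

apiVersion: v1
kind: Pod
metadata:
  name: payments
  labels:
    app: payments
spec:
  containers:
    # The business-logic container: it only knows about its own port
    - name: payments
      image: registry.example.com/payments:1.0.0   # placeholder image
      ports:
        - containerPort: 8080
    # The sidecar proxy: inbound and outbound traffic is routed through it
    - name: envoy-sidecar
      image: envoyproxy/envoy:v1.22.0
      ports:
        - containerPort: 9901    # Envoy admin interface
        - containerPort: 10000   # listener that forwards to the app
      volumeMounts:
        - name: envoy-config
          mountPath: /etc/envoy
  volumes:
    # Envoy configuration mounted from a ConfigMap (not shown here)
    - name: envoy-config
      configMap:
        name: payments-envoy-config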

There are multiple Service Mesh solutions, including Istio, Linkerd, Consul, Kong, and Cilium. (We will talk about these solutions in a later article.) Let’s focus on the basics and understand the concept of Service Mesh, starting with Envoy. This is a high-performance proxy and not a complete framework or solution for Service Meshes (in this tutorial, we will build our own Service Mesh solution). Some of the Service Mesh solutions use Envoy in the background (like Istio), so before starting with these higher-level solutions, it’s a good idea to understand the low-level functioning.

Understanding Envoy

Ingress and Egress

Simple definitions:

  • Any traffic sent to the server (service) is called ingress.
  • Any traffic sent from the server (service) is called egress.

The Ingress and the Egress rules should be added to the configuration of the Envoy proxy, so the sidecar will take care of these. This means that any traffic to the service will first go to the Envoy sidecar. Then the Envoy proxy redirects the traffic to the real service. Vice-versa, any traffic from this service will go to the Envoy proxy first and Envoy resolves the destination service using Service Discovery. By intercepting the inbound and outbound traffic, Envoy can implement service discovery, circuit breaker, rate limiting, etc.

The Structure of an Envoy Proxy Configuration File

Every Envoy configuration file has the following components:

  1. Listeners: where we configure the IP and the Port number that the Envoy proxy listens to
  2. Routes: the received request will be routed to a cluster based on rules. For example, we can have path matching rules and prefix rewrite rules to select the service that should handle a request for a specific path/subdomain. Actually, the route is just another type of filter, which is mandatory. Otherwise, the proxy doesn’t know where to route our request.
  3. Filters: Filters can be chained and are used to enforce different rules, such as rate-limiting, route mutation, manipulation of the requests, etc.
  4. Clusters: a cluster manages a group of logically similar services (a cluster has a similar responsibility to a Service in Kubernetes; it defines the way a service can be accessed) and acts as a load balancer between those services.
  5. Service/Host: the concrete service that handles and responds to the request

Here is an example of an Envoy configuration file:

---
admin:
  access_log_path: "/tmp/admin_access.log"
  address:
    socket_address:
      address: "127.0.0.1"
      port_value: 9901
static_resources:
  listeners:
    - name: "http_listener"
      address:
        socket_address:
          address: "0.0.0.0"
          port_value: 80
      filter_chains:
        - filters:
            - name: "envoy.http_connection_manager"
              config:
                stat_prefix: "ingress"
                codec_type: "AUTO"
                generate_request_id: true
                route_config:
                  name: "local_route"
                  virtual_hosts:
                    - name: "http-route"
                      domains:
                        - "*"
                      routes:
                        - match:
                            prefix: "/nestjs"
                          route:
                            prefix_rewrite: "/"
                            cluster: "nestjs"
                        - match:
                            prefix: "/nodejs"
                          route:
                            prefix_rewrite: "/"
                            cluster: "nodejs"
                        - match:
                            path: "/"
                          route:
                            cluster: "base"
                http_filters:
                  - name: "envoy.router"
                    config: {}
  clusters:
    - name: "base"
      connect_timeout: "0.25s"
      type: "strict_dns"
      lb_policy: "ROUND_ROBIN"
      hosts:
        - socket_address:
            address: "service_1_envoy"
            port_value: 8786
        - socket_address:
            address: "service_2_envoy"
            port_value: 8789
    - name: "nodejs"
      connect_timeout: "0.25s"
      type: "strict_dns"
      lb_policy: "ROUND_ROBIN"
      hosts:
        - socket_address:
            address: "service_4_envoy"
            port_value: 8792
    - name: "nestjs"
      connect_timeout: "0.25s"
      type: "strict_dns"
      lb_policy: "ROUND_ROBIN"
      hosts:
        - socket_address:
            address: "service_5_envoy"
            port_value: 8793

The configuration file above translates into the following diagram:

This diagram doesn’t include the configuration of every service, but it is enough to understand the basics. You can find this code in my Stupid Simple Service Mesh repository.

As you can see, in the listeners section we define the Listener for our Envoy proxy. Because we are working in Docker, the host is 0.0.0.0.

After configuring the listener, we define the Filters. For simplicity, we use only the basic filters to match routes and rewrite the target routes. In this case, if the path prefix is “/nodejs,” the router will choose the nodejs cluster and the URL will be rewritten to “host:port/” (this way the request sent to the concrete service won’t contain the /nodejs part). The logic is the same for “/nestjs.” If there is no matching prefix in the request, it is routed to the cluster called base without any prefix rewrite.

In the clusters section, we define the clusters. The base cluster has two services, and the chosen load balancing strategy is round-robin. Other available strategies can be found here. The other two clusters (nodejs and nestjs) are simpler, with only a single service each.

The complete code for this tutorial can be found in my Stupid Simple Service Mesh git repository.

Conclusion

In this article, we learned about the basic concepts of Service Mesh. In the first part, we understood the responsibilities and differences between the Ingress Controller, API Gateway, and Service Mesh. Then we talked about what Service Mesh is and what problems it solves. In the second part, we introduced Envoy, a performant and popular proxy, which we used to build our Service Mesh example. We learned about the different parts of the Envoy configuration files and created a Service Mesh with five example services and a front-facing edge proxy.

In the next article, we will look at how to use Service Mesh with Kubernetes and will create an example project that can be used as a starting point in any project using microservices.

There is another ongoing “Stupid Simple AI” series. The first two articles can be found here: SVM and Kernel SVM and KNN in Python.

Want to Learn More from our Stupid Simple Series?

Read our eBook: Stupid Simple Kubernetes. Download it here!