Stupid Simple Kubernetes: Service Mesh

Wednesday, 16 February, 2022

We covered the what, when and why of Service Mesh in a previous post. Now I’d like to talk about why a service mesh is critical in Kubernetes. 

To understand the importance of using service meshes when working with microservices-based applications, let’s start with a story.  

Suppose that you are working on a big microservices-based banking application, where any mistake can have serious impacts. One day the development team receives a feature request to add a rating functionality to the application. The solution is obvious: create a new microservice that can handle user ratings. Now comes the hard part. The team must come up with a reasonable time estimate to add this new service.  

The team estimates that the rating system can be finished in 4 sprints. The manager is angry. He cannot understand why it is so hard to add a simple rating functionality to the app.  

To understand the estimate, let’s understand what we need to do in order to have a functional rating microservice. The CRUD (Create, Read, Update, Delete) part is easy — just simple coding. But adding this new project to our microservices-based application is not trivial. First, we have to implement authentication and authorization, then we need some kind of tracing to understand what is happening in our application. Because the network is not reliable (unstable connections can result in data loss), we have to think about solutions for retries, circuit breakers, timeouts, etc.  

We also need to think about deployment strategies. Maybe we want to use shadow deployments to test our code in production without impacting the users. Maybe we want to add A/B testing capabilities or canary deployments. So even if we create just a simple microservice, there are lots of cross-cutting concerns that we have to keep in mind.  

Sometimes it is much easier to add new functionality to an existing service than create a new service and add it to our infrastructure. It can take a lot of time to deploy a new service, add authentication and authorization, configure tracing, create CI/CD pipelines, implement retry mechanisms and more. But adding the new feature to an existing service will make the service too big. It will also break the rule of single responsibility, and like many existing microservices projects, it will be transformed into a set of connected macroservices or monoliths. 

We call this the cross-cutting concerns burden — the fact that in each microservice you must reimplement the cross-cutting concerns, such as authentication, authorization, retry mechanisms and rate limiting. 

What is the solution to this burden? Is there a way to implement all these concerns once and inject them into every microservice, so the development team can focus on producing business value? The answer is a service mesh — in this tutorial, Istio.  

Set Up a Service Mesh in Kubernetes Using Istio  

Istio solves these issues using sidecars, which it automatically injects into your pods. Your services won’t communicate directly with each other — they’ll communicate through sidecars. The sidecars will handle all the cross-cutting concerns. You define the rules once, and these rules will be injected automatically into all of your pods.   

Sample Application 

Let’s put this idea into practice. We’ll build a sample application to explain the basic functionalities and structure of Istio.  

In the previous post, we created a service mesh by hand, using envoy proxies. In this tutorial, we will use the same services, but we will configure our Service Mesh using Istio and Kubernetes.  

The image below depicts the application architecture.  

 

To follow this tutorial, you will need:

  1. Kubernetes (we used version 1.21.3 in this tutorial) 
  2. Helm (we used v2) 
  3. Istio (we used 1.1.17; see the setup tutorial) 
  4. Minikube, K3s or a Kubernetes cluster enabled in Docker 

Git Repository 

My Stupid Simple Service Mesh in Kubernetes repository contains all the scripts for this tutorial. Based on these scripts you can configure any project. 

Running Our Microservices-Based Project Using Istio and Kubernetes 

As I mentioned above, step one is to configure Istio to inject the sidecars into every pod in a given namespace. We will use the default namespace. This can be done using the following command: 

kubectl label namespace default istio-injection=enabled 

In the second step, we navigate into the /kubernetes folder from the downloaded repository, and we apply the configuration files for our services: 

kubectl apply -f service1.yaml 
kubectl apply -f service2.yaml 
kubectl apply -f service3.yaml 

After these steps, we will have the green part up and running: 

 

For now, we can’t access our services from the browser. In the next step, we will configure the Istio Ingress and Gateway to allow traffic from outside the cluster. 

The gateway configuration is as follows: 

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

Using the selector istio: ingressgateway, we specify that we would like to use the default ingress gateway controller, which was automatically added when we installed Istio. As you can see, the gateway allows traffic on port 80, but it doesn’t know where to route the requests. To define the routes, we need a so-called VirtualService, which is another custom Kubernetes resource defined by Istio. 

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: sssm-virtual-services
spec:
  hosts:
  - "*"
  gateways:
  - http-gateway
  http:
  - match:
    - uri:
        prefix: /service1
    route:
    - destination:
        host: service1
        port:
          number: 80
  - match:
    - uri:
        prefix: /service2
    route:
    - destination:
        host: service2
        port:
          number: 80

The code above shows an example configuration for the VirtualService. In the gateways field, we specify that the virtual service applies to requests coming from the gateway called http-gateway, and in the http section we define the rules that match requests to the services they should be routed to. Every request with the /service1 prefix will be routed to the service1 container, while every request with the /service2 prefix will be routed to the service2 container. 

At this point, we have a working application. Until now there is nothing special about Istio — you can get the same architecture with a simple Kubernetes Ingress controller, without the burden of sidecars and gateway configuration.  

Now let’s see what we can do using Istio rules. 

Security in Istio 

Without Istio, every microservice must implement authentication and authorization. Istio removes the responsibility of adding authentication and authorization from the main container (so developers can focus on providing business value) and moves these responsibilities into its sidecars. The sidecars can be configured to require an access token on each call, making sure that only authenticated requests can reach our services. 

apiVersion: authentication.istio.io/v1beta1
kind: Policy
metadata:
  name: auth-policy
spec:
  targets:
  - name: service1
  - name: service2
  - name: service3
  - name: service4
  - name: service5
  origins:
  - jwt:
      issuer: "{YOUR_DOMAIN}"
      jwksUri: "{YOUR_JWT_URI}"
  principalBinding: USE_ORIGIN

As an identity and access management server, you can use Auth0, Okta or other OAuth providers. You can learn more about authentication and authorization using Auth0 with Istio in this article. 

Traffic Management Using Destination Rules 

Istio’s official documentation says that the DestinationRule “defines policies that apply to traffic intended for a service after routing has occurred.” This means that the DestinationRule resource is situated somewhere between the Ingress controller and our services. Using DestinationRules, we can define policies for load balancing, rate limiting or even outlier detection to detect unhealthy hosts.  
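As a rough illustration (not part of the sample repository), a DestinationRule for service1 could combine a load-balancing policy, a connection pool limit and outlier detection in its traffic policy. The resource name and thresholds below are assumptions, and the exact outlier-detection field names can vary slightly between Istio versions:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: service1
spec:
  host: service1
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN          # distribute requests evenly across healthy hosts
    connectionPool:
      http:
        http1MaxPendingRequests: 100   # limit queued requests per host
    outlierDetection:
      consecutive5xxErrors: 5      # eject a host after five consecutive server errors
      interval: 30s                # how often hosts are evaluated
      baseEjectionTime: 60s        # how long an unhealthy host stays ejected

With a policy like this, hosts that keep returning errors are temporarily removed from the load-balancing pool, which is how Istio approximates a circuit breaker for the service.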

Shadowing 

Shadowing, also called mirroring, is useful when you want to test your changes in production silently, without affecting end users. Every request sent to the main service is mirrored (a copy is sent) to the secondary service that you want to test. 

Shadowing is easily achieved by defining a destination rule using subsets and a virtual service defining the mirroring route.  

The destination rule will be defined as follows: 

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: service2
spec:
  host: service2
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2

As we can see above, we defined two subsets for the two versions.  

Now we define the virtual service with mirroring configuration, like in the script below: 

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service2
spec:
  hosts:
  - service2
  http:
  - route:
    - destination:
        host: service2
        subset: v1
    mirror:
      host: service2
      subset: v2

In this virtual service, we defined the main destination route for service2 version v1. The mirroring service will be the same service, but with the v2 version tag. This way the end user interacts with the v1 service, while each request is also sent to the v2 service for testing. 

Traffic Splitting 

Traffic splitting is a technique used to test a new version of a service by letting only a small subset of users interact with it. This way, if there is a bug in the new version, only a small subset of end users will be affected.  

This can be achieved by modifying our virtual service as follows: 

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service2
spec:
  hosts:
  - service2
  http:
  - route:
    - destination:
        host: service2
        subset: v1
      weight: 90
    - destination:
        host: service2
        subset: v2
      weight: 10

The most important part of the script is the weight field, which defines the percentage of the requests that will reach that specific service instance. In our case, 90 percent of the requests will go to the v1 service, while only 10 percent will go to the v2 service. 

Canary Deployments 

In canary deployments, newer versions of services are incrementally rolled out to users to minimize the risk and impact of any bugs introduced by the newer version. 

This can be achieved by gradually decreasing the weight of the old version while increasing the weight of the new version. 
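For example, an intermediate step of the canary rollout could split traffic evenly before shifting it further toward v2. The fragment below is only a sketch of the http section of the virtual service at that point in the rollout:

  http:
  - route:
    - destination:
        host: service2
        subset: v1
      weight: 50              # half of the traffic still goes to the old version
    - destination:
        host: service2
        subset: v2
      weight: 50              # half of the traffic is shifted to the new version

Repeating this adjustment (for example 90/10, 50/50, 10/90, 0/100) gradually promotes v2 while keeping an easy rollback path: setting the v2 weight back to 0 restores the old behavior.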

A/B Testing 

This technique is used when we have two or more different user interfaces and we would like to test which one offers a better user experience. We deploy all the different versions and we collect metrics about the user interaction. A/B testing can be configured using a load balancer based on consistent hashing or by using subsets. 

In the first approach, we define the load balancer like in the following script: 

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: service2
spec:
  host: service2
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpHeaderName: version

As you can see, the consistent hashing is based on the version tag, so this tag must be added to our service called “service2”, like this (in the repository you will find two files called service2_v1 and service2_v2 for the two different versions that we use): 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: service2-v2
  labels:
    app: service2
spec:
  selector:
    matchLabels:
      app: service2
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: service2
        version: v2
    spec:
      containers:
      - image: zoliczako/sssm-service2:1.0.0
        imagePullPolicy: Always
        name: service2
        ports:
        - containerPort: 5002
        resources:
          limits:
            memory: "256Mi"
            cpu: "500m"

The most important part to notice is spec -> template -> metadata -> labels -> version: v2. The other deployment has the version: v1 label. 

The other solution is based on subsets. 
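A minimal sketch of the subset-based approach could route users who send a specific header value to the v2 subset and everyone else to v1, reusing the subsets defined earlier in the destination rule. The header name used here is an assumption for illustration:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service2
spec:
  hosts:
  - service2
  http:
  - match:
    - headers:
        x-experiment-group:        # hypothetical header set by the frontend
          exact: "B"
    route:
    - destination:
        host: service2
        subset: v2                 # group B sees the new user interface
  - route:
    - destination:
        host: service2
        subset: v1                 # everyone else sees the current user interface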

Retry Management 

Using Istio, we can easily define the maximum number of attempts to connect to a service if the initial attempt fails (for example, in case of an overloaded service or a network error). 

The retry strategy can be defined by adding the following lines to the end of our virtual service: 

retries:   
    attempts: 5 
    perTryTimeout: 10s 

With this configuration, service2 will be retried up to five times in case of failure, and each attempt will time out after 10 seconds. 
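For context, here is a sketch of how the retries block fits into the service2 virtual service defined earlier (shown without the mirroring configuration; the retries block sits next to the route inside the http entry):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service2
spec:
  hosts:
  - service2
  http:
  - route:
    - destination:
        host: service2
        subset: v1
    retries:
      attempts: 5            # retry a failed request up to five times
      perTryTimeout: 10s     # each attempt times out after 10 seconds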

Learn more about traffic management in this article. You’ll find a great workshop to configure an end-to-end service mesh using Istio here. 

Conclusion 

In this chapter, we learned how to set up and configure a service mesh in Kubernetes using Istio. First, we configured an ingress controller and gateway and then we learned about traffic management using destination rules and virtual services.  

Want to Learn More from our Stupid Simple Series?

Read our eBook: Stupid Simple Kubernetes. Download it here!

Scale Your Infrastructure with Cloud Native Technology

Wednesday, 16 February, 2022

When a business is growing rapidly, the need to scale its processes is obvious. If your initial infrastructure wasn’t designed with scalability in mind, growing it can be quite painful. The common tactic in this case is to transition to a cloud native architecture.

In this post, we will talk about what you need to know when you’re scaling up with the cloud so that you can weigh the pros and cons and make an informed decision. 

What is Cloud Technology?

Cloud computing is the on-demand delivery of IT resources—applications, storage, databases, networking and more—over the Internet (“the cloud”). It has quickly become popular because it allows enterprises to expand without extra work to manage their resources. Cloud service providers can provide you with as much storage space as you need, regardless of how big the organization is. Cloud native computing is a programming approach that is designed to take advantage of the cloud computing model. It uses open source software that supports its three key elements: containerization, container orchestration and microservices.

Why Do You Need the Cloud in Your Organization? 

In 2021, 94% of companies used cloud technology in some capacity. This huge popularity can be attributed to several reasons:

Convenience

As we’ve already mentioned, scalability is one of the main advantages that make businesses transition to this model. With on-premise storage, you have to purchase new equipment, set up servers and even expand your team in the case of drastic growth. But with the cloud, you only need to click a couple of buttons to expand your cloud storage size and make a payment, which is, of course, much simpler.

Flexibility

Cloud native architecture makes your company more flexible and responsive to the needs of both clients and employees. Your employees can enjoy the freedom of working from any place on their own devices. Having a collaborative space is rated by both business owners and employees as very important. 

Being able to access and edit files in the cloud easily is also crucial when working with clients. Your company and clients can build an efficient working relationship regardless of the geographic location.

Cost

Data that companies need to store accumulates quickly, fueled by new types of workloads. However, your costs can’t grow at the same pace.

Cloud services allow you to spend more responsibly; necessary IT resources can be rented for as much time as you need and easily canceled. Companies that work in industries facing sharp seasonal increases in the load on information systems especially benefit from the cloud.

Types of Cloud Native Solutions

Cloud native solutions is an umbrella term for different services. You can choose the model that works best for you. 

Platform as a Service (PaaS)

Platform as a service is a cloud environment that contains everything you need to support the full lifecycle of cloud applications. You avoid the complexities and costs associated with hardware and software setup.

Infrastructure as a Service (IaaS)

Infrastructure as a service enables companies to rent servers and data centers instead of building up their own from zero. You get an all-inclusive solution so that you can start scaling your business processes in no time. However, the implementation of IaaS can result in a large overhead.

Software as a Service (SaaS)

In this model, your applications run on remote computers “in the cloud.” These servers are owned and maintained by other companies. The connection between these computers and users’ computers happens via the internet, usually using a Web browser.

Cloud Deployment Models: Public vs. Private

Cloud comes in many types that you can use based on your business needs: public cloud, private cloud, hybrid cloud, and multi-cloud. Let’s find out which one fits your organization.

Public Cloud

Public clouds are run by companies that offer fast access to low-cost computing resources over the public network. With public cloud services, users do not need to purchase hardware, software or the underlying infrastructure; these are owned and managed by the service provider.

Private Cloud

A private cloud is an infrastructure for one organization only, managed internally or by third parties, and located on or off the organization’s premises. Private clouds can take advantage of public cloud environments while ensuring greater control over resources and avoiding the problems associated with multi-tenancy.

Hybrid Cloud

In a hybrid cloud, a private cloud is used as the foundation, combined with strategic integration and public cloud services. Most companies with private clouds will eventually move to workload management across multiple data centers, private clouds, and public clouds — that is, they will move to hybrid clouds.

Multi-Cloud

Many organizations adopt various cloud services to drive innovation and increase business agility, including generating new revenue streams, adding products and services, and increasing profits. With their wide range of potential benefits, multi-cloud environments are becoming essential to survival and success in the digital era.

Cloud Services as Business Tools

Some companies need the cloud more than others. Industries that can greatly benefit from cloud adoption are retail, insurance, and hospitality. 

Using cloud resources, companies in these industries organize backup data processing centers (RDCs) and ensure the necessary infrastructure for creating and debugging applications, storing archives, etc.

However, any company can benefit from cloud adoption, especially if your employees work collaboratively with documents, files, and other types of content. Small and medium-sized businesses are increasingly interested in platform services, such as cloud database management systems, and large companies organize information storage from disparate sources in the cloud.

How to Make Transformation Painless

Before you transform your processes:

  • Start with the education of your team.

  • Talk to your teammates about how moving to the cloud will help them perform daily tasks more easily. Your colleagues might not immediately understand that cloud solutions provide better collaboration or higher security options.

  • Ensure that they have the necessary resources to explore and learn about new tools.

Many cloud service providers, such as Amazon, provide coaching. Depending on your resources, you can also hire new team members who already have the necessary competencies to facilitate the transition. Just remember that to be painless, cloud migration should happen in an organized, step-by-step way.

There can be quite a few options for cloud migration. At first, you can migrate only part of your workload to the cloud while combining it with the on-premises approach. 

Cloud Transformation Stages

Now let’s talk a bit more about cloud transformation stages. They may differ based on the company’s needs and can be carried out independently or with the involvement of external experts for consultations. 

Developing a Migration Strategy

The first step to a successful migration to the cloud is to develop a business plan where you define the needs of your business, set up goals, and agree on technical aspects. Usually, you perform one or more brainstorming sessions with your internal team and then perfect the model you have with your third-party consultants or service provider. You need to decide which type of cloud product you prefer and choose your deployment method.

Auditing the Company’s Existing IT Infrastructure

To add details to your cloud adoption strategy, you need to audit the company’s infrastructure. Application rationalization is the process of going through all the applications used in the company to determine which to keep and which to let go of. Most companies are doing just that before any efforts to move to the cloud. During this stage, you identify the current bottlenecks that should be solved with the adoption of cloud native architecture. 

Drawing a Migration Roadmap

Together with your team or service provider, you develop a migration roadmap. It should contain the main milestones; for example, it can describe by what time different departments of your company should migrate to the cloud. You might connect with several cloud services providers to negotiate the best conditions for yourself at this stage. 

Migration

Migration to the cloud can take up to several months. After migration, you and your employees will go through a transition period as you adapt to the new work environment.

Optimization

Difficulties (including technical ones) can arise at every stage. Any migration involves some downtime, which needs to be planned so that the business is not harmed. Often there are problems associated with non-standard infrastructure, or there is a need to implement additional solutions. During the optimization stage, you identify the problems that need to be fixed and develop a strategy to address them.

Cloud migration can seem like a tedious process at first, but the benefits it provides to businesses are worth it. If you choose a cloud product based on your business needs, prepare a long-lasting implementation strategy and dedicate enough time to auditing and optimization, you will be pleasantly surprised by the transformation of your processes.

Summing up

Many companies are now transitioning to cloud native technology to scale their infrastructure because it’s more flexible, convenient, and allows cost reduction. Your team can choose from different types of cloud depending on your priorities, whether it be on-premise cloud or IaaS.

Cloud native technology transformation will help you scale your infrastructure and expand your business globally. If you are searching for ways to make your company more flexible to meet both the needs of your employees and your clients, cloud migration might be the best choice for you. 

Join the Conversation!

What’s your cloud transformation story? Join the SUSE & Rancher Community, where you’ll find resources to support you in your cloud native journey, from introductory and advanced courses to like-minded peers ready to offer support.

Kubewarden: Deep Dive into Policy Logging    

Monday, 22 November, 2021
Policies are regular programs. As such, they often need to log information. In general, we are used to making our programs log into standard output (stdout) and standard error (stderr) outputs.

However, policies run in a confined WebAssembly environment. For this mechanism to work per usual, Kubewarden would need to set up the runtime environment so the policy can write to stdout and stderr file descriptors. Upon completion, Kubewarden can check them – or stream log messages as they pop up.

Given that Kubewarden uses waPC to allow intercommunication between the guest (the policy) and the host (Kubewarden – the policy-server, or kwctl if we are running policies manually), we have extended our language SDKs so that they can log messages by using waPC internally.

Kubewarden has defined a contract between policies (guests) and the host (Kubewarden) for performing policy settings validation, policy validation, policy mutation, and logging.

The waPC interface used for logging is a contract because once you have built a policy, it should be possible to run it in future Kubewarden versions. In this sense, Kubewarden keeps this contract behind the SDK of your preferred language, so you don’t have to deal with the details of how logging is implemented in Kubewarden. You simply use the logging library of choice for the language you are working with.

Let’s look at how to take advantage of logging with Kubewarden in specific languages!

For Policy Authors

Go

We are going to use the Go policy template as a starting point.

Our Go SDK provides integration with the onelog library. When our policy is built for the WebAssembly target, it will send the logs to the host through waPC. Otherwise, it will just print them on stderr – but this is only relevant if you run your policy outside a Kubewarden runtime environment.

One of the first things our policy does on its main.go file is to initialize the logger:

var (
    logWriter = kubewarden.KubewardenLogWriter{}
    logger    = onelog.New(
        &logWriter,
        onelog.ALL, // shortcut for onelog.DEBUG|onelog.INFO|onelog.WARN|onelog.ERROR|onelog.FATAL
    )
)

We are then able to use the onelog API to produce log messages. We could, for example, perform structured logging at the debug level:

logger.DebugWithFields("validating object", func(e onelog.Entry) {
    e.String("name", gjson.GetBytes(payload, "request.object.metadata.name").String())
    e.String("namespace", gjson.GetBytes(payload, "request.object.metadata.namespace").String())
})

Or, with info level:

logger.InfoWithFields("validating object", func(e onelog.Entry) {
    e.String("name", gjson.GetBytes(payload, "request.object.metadata.name").String())
    e.String("namespace", gjson.GetBytes(payload, "request.object.metadata.namespace").String())
})

What happens under the covers is that our Go SDK sends every log event to the kubewarden host through waPC.

Rust

Let’s use the Rust policy template as our guide.

Our Rust SDK implements an integration with the slog crate. This crate exposes the concept of drains, so we have to define a global drain that we will use throughout our policy code:

use kubewarden::logging;
use slog::{o, Logger};
lazy_static! {
    static ref LOG_DRAIN: Logger = Logger::root(
        logging::KubewardenDrain::new(),
        o!("some-key" => "some-value") // This key value will be shared by all logging events that use
                                       // this logger
    );
}

Then, we can use the macros provided by slog to log at different levels:

use slog::{crit, debug, error, info, trace, warn};

Let’s log an info-level message:

info!(
    LOG_DRAIN,
    "rejecting resource";
    "resource_name" => &resource_name
);

As with the Go SDK implementation, our Rust implementation of the slog drain sends these logging events to the host by using waPC.

You can read more about slog here.

Swift

We will be looking at the Swift policy template for this example.

As with the Go and Rust SDKs, the Swift SDK is instrumented to use Swift’s LogHandler from the swift-log project, so our policy only has to initialize it. In our Sources/Policy/main.swift file:

import kubewardenSdk
import Logging

LoggingSystem.bootstrap(PolicyLogHandler.init)

Then, in our policy business logic, under Sources/BusinessLogic/validate.swift we can log with different levels:

import Logging

public func validate(payload: String) -> String {
    // ...

    logger.info("validating object",
        metadata: [
            "some-key": "some-value",
        ])

    // ...
}

Following the same strategy as the Go and Rust SDKs, the Swift SDK can push log events to the host through waPC.

For Cluster Administrators

Being able to log from within a policy is half of the story. Then, we have to be able to read and potentially collect these logs.

As we have seen, Kubewarden policies support structured logging that is then forwarded to the component running the policy. Usually, this is kwctl if you are executing the policy in a manual fashion, or policy-server if the policy is running in a Kubernetes environment.

Both kwctl and policy-server use the tracing crate to produce log events, either the events produced by the application itself or by policies running in WebAssembly runtime environments.

kwctl

The kwctl CLI tool takes a very straightforward approach to logging from policies: it will print them to the standard error file descriptor.

policy-server

The policy-server supports different log formats: json, text and otlp.

otlp? I hear you ask. It stands for OpenTelemetry Protocol. We will look into that in a bit.

If the policy-server is run with the --log-fmt argument set to json or text, the output will be printed to the standard error file descriptor in JSON or plain text formats. These messages can be read using kubectl logs <policy-server-pod>.

If --log-fmt is set to otlp, the policy-server will use OpenTelemetry to report logs and traces.

OpenTelemetry

Kubewarden is instrumented with OpenTelemetry, so it’s possible for the policy-server to send trace events to an OpenTelemetry collector by using the OpenTelemetry Protocol (otlp).

Our official Kubewarden Helm Chart has certain values that allow you to deploy Kubewarden with OpenTelemetry support, reporting logs and traces to, for example, a Jaeger instance:

telemetry:
  enabled: True
  tracing:
    jaeger:
      endpoint: "all-in-one-collector.jaeger.svc.cluster.local:14250"

This functionality closes the gap on logging/tracing, given the freedom that the OpenTelemetry collector provides to us regarding flexibility of what to do with these logs and traces.

You can read more about Kubewarden’s integration with OpenTelemetry in our documentation.

But this is a big enough topic on its own and worth a future blog post. Stay logged!


Is Cloud Native Development Worth It?    

Thursday, 18 November, 2021
The ‘digital transformation’ revolution across industries enables businesses to develop and deploy applications faster and simplify the management of such applications in a cloud environment. These applications are designed to embrace new technological changes with flexibility.

The idea behind cloud native app development is to design applications that leverage the power of the cloud, take advantage of its ability to scale, and quickly recover in the event of infrastructure failure. Developers and architects are increasingly using a set of tools and design principles to support the development of modern applications that run on public, private, and hybrid cloud environments.

Cloud native applications are developed based on microservices architecture. At the core of the application’s architecture, small software modules, often known as microservices, are designed to execute different functions independently. This enables developers to make changes to a single microservice without affecting the entire application. Ultimately, this leads to a more flexible and faster application delivery adaptable to the cloud architecture.

Frequent changes and updates made to the infrastructure are possible thanks to containerization, virtualization, and several other aspects constituting the entire application development being cloud native. But the real question is, is cloud native application development worth it? Are there actual benefits achieved when enterprises adopt cloud native development strategies over the legacy technology infrastructure approach? In this article, we’ll dive deeper to compare the two.

Should You Adopt a Cloud Native over Legacy Application Development Approach?

Cloud computing is becoming more popular among enterprises offering their technology solutions online. More tech-savvy enterprises are deploying game-changing technology solutions, and cloud native applications are helping them stay ahead of the competition. Here are some of the major feature comparisons of the two.

Speed

While customers operate in a fast-paced, innovative environment, frequent changes and improvements to the infrastructure are necessary to keep up with their expectations. To keep up with these developments, enterprises must have the proper structure and policies to conveniently improve or bring new products to market without compromising security and quality.

Applications built to embrace cloud native technology enjoy the speed at which their improvements are implemented in the production environment, thanks to the following features.

Microservices

Cloud native applications are built on microservices architecture. The application is broken down into a series of independent modules or services, with each service using an appropriate technology stack and its own data. Communication between modules is often done over APIs and message brokers.

Microservices make it possible to frequently improve the code and add new features and functionality without interfering with the entire application infrastructure. Microservices’ isolated nature makes it easier for new developers on the team to comprehend the code base and contribute faster. This approach facilitates the speed and flexibility with which improvements are made to the infrastructure. In comparison, an application built on a monolithic architecture would see new features and enhancements pushed to production more slowly. Monolithic applications are complex and tightly coupled, meaning slight code changes must be harmonized to avoid failures. As a result, this slows down the deployment process.

CI/CD Automation Concepts

The speed at which applications are developed, deployed, and managed has primarily been attributed to adopting Continuous Integration and Continuous Delivery (CI/CD).

New code changes move through an automated checklist of build and test steps in a CI/CD pipeline, verifying that application standards are met before the changes are pushed to a production environment.

When implemented on cloud native applications architecture, CI/CD streamlines the entire development and deployment phases, shortening the time in which the new features are delivered to production.

Implementing CI/CD highly improves productivity in organizations to everyone’s benefit. Automated CI/CD pipelines make deployments predictable, freeing developers from repetitive tasks to focus on higher-value tasks.
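As a minimal sketch of such a pipeline (the use of GitHub Actions, a Go code base and the registry URL are assumptions for illustration, not part of the article), a workflow could run the tests and publish a container image on every push:

# Hypothetical CI pipeline: test, build and push a container image on every push to main
name: ci
on:
  push:
    branches: [main]
jobs:
  build-test-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5          # assumes the service is written in Go
        with:
          go-version: '1.22'
      - name: Run unit tests
        run: go test ./...
      - name: Build container image
        run: docker build -t registry.example.com/service1:${{ github.sha }} .
      - name: Push container image         # a registry login step would normally precede this
        run: docker push registry.example.com/service1:${{ github.sha }}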

On-Demand Infrastructure Scaling 

Enterprises should opt for cloud native architecture over traditional application development approaches to easily provision computing resources to their infrastructure on demand.

Rather than having IT support applications based on estimates of what infrastructure resources are needed, the cloud native approach promotes automated provisioning of computing resources on demand.

This approach helps applications run smoothly by continuously monitoring the health of your infrastructure for workloads that would otherwise fail.

The cloud native development approach is based on orchestration technology that provides developers insights and control to scale the infrastructure to the organization’s liking. Let’s look at how the following features help achieve infrastructure scaling.

Containerization

Cloud native applications are built based on container technology where microservices, operating system libraries, and dependencies are bundled together to create single lightweight executables called container images.

These container images are stored in an online registry catalog for easy access by the runtime environment and by developers making updates to them.

Microservices deployed as containers should be able to scale in and out, depending on the load spikes.

Containerization promotes portability by ensuring the executable packaging is uniform and runs consistently across the developer’s local and deployment environments.

Orchestration

Let’s talk orchestration in cloud native application development. Orchestration automates deploying, managing, and scaling microservice-based applications in containers.

Container orchestration tools read user-created manifests (YAML or JSON files) that describe the desired state of your application. Once your application is deployed, the orchestration tool uses the defined specifications to manage the containers throughout their lifecycle.
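For illustration, a desired-state manifest could look like the hypothetical Kubernetes Deployment below, which asks the orchestrator to keep three replicas of a service running (the names and image are made up for this sketch):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 3                    # desired state: keep three identical instances running
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
      - name: orders
        image: registry.example.com/orders:1.2.0   # hypothetical image
        ports:
        - containerPort: 8080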

Auto-Scaling

Automating cloud native workflows ensures that the infrastructure automatically self-provisions itself when in need of resources. Health checks and auto-healing features are implemented in the infrastructure when under development to ensure that the infrastructure runs smoothly without manual intervention.

You are less likely to encounter service downtime because of this. Your infrastructure is automatically set to detect an increase in workloads that would otherwise result in failure and to scale out to additional machines automatically.
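In Kubernetes, for example, this behavior can be expressed with a HorizontalPodAutoscaler. The sketch below (with assumed names and thresholds) scales the hypothetical orders Deployment between 2 and 10 replicas based on CPU utilization:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: orders
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: orders
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add replicas when average CPU usage exceeds 70 percent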

Optimized Cost of Operation

Developing cloud native applications eliminates the need for hardware data centers that would otherwise sit idle at any given point. The cloud native architecture enables a pay-per-use service model where organizations only pay for the services they need to support their infrastructure.

Opting for a cloud native approach over a traditional legacy system optimizes the cost incurred that would otherwise go toward maintenance. These costs appear in areas such as scheduled security improvements, database maintenance, and managing frequent downtimes. This usually becomes a burden for the IT department and can be partially solved by migrating to the cloud.

Applications developed to leverage the cloud result in optimized costs allocated to infrastructure management while maximizing efficiency.

Ease of Management

Cloud native service providers have built-in features to manage and monitor your infrastructure effortlessly. A good example, in this case, is serverless platforms like AWS Lambda and  Azure Functions. These platforms help developers manage their workflows by providing an execution environment and managing the infrastructure’s dependencies.

This gets rid of uncertainty in dependency versions and the configuration settings required to run the infrastructure. Developing applications that run on legacy systems requires developers to update and maintain the dependencies manually. Eventually, this becomes a complicated practice with no consistency. Instead, the cloud native approach makes collaborating easier without having the “this application works on my system but fails on another machine” discussion.

Also, since the application is divided into smaller, manageable microservices, developers can easily focus on specific units without worrying about interactions between them.

Challenges

Unfortunately, there are challenges to ramping up users to adopt the new technology, especially for enterprises with long-standing legacy applications. This is often a result of infrastructure differences and complexities faced when trying to implement cloud solutions.

A perfect example to visualize this challenge would be assigning admin roles in Azure VMware solutions. The CloudAdmin role would typically create and manage workloads in your cloud, while in an Azure VMware Solution, the cloud admin role has privileges that conflict with those in VMware cloud solutions and on-premises environments.

It is important to note that in the Azure VMware solution, the cloud admin does not have access to the administrator user account. This revokes the permission roles to add identity sources like on-premises servers to vCenter, making infrastructure role management complex.

Conclusion

Legacy vs. Cloud Native Application Development: What’s Best?

While legacy application development has always been the standard baseline structure of how applications are developed and maintained, the surge in computing demands pushed for the disruption of platforms to handle this better.

More enterprises are now adopting the cloud native structure that focuses on infrastructure improvement to maximize its full potential. Cloud native at scale is a growing trend that strives to reshape the core structure of how applications should be developed.

Cloud native application development should be adopted over the legacy structure to embrace growing technology trends.

Are you struggling with building applications for the cloud?  Watch our 4-week On Demand Academy class, Accelerate Dev Workloads. You’ll learn how to develop cloud native applications easier and faster.

Introduction to Cloud Native Application Architecture    

Wednesday, 17 November, 2021
Today, it is crucial that an organization’s applications scale in step with its growth. If you want your client’s app to be robust and easy to scale, you have to make the right architectural decisions.

Cloud native applications have proven to be more efficient than their traditional counterparts and are much easier to scale thanks to containerization and running in the cloud.

In this blog, we’ll talk about what cloud native applications are and what benefits this architecture brings to real projects.

What is Cloud Native Application Architecture?

Cloud native is an approach to building and running apps that use the cloud. In layman’s terms, companies that use cloud native architecture are more likely to create new ideas, understand market trends and respond faster to their customers’ requests.

Cloud native applications are tied to the underlying infrastructure needed to support them. Today, this means deploying microservices through containers to dynamically provision resources according to user needs.

Each microservice can independently receive and transmit data through the service-level APIs. Although not required for an application to be considered “cloud native” due to modularity, portability, and granular resource management, microservices are a natural fit for running applications in the cloud.

Scheme of Cloud Native Application

Cloud native application architecture consists of frontend and backend. 

  • The client-side or frontend is the application interface available for the end-user. It has protocols and ports configured for user-database access and interaction. An example of this is a web browser. 
  • The server-side or backend refers to the cloud itself. It consists of resources providing cloud computing services. It includes everything you need, like data storage, security, and virtual machines.

All applications hosted on the backend cloud server are protected due to built-in engine security, traffic management, and protocols. These protocols are intermediaries, or middleware, for establishing successful communication with each other.

What Are the Core Design Principles of Cloud Native Architecture?

To create and use cloud native applications, organizations need to rethink the approach to the development system and implement the fundamental principles of cloud native.

DevOps

DevOps is a cultural framework and environment in which software is created, tested, and released faster, more frequently, and consistently. DevOps practices allow developers to shorten software development cycles without compromising on quality.

CI/CD

Continuous integration (CI) is the automation of code change integration when numerous contributions are made to the same project. CI is considered one of the main best practices of DevOps culture because it allows developers to merge code more frequently into the central repository, where they are subject to builds and tests.

Continuous delivery (CD) is the process of constantly releasing updates, often through automated delivery. Continuous delivery makes the software release process reliable, and organizations can quickly deliver individual updates, features, or entire products.

Microservices

Microservices are an architectural approach to developing an application as a collection of small services; each service implements a business opportunity, starts its process, and communicates through its own API.

Each microservice can be deployed, upgraded, scaled, and restarted independently of other services in the same application, usually as part of an automated system, allowing frequent updates to live applications without impacting customers.

Containerization

Containerization is a software virtualization technique conducted at the operating system level and ensures the minimum use of resources required for the application’s launch.

Using virtualization at the operating system level, a single OS instance is dynamically partitioned into one or more isolated containers, each with a unique writeable file system and resource quota.

The low overhead of creating and deleting containers and the high packing density in a single VM make containers an ideal computational tool for deploying individual microservices.
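As a small illustration of such per-container resource quotas in Kubernetes (the image name is hypothetical), requests and limits can be declared directly on the container:

apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0.0   # hypothetical image
    resources:
      requests:                # minimum resources the scheduler reserves for the container
        cpu: "250m"
        memory: "128Mi"
      limits:                  # hard ceiling enforced at the operating-system level
        cpu: "500m"
        memory: "256Mi"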

Benefits of Cloud Native Architecture

Cloud native applications are built and deployed quickly by small teams of experts on platforms that provide easy scalability and hardware decoupling. This approach provides organizations greater flexibility, resiliency, and portability in cloud environments.

Strong Competitive Advantage

Cloud-based development is a transition to a new competitive environment with many convenient tools, no capital investment, and the ability to manage resources in minutes. Companies that can quickly create and deliver software to meet customer needs are more successful in the software age.

Increased Resilience

Cloud native development allows you to focus on resilience tools. The rapidly evolving cloud landscape helps developers and architects design systems that remain interactive regardless of environment freezes.

Improved Flexibility

Cloud systems allow you to quickly and efficiently manage the resources required to develop applications. Implementing a hybrid or multi-cloud environment will enable developers to use different infrastructures to meet business needs.

Streamlined Automation and Transformation

The automation of IT management inside the enterprise is a springboard for the effective transformation of other departments and teams.

In addition, it eliminates the risk of disruption due to human error as employees focus on controlling routine tasks rather than performing them directly.

Automated real-time patches and updates across all stack levels eliminate downtime and the need for operational experts with “manual management” expertise.

Comparison: Cloud Native Architecture vs. Legacy Architecture

The capabilities of the cloud allow both traditional monolithic applications and data operations to be transferred to it. However, many enterprises prefer to invest in a cloud native architecture from the start. Here is why:

Separation of Computation and Data Storage Improves Scalability

Data center servers are usually connected to direct-attached storage (DAS), which an enterprise can use to store temporary files, images, documents or data for other purposes.

Relying on this model is dangerous because processing power needs can rise and fall in very different ways than storage needs. The cloud enables object storage such as AWS S3 or ADLS, which can be purchased, optimized, and managed separately from computing requirements.

This way, you can easily add thousands of new users or expand the app’s functionality.

Cloud Object Storage Gives Better Adaptability

Cloud providers are under competitive pressure to improve and innovate in their storage services. Application architects who monitor closely and quickly adapt to these innovations will have an edge over competitors who have taken a wait-and-see attitude.

Alongside proprietary solutions, there are also many open source, cloud computing software projects like Rancher.

This container management platform provides users with a complete software stack that facilitates Kubernetes cluster management in a private or public cloud.

Cloud Native Architecture is More Reliable

The obvious advantage for companies that have adopted a cloud native approach is the focus on agility, automation, and simplification.

For complex IT or business functions, survival depends on how well their services are designed and elaborated. At the same time, you need error protection that improves user productivity through increased levels of automation, built-in predictive intelligence, or machine learning to help keep your environment running optimally.

Cloud Native Architecture Makes Inter-Cloud Migration Easy

Every cloud provider has its cloud services (e.g., data warehousing, ETL, messaging) and provides a rich set of ready-made open source tools such as Spark, Kafka, MySQL, and many others.

While it sounds bold to say that using open source solutions makes it easy to move from one cloud to another, if cloud providers offer migration options, you won’t have to rewrite a significant part of the existing functionality.

Moreover, many IT architects see the future in the multi-cloud model, as many companies already deal with two or more cloud providers.

If your organization can skillfully use cloud services from different vendors, then the ability to determine the advantage of one cloud over another is good groundwork for the future justification of your decision.

Conclusion

Cloud native application architecture provides many benefits. This approach automates and integrates the concepts of continuous delivery, microservices, and containers for enhanced quality and streamlined delivery.

Applications that are built as cloud native can offer virtually unlimited computing power on demand. That’s why more and more developers today are choosing to build and run their applications as cloud native.

Want to make sure you don’t miss any of the action? Join the SUSE & Rancher Community to get updates on new content coming your way!

Refactoring Isn’t the Same for All    

Tuesday, 9 November, 2021

Cloud Native: it’s been an industry buzzword for a few years now. It holds different meanings for different people, and even then a different context. While we have overused this word, it does have a place when it comes to modernizing applications.

To set the context here, we are talking about apps you would build in the cloud rather than for it. This means these apps, if modernized, would run in a cloud platform. In this post, we will discuss how “refactoring,” as Gartner puts it, isn’t the same for every app.

When we look at legacy applications sitting in data centers across the globe, some are traditional mainframes; others are “Commercial off-the-Shelf” software (CotS). We care about the business-critical apps we can leverage for the cloud. Some of these are CotS, and many of these applications are custom.

When it comes to the CotS, companies should rely on the vendor to modernize their CotS to a cloud platform. This is the vendor’s role, and there is little business value in a company doing it for them.

Gartner came up with the five R’s: Rehost, Refactor, Revise, Rebuild and Replace. But when we look at refactoring, it shouldn’t be the same for every app because not all apps are the same. Some are mission-critical; most of your company’s revenue is made with those apps. Some apps are used once a month to make accounting’s life easier. Both might need to be refactored, but not to the same level. When you refactor, you change the structure, architecture, and business logic. All to leverage core concepts and features of a cloud. This is why we break down refactoring into Scale of Cloud Native.

Custom apps are perfect candidates for modernization. With every custom app, modernization brings risks and rewards. Most systems depend on other technologies like libraries, subsystems, and even frameworks. Some of these dependencies are easy to modernize into a cloud platform, but not all are like this. Some pose considerable challenges that limit how much you can modernize.

If we look at what makes an app cloud native, we first have to acknowledge that this term means something different depending on who you ask; however, most of these concepts are at least somewhat universal. Some of these concepts are:

  • Configuration
  • Disposability
  • Isolation
  • Scalability
  • Logs

Outside of technical limitations, there’s the question of how much an application should be modernized. Do you go all in and rewrite an app to be fully cloud native? Or do you do the bare minimum to get the app to run in the cloud?

We delineate these levels of cloud native as Suitable, Compatible, Durable, and Native. These concepts build upon one another so that an app can be Compatible and, with some refactoring, can go to Durable.

What does all this actually mean? Well, let’s break them down based on a scale:

  • Suitable – First on the scale and the bare minimum you need to get your app running in your cloud platform. This could just be the containerization of the application, or that and a little more.
  • Compatible – Leveraging a few of the core concepts of the cloud. An app that is cloud-compatible leverages things like environmental configs and disposability. This is a step further than Suitable.
  • Durable – At this point, apps should be able to handle a failure in the system and not let it cascade, meaning the app can handle it when some underlying services are unavailable. Being Durable also means the app can start up fast and shut down gracefully. These apps are well beyond Suitable and Compatible.
  • Native – These apps leverage most, if not all, of the cloud native core concepts. Generally, this is done with brand-new apps being written in the cloud. It might not make sense to modernize an existing app to this level.

This scale isn’t absolute; as such, different organizations may use different scales. A scale is important to ensure you are not over or under-modernizing an app.

When starting any modernization effort, collectively set the scale. This should be done organizationally rather than team-by-team. When it comes to budget and timing, making sure that all teams use the same scale is critical.

Learn more about this in our Webinar, App Modernization: When and How Far to Modernize. Watch the replay; register here. 

Want to make sure you don’t miss any of the action? Join the SUSE & Rancher Community to get updates on new content coming your way!

Stupid Simple Service Mesh: What, When, Why Part 1

Thursday, 26 August, 2021

Recently microservices-based applications became very popular, and with the rise of microservices, the concept of Service Mesh also became a very hot topic. Unfortunately, there are only a few articles about this concept and most of them are hard to digest.

In this blog, we will try to demystify the concept of Service Mesh using “Stupid Simple” explanations, diagrams, and examples to make this concept more transparent and accessible for everyone. In the first article, we will discuss the basic building blocks of a Service Mesh and implement a sample application to have a practical example of each theoretical concept. In the next articles, based on this sample app, we will touch on more advanced topics, like Service Mesh in Kubernetes, and we will talk about some more advanced Service Mesh implementations like Istio, Linkerd, etc.

To understand the concept of Service Mesh, the first step is to understand what problems it solves and how it solves them.

Software architecture has evolved a lot in a short time, from classical monolithic architecture to microservices. Although many praise microservice architecture as the holy grail of software development, it introduces some serious challenges.

Overview of the sample application

For one, a microservices-based architecture means that we have a distributed system. Every distributed system has challenges such as transparency, security, scalability, troubleshooting, and identifying the root cause of issues. In a monolithic system, we can find the root cause of a failure by tracing. But in a microservices-based system, each service can be written in a different language, so tracing is no trivial task. Another challenge is service-to-service communication: instead of focusing on business logic, developers must take care of service discovery, handle connection errors, detect latency, and implement retry logic. Applying SOLID principles at the architecture level means that these kinds of network problems should be abstracted away and not mixed with the business logic. This is why we need a Service Mesh.

Ingress Controller vs. API Gateway vs. Service Mesh

As I mentioned above, we need to apply SOLID principles on an architectural level. For this, it is important to set the boundaries between Ingress Controller, API Gateway, and Service Mesh and understand each one’s role and responsibility.

On a stupidly oversimplified level, these are the responsibilities of each concept:

  1. Ingress Controller: exposes a single IP/port through which all services in the cluster can be reached, so its main responsibilities are path mapping, routing and simple load balancing, like a reverse proxy
  2. API Gateway: aggregates and abstracts away APIs; other responsibilities are rate limiting, authentication, security, tracing, etc. In a microservices-based application, you need a way to distribute the requests to different services, gather the responses from multiple/all microservices, and then prepare the final response to be sent to the caller. This is what an API Gateway is meant to do. It is responsible for client-to-service communication (north-south traffic).
  3. Service Mesh: responsible for service-to-service communication (east-west traffic). We’ll dig more into the concept of Service Mesh in the next section.

Service Mesh and API Gateway have overlapping functionalities, such as rate limiting, security, service discovery, tracing, etc. but they work on different levels and solve different problems. Service Mesh is responsible for the flow of requests between services. API Gateway is responsible for the flow of requests between the client and the services, aggregating multiple services and creating and sending the final response to the client.

The main responsibility of an API gateway is to accept traffic from outside your network and distribute it internally, while the main responsibility of a service mesh is to route and manage traffic within your network. They are complementary concepts, and a well-defined microservices-based system should combine them to ensure application uptime and resiliency while ensuring that your applications are easily consumable.
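To make the Ingress Controller’s path-mapping role a bit more tangible, here is a minimal Kubernetes Ingress sketch. The host and Service names are placeholders, not part of the sample application we build later:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: example.com              # placeholder hostname
      http:
        paths:
          - path: /api               # path mapping: requests to /api ...
            pathType: Prefix
            backend:
              service:
                name: api-service    # ... are forwarded to this (placeholder) Service
                port:
                  number: 80

An API Gateway or a Service Mesh picks up where this leaves off: the Ingress only gets outside traffic into the cluster.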

What Does a Service Mesh Solve?

As an oversimplified and stupid simple definition, a Service Mesh is an abstraction layer that hides away and separates networking-related logic from business logic so that developers can focus only on implementing business logic. We implement this abstraction using a proxy that sits in front of the service and takes care of all the network-related problems. This allows the service to focus on what is really important: the business logic. In a microservices-based architecture, we have multiple services, each with its own proxy. Together, these proxies form the Service Mesh.

As best practices suggest, the proxy and the service should be in separate containers, so each container has a single responsibility. In the world of Kubernetes, the proxy container is implemented as a sidecar. This means that each service has a sidecar containing the proxy, and a single Pod contains two containers: the service and the sidecar. Another implementation is to use one proxy for multiple pods; in this case, the proxy can be implemented as a DaemonSet. The most common solution is using sidecars. Personally, I prefer sidecars over DaemonSets because they keep the logic of the proxy as simple as possible.
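As a rough sketch of the sidecar pattern (not the exact manifest a mesh like Istio would generate, and the image names and tags are illustrative placeholders), a Pod with a service container and an Envoy proxy container side by side could look like this:

apiVersion: v1
kind: Pod
metadata:
  name: service-with-sidecar
spec:
  containers:
    - name: app                        # the business-logic container
      image: example/app:1.0           # placeholder image
      ports:
        - containerPort: 8080
    - name: envoy-sidecar              # the proxy container, sharing the Pod's network namespace
      image: envoyproxy/envoy:v1.22.0  # illustrative version tag
      ports:
        - containerPort: 9901          # Envoy admin port
      volumeMounts:
        - name: envoy-config
          mountPath: /etc/envoy        # envoy.yaml mounted from a ConfigMap
  volumes:
    - name: envoy-config
      configMap:
        name: envoy-sidecar-config     # hypothetical ConfigMap holding the Envoy configuration

Because both containers share the Pod’s network, the proxy can accept traffic on behalf of the service and forward it over localhost.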

Multiple Service Mesh solutions exist, including Istio, Linkerd, Consul, Kong, and Cilium. (We will talk about these solutions in a later article.) Let’s focus on the basics and understand the concept of Service Mesh, starting with Envoy. This is a high-performance proxy, not a complete Service Mesh framework or solution (in this tutorial, we will build our own Service Mesh solution). Some of the Service Mesh solutions use Envoy in the background (like Istio), so before starting with these higher-level solutions, it’s a good idea to understand how things work at the lower level.

Understanding Envoy

Ingress and Egress

Simple definitions:

  • Any traffic sent to the server (service) is called ingress.
  • Any traffic sent from the server (service) is called egress.

The ingress and egress rules should be added to the configuration of the Envoy proxy, so the sidecar will take care of them. This means that any traffic to the service will first go to the Envoy sidecar, and the Envoy proxy then redirects the traffic to the real service. Vice versa, any traffic from this service goes to the Envoy proxy first, and Envoy resolves the destination service using service discovery. By intercepting the inbound and outbound traffic, Envoy can implement service discovery, circuit breaking, rate limiting, and so on.

The Structure of an Envoy Proxy Configuration File

Every Envoy configuration file has the following components:

  1. Listeners: where we configure the IP address and port that the Envoy proxy listens on
  2. Routes: the received request is routed to a cluster based on rules. For example, we can have path-matching rules and prefix-rewrite rules to select the service that should handle a request for a specific path/subdomain. The route is actually just another type of filter, and it is mandatory; otherwise, the proxy wouldn’t know where to route our request.
  3. Filters: filters can be chained and are used to enforce different rules, such as rate limiting, route mutation, manipulation of the requests, etc.
  4. Clusters: a cluster acts as a manager for a group of logically similar services (it has a similar responsibility to a Service in Kubernetes; it defines the way a service can be accessed) and acts as a load balancer between those services.
  5. Service/Host: the concrete service that handles and responds to the request

Here is an example of an Envoy configuration file:

---
admin:
  access_log_path: "/tmp/admin_access.log"
  address:
    socket_address:
      address: "127.0.0.1"
      port_value: 9901
static_resources:
  listeners:
    - name: "http_listener"
      address:
        socket_address:
          address: "0.0.0.0"
          port_value: 80
      filter_chains:
        - filters:
            - name: "envoy.http_connection_manager"
              config:
                stat_prefix: "ingress"
                codec_type: "AUTO"
                generate_request_id: true
                route_config:
                  name: "local_route"
                  virtual_hosts:
                    - name: "http-route"
                      domains:
                        - "*"
                      routes:
                        - match:
                            prefix: "/nestjs"
                          route:
                            prefix_rewrite: "/"
                            cluster: "nestjs"
                        - match:
                            prefix: "/nodejs"
                          route:
                            prefix_rewrite: "/"
                            cluster: "nodejs"
                        - match:
                            path: "/"
                          route:
                            cluster: "base"
                http_filters:
                  - name: "envoy.router"
                    config: {}
  clusters:
    - name: "base"
      connect_timeout: "0.25s"
      type: "strict_dns"
      lb_policy: "ROUND_ROBIN"
      hosts:
        - socket_address:
            address: "service_1_envoy"
            port_value: 8786
        - socket_address:
            address: "service_2_envoy"
            port_value: 8789
    - name: "nodejs"
      connect_timeout: "0.25s"
      type: "strict_dns"
      lb_policy: "ROUND_ROBIN"
      hosts:
        - socket_address:
            address: "service_4_envoy"
            port_value: 8792
    - name: "nestjs"
      connect_timeout: "0.25s"
      type: "strict_dns"
      lb_policy: "ROUND_ROBIN"
      hosts:
        - socket_address:
            address: "service_5_envoy"
            port_value: 8793

The configuration file above translates into the following diagram:

This diagram doesn’t include the configuration files for all the services, but it is enough to understand the basics. You can find this code in my Stupid Simple Service Mesh repository.

As you can see, in the listeners section we define the Listener for our Envoy proxy. Because we are working in Docker, the host is 0.0.0.0.

After configuring the listener, we define the Filters. For simplicity, we used only the basic filters to match the routes and rewrite the target routes. In this case, if the request path starts with “/nodejs,” the router will choose the nodejs cluster and the URL will be rewritten to “host:port/” (this way, the request to the concrete service won’t contain the /nodejs part). The logic is the same for “/nestjs.” If there is no matching prefix and the path is simply “/,” the request will be routed to the cluster called base without a prefix rewrite.

In the clusters section, we define the clusters. The base cluster has two services, and the chosen load-balancing strategy is round-robin. Other available strategies can be found in the Envoy documentation. The other two clusters (nodejs and nestjs) are simpler, with only a single service each.
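Envoy can also handle retries and circuit breaking for us. As a hedged sketch (written in the same older configuration style as the example above; exact field names can differ between Envoy versions), a route could get a retry policy and a cluster a circuit breaker roughly like this:

# Added to a route entry (sketch):
route:
  cluster: "base"
  retry_policy:
    retry_on: "5xx"            # retry when the upstream responds with a 5xx
    num_retries: 3

# Added to a cluster definition (sketch):
circuit_breakers:
  thresholds:
    - max_connections: 100     # stop opening new connections past this point
      max_pending_requests: 50
      max_retries: 3

We won’t use these in this tutorial, but they are exactly the kind of networking concern the proxy can take off the service’s shoulders.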

The complete code for this tutorial can be found in my Stupid Simple Service Mesh git repository.

Conclusion

In this article, we learned about the basic concepts of Service Mesh. In the first part, we understood the responsibilities and differences between the Ingress Controller, API Gateway, and Service Mesh. Then we talked about what Service Mesh is and what problems it solves. In the second part, we introduced Envoy, a performant and popular proxy used to build our Service Mesh example. We learned about the different parts of the Envoy configuration files and created a Service Mesh with five example services and a front-facing edge proxy.

In the next article, we will look at how to use Service Mesh with Kubernetes and will create an example project that can be used as a starting point in any project using microservices.

There is another ongoing “Stupid Simple AI” series. The first two articles can be found here: SVM, Kernel SVM, and KNN in Python.

Thank you for reading this article!

Stupid Simple Open Source

Thursday, 26 August, 2021

Even if we don’t realize it, almost all of us have used open source software. When we buy a new Android phone, we read its specs and usually focus on the hardware capabilities, like the CPU, RAM, camera, etc. But the brain of the device is its operating system, which is open source software. The Android operating system powers more than 70 percent of mobile phones, demonstrating the reach of open source software.

Before the free software movement, the first personal computers were hard to maintain and expensive; this wasn’t because of the hardware but because of the software. You could be the best programmer in the world, but without collaboration and knowledge sharing, your software will likely have issues: bugs, usability problems, design problems, performance issues, etc. What’s more, maintaining such products costs time and money. Before the appearance of open source software, big companies believed they had to protect their intellectual property, so they kept their source code secret. They did not realize that letting people inspect their source code and fix bugs would improve their software. Collaboration leads to great success.

What is Open Source Software?

Simply put, open source software has public source code, which can be seen, inspected, modified, improved or even sold by anyone. In contrast, non-open source, proprietary software has code that can be seen, modified and maintained only by a limited number of people: a person, a team or an organization.

In both cases, the user must accept the licensing agreements. To use proprietary software, users must promise (typically by signing a license displayed the first time they run it) that they will not do anything with the software that its developers/owners have not explicitly authorized. Examples of proprietary software are the Windows operating system and Microsoft Office.

Users must accept the terms of a license when using open source software, just as they do when using proprietary software, but these terms are very different. Basically, you can do whatever you want as long as you include the original copyright and license notice in any copy of the software/source. Furthermore, these licenses usually state that the original creator cannot be liable for any harm or damage that the open source code may cause. This protects the creator of the open source code. Good examples of open source software are the Linux operating system, the Android operating system, LibreOffice and Kubernetes.

The Beginning of Open Source

Initially, software was developed by companies in-house. The creators controlled this software, with no right for the user to modify it, fix it or even inspect it. This also made collaboration between programmers difficult as knowledge sharing was near impossible.

In 1971, Richard Stallman joined the MIT Artificial Intelligence Lab. He noticed that most MIT developers were joining private corporations, which were not sharing knowledge with the outside world. He realized that this privacy and lack of collaboration would create a bigger gap between users and technical developers. According to Stallman, “software is meant to be free but in terms of accessibility and not price.” To fight against privatization, Stallman developed the GNU Project and then founded the Free Software Foundation (FSF). Many developers started using GNU in response to these initiatives, and many even fixed bugs they detected.

Stallman’s initiative was a success. Because he pushed against privatized software, more open source projects followed. The next big steps in open source software were the releases of Mozilla and the Linux operating system. Companies had begun to realize that open source might be the next big thing.

The Rise of Open Source

After the GNU, Mozilla and Linux open source projects, more developers started to follow the open source movement. As the next big step in the history of open source, David Heinemeier Hansson introduced Ruby on Rails. This web application framework soon became one of the world’s most prominent web development tools; popular platforms like Twitter would go on to use Ruby on Rails to build their sites. When Sun Microsystems bought MySQL AB for 1 billion dollars in 2008, it showed that open source could also be a real business, not just a beautiful idea.

Nowadays, big companies like IBM, Microsoft and Google embrace open source. So why do these big companies give away their carefully guarded source code? They realized the power of collaboration and knowledge sharing. They hoped that outside developers would improve the software as they adapted it to their needs. They also realized that it is impossible to hire all the great developers of the world, and that many developers out there could contribute positively to their products. It worked: hundreds of outside contributors collaborated on TensorFlow, one of Google’s most successful AI tools. Another success story is Microsoft’s open source .NET Core.

Why Would I Work on Open Source Projects?

Just think about it: how many times have open source solutions (libraries, frameworks, etc.) helped you in your daily job? How often did you finish your tasks earlier because you’d found a great open source, free tool that worked for you?

The most important reason to participate in the open source community is to help others and to give something back to the community. Open source has helped us a lot, shaping our world unprecedentedly. We may not realize it, but many of the products we are using currently result from open source.

In the modern world, collaboration and knowledge sharing are a must. Nowadays, inventions are rarely created by a single individual; increasingly, they are made through collaboration with people from all around the world. Without the free and open source software movement, our world would be completely different: we’d live with isolated knowledge and isolated people, lots of small bubble worlds instead of one big, collaborative and helpful community (think about what you would do without Stack Overflow).

Another reason to participate is to gain real-world experience and technical upskilling. In the open source community, you can find all kinds of challenges that aren’t present in a single company or project. You can also earn recognition through problem-solving and helping developers with similar issues.

Finding Open Source Projects

If you would like to start contributing to the open source community, here are some places where you can find great projects:

CodeTriage: a website where you can find popular open source projects based on your programming language preferences. You’ll see popular open source projects like K8s, TensorFlow, Pandas, Scikit-Learn, Elasticsearch, etc.

awesome-for-beginners: a collection of Git repositories with beginner-friendly projects.

Open Source Friday: a movement to encourage people, companies and maintainers to contribute a few hours to open source software every Friday.

For more information about how to start contributing to open source projects, visit the newbie open source Git repository.

Conclusion

In the first part of this article, we briefly introduced open source. We described the main differences between open source and proprietary software and presented a brief history of the open source and free software movement.

In the second part, we presented the benefits of working on open source projects. In the last part, we gave instructions on how to start contributing to the open source community and how to find relevant projects.


Harvester: Intro and Setup    

Tuesday, 17 August, 2021

I mentioned about a month back that I was using Harvester in my home lab. I didn’t go into much detail then, so this post will add more depth. We will cover what Harvester does, as well as my hardware, installation, setup and how to deploy your first virtual machine. Let’s get started.

What is Harvester?

Harvester is Rancher’s open source answer to a hyperconverged infrastructure platform. Like most things Rancher is involved with, it is built on Kubernetes using tools like KubeVirt and Longhorn. KubeVirt is an exciting project that leverages KVM and libvirt to run virtual machines inside Kubernetes; this allows you to run both containers and VMs in your cluster. It reduces operational overhead and provides consistency. This combination of tried and tested technologies provides an open source solution in this space.
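To give a feel for what KubeVirt looks like under the hood (Harvester manages this for you, so treat the manifest below as an illustrative sketch rather than something you need to apply yourself), a minimal KubeVirt VirtualMachine could look roughly like this:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm                  # hypothetical VM name
spec:
  running: true                  # start the VM as soon as it is created
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo   # small demo image used in KubeVirt examples

Harvester builds a UI plus image and volume management on top of resources like this, so you rarely touch the YAML directly.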

It is also designed to be used with bare metal, making it an excellent option for a home lab.

Hardware

If you check the hardware requirements, you will notice they focus more on business usage. So far, my personal experience says that you want at least a 4-core/8-thread CPU, 16GB of RAM, and a large SSD, preferably an NVMe drive. Anything less doesn’t leave enough capacity for running many containers or VMs. I will install it on an Intel NUC 8i5BEK, which has an Intel Core i5-8259U, 32GB of RAM and a 512GB NVMe drive. It can handle running Harvester without any issues. Of course, this is just my experience; yours may differ.

Installation

Harvester ships as an ISO, which you can download on the GitHub Releases page. You can pull it quickly using wget.

$ wget https://releases.rancher.com/harvester/v0.2.0/harvester-amd64.iso

Once you have it downloaded, you will need to create a bootable USB. I typically use Balena Etcher since it is cross-platform and intuitive. Once you have a bootable USB, place it in the machine you want to use and boot the drive. This screen should greet you:

Select “New Cluster”:

Select the drive you want to use.

Enter your hostname, select your network interface, and make sure you use automatic DHCP.

You will then be prompted to enter your cluster token. This can be any phrase you want; I recommend using your password manager to generate one.

Set a password to use, and remember that the default user name is rancher.

The next several options are useful, especially if you want to reuse the SSH keys you have in GitHub. Since this is a home lab, I left the SSH keys, proxy and cloud-init setup blank; in an enterprise environment, these would be really useful. Now you will see the final screen before installation. Verify that everything is configured as you want before proceeding.

If it all looks great, proceed with the installation. It will take a few minutes to complete; when it does, you will need to reboot.

After the reboot, the system will start up, and you will see a screen showing the URL for Harvester and the system’s status. Wait until it reports that Harvester is ready before trying to connect.

Great! It is now reporting that it is up and running, so it’s now time to set up Harvester.

Initial Setup

We can navigate to the URL listed once the OS boots. Mine is https://harvest:30443. It uses a self-signed certificate by default, so you will see a warning in your browser. Just click on “advanced” to proceed, and accept it. Set a password for the default admin account.

Now you should see the dashboard and the health of the system.

I like to disable the default account and add my own account for authentication. Probably not necessary for a home lab, but a good habit to get into. First, you need to navigate to it.

Now log out and back in with your new account. Once that’s finished, we can create our first VM.

Deploying Your First VM

Harvester has native support for qcow2 images and can import them from a URL. Let’s grab the URL for the openSUSE Leap 15.3 JeOS image:

https://download.opensuse.org/distribution/leap/15.3/appliances/openSUSE-Leap-15.3-JeOS.x86_64-kvm-and-xen.qcow2

The JeOS image for openSUSE is roughly 225MB, which is a perfect size for downloading and creating VMs quickly. Let’s make the image in Harvester.

Create a new image, and add the URL above as the image URL.

You should now see it listed.

Now we can create a VM using that image. Navigate to the VM screen.

Once we’ve made our way to the VM screen, we’ll create a new VM.

When that is complete, the VM will show up in the list. Wait until it has been started, then you can start using it.

Wrapping Up

In this article, I wanted to show you how to set up VMs with Harvester, even starting from scratch! There are plenty of features to explore and plenty more on the roadmap. This project is still early in its life, so now is a great time to jump in and get involved with its direction.