Is Cloud Native Development Worth It?    

Thursday, 18 November, 2021
The 'digital transformation' revolution across industries enables businesses to develop and deploy applications faster and simplify the management of such applications in a cloud environment. These applications are designed to embrace new technological changes with flexibility.

The idea behind cloud native app development is to design applications that leverage the power of the cloud, take advantage of its ability to scale, and quickly recover in the event of infrastructure failure. Developers and architects are increasingly using a set of tools and design principles to support the development of modern applications that run on public, private, and hybrid cloud environments.

Cloud native applications are developed based on microservices architecture. At the core of the application’s architecture, small software modules, often known as microservices, are designed to execute different functions independently. This enables developers to make changes to a single microservice without affecting the entire application. Ultimately, this leads to a more flexible and faster application delivery adaptable to the cloud architecture.

Frequent changes and updates made to the infrastructure are possible thanks to containerization, virtualization, and several other aspects constituting the entire application development being cloud native. But the real question is, is cloud native application development worth it? Are there actual benefits achieved when enterprises adopt cloud native development strategies over the legacy technology infrastructure approach? In this article, we’ll dive deeper to compare the two.

Should You Adopt a Cloud Native over a Legacy Application Development Approach?

Cloud computing is becoming more popular among enterprises offering their technology solutions online. More tech-savvy enterprises are deploying game-changing technology solutions, and cloud native applications are helping them stay ahead of the competition. Here are some of the major feature comparisons of the two.

Speed

While customers operate in a fast-paced, innovative environment, frequent changes and improvements to the infrastructure are necessary to keep up with their expectations. To keep up with these developments, enterprises must have the proper structure and policies to conveniently improve or bring new products to market without compromising security and quality.

Applications built to embrace cloud native technology enjoy the speed at which their improvements are implemented in the production environment, thanks to the following features.

Microservices

Cloud native applications are built on a microservices architecture. The application is broken down into a series of independent modules or services, with each service using its own appropriate technology stack and data. Communication between modules is typically done over APIs and message brokers.

With microservices, teams can frequently update the code to add new features and functionality without interfering with the rest of the application. The isolated nature of microservices also makes it easier for new developers on the team to comprehend the code base and contribute faster. This approach increases the speed and flexibility with which improvements are made to the infrastructure. In comparison, an infrastructure built on a monolithic architecture sees new features and enhancements pushed to production far more slowly. Monolithic applications are complex and tightly coupled, meaning even slight code changes must be harmonized to avoid failures. As a result, this slows down the deployment process.

CI/CD Automation Concepts

The speed at which applications are developed, deployed, and managed has largely been attributed to adopting continuous integration and continuous delivery (CI/CD).

New code changes move through an automated checklist of build and test stages in a CI/CD pipeline, verifying that application standards are met before the changes are pushed to a production environment.

When implemented on cloud native applications architecture, CI/CD streamlines the entire development and deployment phases, shortening the time in which the new features are delivered to production.

Implementing CI/CD greatly improves productivity in organizations, to everyone's benefit. Automated CI/CD pipelines make deployments predictable, freeing developers from repetitive tasks to focus on higher-value work.
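To make this concrete, here is a minimal sketch of what such a pipeline could look like, written as a hypothetical GitHub Actions workflow. The job layout, the "make test" step, and the registry and image names are placeholders for illustration, not a prescribed setup, and registry authentication is omitted.

name: ci-cd
on:
  push:
    branches: [main]

jobs:
  build-test-deliver:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Automated quality gate: run the test suite before anything ships
      - name: Run unit tests
        run: make test
      # Build an immutable, versioned container image (placeholder registry and name)
      - name: Build container image
        run: docker build -t registry.example.com/orders-service:${{ github.sha }} .
      # Publish the image; production deploys this exact artifact (login step omitted)
      - name: Push image to registry
        run: docker push registry.example.com/orders-service:${{ github.sha }}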

On-Demand Infrastructure Scaling

Enterprises should opt for cloud native architecture over traditional application development approaches to easily provision computing resources to their infrastructure on demand.

Rather than having IT support applications based on estimates of what infrastructure resources are needed, the cloud native approach promotes automated provisioning of computing resources on demand.

This approach helps applications run smoothly by continuously monitoring the health of your infrastructure for workloads that would otherwise fail.

The cloud native development approach is based on orchestration technology that provides developers insights and control to scale the infrastructure to the organization’s liking. Let’s look at how the following features help achieve infrastructure scaling.

Containerization

Cloud native applications are built based on container technology where microservices, operating system libraries, and dependencies are bundled together to create single lightweight executables called container images.

These container images are stored in an online registry catalog for easy access by the runtime environment and developers making updates on them.

Microservices deployed as containers should be able to scale in and out, depending on the load spikes.

Containerization promotes portability by ensuring the executable packaging is uniform and runs consistently across the developer’s local and deployment environments.

Orchestration

Let’s talk orchestration in cloud native application development. Orchestration automates deploying, managing, and scaling microservice-based applications in containers.

Container orchestration tools work from user-created specifications (YAML or JSON files) that describe the desired state of your application. Once your application is deployed, the orchestration tool uses the defined specifications to manage the containers throughout their lifecycle.
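For example, with Kubernetes as the orchestration tool, the desired state might be declared roughly as follows. The service name, image, and replica count are illustrative assumptions.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments                # illustrative microservice name
spec:
  replicas: 3                   # desired state: three identical containers
  selector:
    matchLabels:
      app: payments
  template:
    metadata:
      labels:
        app: payments
    spec:
      containers:
        - name: payments
          image: registry.example.com/payments:1.4.2   # placeholder image
          ports:
            - containerPort: 8080

The orchestrator then continuously reconciles what is actually running against this specification, recreating or rescheduling containers whenever reality drifts from the declared state.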

Auto-Scaling

Automating cloud native workflows ensures that the infrastructure provisions itself automatically when it needs more resources. Health checks and auto-healing features are built into the infrastructure during development to ensure that it runs smoothly without manual intervention.

You are less likely to encounter service downtime because of this. Your infrastructure is set to detect increases in workload that would otherwise result in failure and automatically scales out to absorb them.
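As one concrete illustration of this behavior, Kubernetes expresses such a policy as a HorizontalPodAutoscaler. The target name and the thresholds below are assumptions made for the sake of the example.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: payments-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: payments              # the workload to scale (placeholder name)
  minReplicas: 2                # keep a baseline of two replicas
  maxReplicas: 10               # cap how far the workload can scale out
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU exceeds 70%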

Optimized Cost of Operation

Developing cloud native applications eliminates the need for hardware data centers that would otherwise sit idle at any given point. The cloud native architecture enables a pay-per-use service model where organizations only pay for the services they need to support their infrastructure.

Opting for a cloud native approach over a traditional legacy system optimizes the cost incurred that would otherwise go toward maintenance. These costs appear in areas such as scheduled security improvements, database maintenance, and managing frequent downtimes. This usually becomes a burden for the IT department and can be partially solved by migrating to the cloud.

Applications developed to leverage the cloud result in optimized costs allocated to infrastructure management while maximizing efficiency.

Ease of Management

Cloud native service providers have built-in features to manage and monitor your infrastructure effortlessly. Good examples are serverless platforms like AWS Lambda and Azure Functions. These platforms help developers manage their workflows by providing an execution environment and managing the infrastructure's dependencies.

This removes uncertainty around dependency versions and the configuration settings required to run the infrastructure. Developing applications that run on legacy systems requires developers to update and maintain the dependencies manually. Eventually, this becomes a complicated practice with no consistency. Instead, the cloud native approach makes collaborating easier without having the "This application works on my system but fails on another machine" discussion.

Also, since the application is divided into smaller, manageable microservices, developers can easily focus on specific units without worrying about interactions between them.

Challenges

Unfortunately, there are challenges to ramping up users to adopt the new technology, especially for enterprises with long-standing legacy applications. This is often a result of infrastructure differences and complexities faced when trying to implement cloud solutions.

A perfect example to visualize this challenge is assigning admin roles in Azure VMware Solution. A CloudAdmin role would typically create and manage workloads in your cloud, but in Azure VMware Solution, the CloudAdmin role has privileges that differ from those in other VMware cloud solutions and on-premises environments.

It is important to note that in Azure VMware Solution, the CloudAdmin role does not have access to the administrator user account, so it lacks the permissions to add identity sources like on-premises servers to vCenter, making infrastructure role management more complex.

Conclusion

Legacy vs. Cloud Native Application Development: What’s Best?

While legacy application development has long been the baseline for how applications are developed and maintained, the surge in computing demands has pushed platforms to evolve to handle it better.

More enterprises are now adopting the cloud native structure that focuses on infrastructure improvement to maximize its full potential. Cloud native at scale is a growing trend that strives to reshape the core structure of how applications should be developed.

Cloud native application development should be adopted over the legacy structure to embrace growing technology trends.

Are you struggling with building applications for the cloud? Watch our 4-week On Demand Academy class, Accelerate Dev Workloads. You'll learn how to develop cloud native applications more easily and quickly.

Introduction to Cloud Native Application Architecture    

Wednesday, 17 November, 2021
Today, it is crucial that an organization's applications scale in step with its growth. If you want your client's app to be robust and easy to scale, you have to make the right architectural decisions.

Cloud native applications are proven more efficient than their traditional counterparts and much easier to scale due to containerization and running in the cloud.

In this blog, we’ll talk about what cloud native applications are and what benefits this architecture brings to real projects.

What is Cloud Native Application Architecture?

Cloud native is an approach to building and running apps that use the cloud. In layman’s terms, companies that use cloud native architecture are more likely to create new ideas, understand market trends and respond faster to their customers’ requests.

Cloud native applications are designed to take full advantage of the underlying cloud infrastructure that supports them. Today, this means deploying microservices through containers so that resources can be provisioned dynamically according to user needs.

Each microservice can independently receive and transmit data through service-level APIs. Although microservices are not strictly required for an application to be considered "cloud native," their modularity, portability, and granular resource management make them a natural fit for running applications in the cloud.

Scheme of Cloud Native Application

Cloud native application architecture consists of a frontend and a backend.

  • The client side, or frontend, is the application interface available to the end user. It uses configured protocols and ports to access and interact with the application and its data. An example of this is a web browser. 
  • The server-side or backend refers to the cloud itself. It consists of resources providing cloud computing services. It includes everything you need, like data storage, security, and virtual machines.

All applications hosted on the backend cloud server are protected by built-in security, traffic management, and communication protocols. These protocols act as intermediaries, or middleware, that allow the components to communicate with each other successfully.

What Are the Core Design Principles of Cloud Native Architecture?

To create and use cloud native applications, organizations need to rethink the approach to the development system and implement the fundamental principles of cloud native.

DevOps

DevOps is a cultural framework and environment in which software is created, tested, and released faster, more frequently, and consistently. DevOps practices allow developers to shorten software development cycles without compromising on quality.

CI/CD

Continuous integration (CI) is the automation of code change integration when numerous contributions are made to the same project. CI is considered one of the main best practices of DevOps culture because it allows developers to merge code more frequently into the central repository, where they are subject to builds and tests.

Continuous delivery (CD) is the process of constantly releasing updates, often through automated delivery. Continuous delivery makes the software release process reliable, and organizations can quickly deliver individual updates, features, or entire products.

Microservices

Microservices are an architectural approach to developing an application as a collection of small services; each service implements a business capability, runs in its own process, and communicates through its own API.

Each microservice can be deployed, upgraded, scaled, and restarted independently of other services in the same application, usually as part of an automated system, allowing frequent updates to live applications without impacting customers.
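As a small sketch of what this looks like in practice, a Kubernetes Deployment can roll out a new version of one microservice gradually, so customers never hit a moment with no healthy replicas. The names, image tag, and replica counts here are illustrative assumptions.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog                 # one microservice, updated independently of the others
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0         # never remove a serving replica before its replacement is ready
      maxSurge: 1               # bring up one new-version replica at a time
  selector:
    matchLabels:
      app: catalog
  template:
    metadata:
      labels:
        app: catalog
    spec:
      containers:
        - name: catalog
          image: registry.example.com/catalog:2.0.0   # the new version being rolled out (placeholder)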

Containerization

Containerization is a virtualization technique applied at the operating system level; it ensures that an application launches using only the minimum resources it requires.

Using virtualization at the operating system level, a single OS instance is dynamically partitioned into one or more isolated containers, each with a unique writeable file system and resource quota.

The low overhead of creating and deleting containers and the high packing density in a single VM make containers an ideal computational tool for deploying individual microservices.
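As a rough illustration of that per-container isolation and resource quota, a Kubernetes pod spec can reserve and cap resources for each container. The values below are arbitrary examples, not recommendations, and the names are placeholders.

apiVersion: v1
kind: Pod
metadata:
  name: report-worker           # illustrative name
spec:
  containers:
    - name: worker
      image: registry.example.com/report-worker:0.9   # placeholder image
      resources:
        requests:
          cpu: "250m"           # guaranteed share of a CPU core
          memory: "128Mi"       # guaranteed memory
        limits:
          cpu: "500m"           # hard ceiling enforced by OS-level isolation
          memory: "256Mi"       # the container is killed if it exceeds this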

Benefits of Cloud Native Architecture

Cloud native applications are built and deployed quickly by small teams of experts on platforms that provide easy scalability and hardware decoupling. This approach provides organizations greater flexibility, resiliency, and portability in cloud environments.

Strong Competitive Advantage

Cloud-based development is a transition to a new competitive environment with many convenient tools, no capital investment, and the ability to manage resources in minutes. Companies that can quickly create and deliver software to meet customer needs are more successful in the software age.

Increased Resilience

Cloud native development allows you to focus on resilience tools. The rapidly evolving cloud landscape helps developers and architects design systems that remain responsive even when parts of the environment fail.

Improved Flexibility

Cloud systems allow you to quickly and efficiently manage the resources required to develop applications. Implementing a hybrid or multi-cloud environment will enable developers to use different infrastructures to meet business needs.

Streamlined Automation and Transformation

The automation of IT management inside the enterprise is a springboard for the effective transformation of other departments and teams.

In addition, it reduces the risk of disruption due to human error, as employees focus on overseeing routine tasks rather than performing them directly.

Automated real-time patches and updates across all stack levels eliminate downtime and the need for operational experts with “manual management” expertise.

Comparison: Cloud Native Architecture vs. Legacy Architecture

The capabilities of the cloud allow both traditional monolithic applications and data operations to be transferred to it. However, many enterprises prefer to invest in a cloud native architecture from the start. Here is why:

Separation of Computation and Data Storage Improves Scalability

Data center servers are usually connected to direct-attached storage (DAS), which an enterprise can use to store temporary files, images, documents, and other data.

Relying on this model is risky because an enterprise's processing needs can rise and fall very differently from its storage needs. The cloud enables object storage such as AWS S3 or ADLS, which can be purchased, optimized, and managed separately from compute requirements.

This way, you can easily add thousands of new users or expand the app’s functionality.

Cloud Object Storage Gives Better Adaptability

Cloud providers are under competitive pressure to improve and innovate in their storage services. Application architects who monitor closely and quickly adapt to these innovations will have an edge over competitors who have taken a wait-and-see attitude.

Alongside proprietary solutions, there are also many open source cloud computing projects like Rancher.

This container management platform provides users with a complete software stack that facilitates Kubernetes cluster management in a private or public cloud.

Cloud Native Architecture is More Reliable

The obvious advantage for companies that have adopted a cloud native approach is the focus on agility, automation, and simplification.

For complex IT or business functions, survival depends on how well their services are engineered. You also need error protection that improves user productivity: increased levels of automation, built-in predictive intelligence, and machine learning help keep your environment running optimally.

Cloud Native Architecture Makes Inter-Cloud Migration Easy

Every cloud provider has its cloud services (e.g., data warehousing, ETL, messaging) and provides a rich set of ready-made open source tools such as Spark, Kafka, MySQL, and many others.

While it sounds bold to say that using open source solutions makes it easy to move from one cloud to another, if cloud providers offer migration options, you won’t have to rewrite a significant part of the existing functionality.

Moreover, many IT architects see the future in the multi-cloud model, as many companies already deal with two or more cloud providers.

If your organization can skillfully use cloud services from different vendors, then the ability to determine the advantage of one cloud over another is good groundwork for the future justification of your decision.

Conclusion

Cloud native application architecture provides many benefits. This approach automates and integrates the concepts of continuous delivery, microservices, and containers for enhanced quality and streamlined delivery.

Applications that are built as cloud native can offer virtually unlimited computing power on demand. That’s why more and more developers today are choosing to build and run their applications as cloud native.

Want to make sure you don’t miss any of the action? Join the SUSE & Rancher Community to get updates on new content coming your way!

Refactoring Isn’t the Same for All    

Tuesday, 9 November, 2021

Cloud Native: it's been an industry buzzword for a few years now. It holds different meanings for different people, depending on the context. While we have overused this word, it does have a place when it comes to modernizing applications.

To set the context here, we are talking about apps you would build in the cloud rather than for it. This means these apps, if modernized, would run in a cloud platform. In this post, we will discuss how "refactoring," as Gartner puts it, isn't the same for every app.

When we look at legacy applications sitting in data centers across the globe, some are traditional mainframes; others are commercial off-the-shelf software (COTS). We care about the business-critical apps we can leverage for the cloud. Some of these are COTS, and many of these applications are custom.

When it comes to COTS, companies should rely on the vendor to modernize it for a cloud platform. This is the vendor's role, and there is little business value in a company doing it for them.

Gartner came up with the five R's: Rehost, Refactor, Revise, Rebuild and Replace. But refactoring shouldn't be the same for every app because not all apps are the same. Some are mission-critical; most of your company's revenue is made with those apps. Others are used once a month to make accounting's life easier. Both might need to be refactored, but not to the same level. When you refactor, you change the structure, architecture, and business logic, all to leverage core concepts and features of the cloud. This is why we break refactoring down into a scale of cloud native.

Custom apps are perfect candidates for modernization. With every custom app, modernization brings risks and rewards. Most systems depend on other technologies like libraries, subsystems, and even frameworks. Some of these dependencies are easy to modernize into a cloud platform, but not all are like this. Some pose considerable challenges that limit how much you can modernize.

If we look at what makes an app cloud native, we first have to acknowledge that this term means something different depending on who you ask; however, most of these concepts are at least somewhat universal. Some of these concepts are:

  • Configuration
  • Disposability
  • Isolation
  • Scalability
  • Logs

Outside of technical limitations, there’s the question of how much an application should be modernized. Do you go all in and rewrite an app to be fully cloud native? Or do you do the bare minimum to get the app to run in the cloud?

We delineate these levels of cloud native as Suitable, Compatible, Durable, and Native. These concepts build upon one another so that an app can be Compatible and, with some refactoring, can go to Durable.

What does all this actually mean? Well, let’s break them down based on a scale:

  • Suitable – First on the scale and the bare minimum you need to get your app running in your cloud platform. This could just be the containerization of the application, or that and a little more.
  • Compatible – Leveraging a few of the core concepts of the cloud. An app that is cloud-compatible leverages things like environmental configs and disposability. This is a step further than Suitable.
  • Durable – At this point, apps should be able to handle a failure in the system and not let it cascade, meaning the app can cope when some underlying services are unavailable. Being Durable also means the app can start up fast and shut down gracefully (a minimal sketch of what this looks like on a container platform follows this list). These apps are well beyond Suitable and Compatible.
  • Native – These apps leverage most, if not all, of the cloud native core concepts. Generally, this is done with brand-new apps being written in the cloud. It might not make sense to modernize an existing app to this level.
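To ground the Durable level, here is a minimal sketch of how health checks and graceful shutdown are commonly expressed for a containerized app on Kubernetes. The endpoint paths, port, and timings are assumptions; the point is only that the platform can probe the app and give it time to stop cleanly.

apiVersion: v1
kind: Pod
metadata:
  name: orders                  # hypothetical app being modernized
spec:
  terminationGracePeriodSeconds: 30   # time allowed for a graceful shutdown
  containers:
    - name: orders
      image: registry.example.com/orders:1.2   # placeholder image
      readinessProbe:           # stop routing traffic while a dependency is unavailable
        httpGet:
          path: /healthz/ready  # assumed endpoint exposed by the app
          port: 8080
        periodSeconds: 5
      livenessProbe:            # restart the container if it stops responding
        httpGet:
          path: /healthz/live   # assumed endpoint
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 10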

This scale isn’t absolute; as such, different organizations may use different scales. A scale is important to ensure you are not over or under-modernizing an app.

When starting any modernization effort, collectively set the scale. This should be done organizationally rather than team-by-team. When it comes to budget and timing, making sure that all teams use the same scale is critical.

Learn more about this in our webinar, App Modernization: When and How Far to Modernize. Watch the replay or register here.

Want to make sure you don’t miss any of the action? Join the SUSE & Rancher Community to get updates on new content coming your way!

Stupid Simple Service Mesh: What, When, Why Part 1

Thursday, 26 August, 2021

Recently, microservices-based applications have become very popular, and with the rise of microservices, the concept of a Service Mesh has become a hot topic. Unfortunately, there are only a few articles about this concept, and most of them are hard to digest.

In this blog, we will try to demystify the concept of Service Mesh using "Stupid Simple" explanations, diagrams, and examples to make this concept more transparent and accessible for everyone. In the first article, we will discuss the basic building blocks of a Service Mesh and implement a sample application to have a practical example of each theoretical concept. In the next articles, based on this sample app, we will touch on more advanced topics, like Service Mesh in Kubernetes, and we will talk about some more advanced Service Mesh implementations like Istio, Linkerd, etc.

To understand the concept of Service Mesh, the first step is to understand what problems it solves and how it solves them.

Software architecture has evolved a lot in a short time, from classical monolithic architecture to microservices. Although many praise microservice architecture as the holy grail of software development, it introduces some serious challenges.

Overview of the sample application

For one, a microservices-based architecture means that we have a distributed system. Every distributed system has challenges such as transparency, security, scalability, troubleshooting, and identifying the root cause of issues. In a monolithic system, we can find the root cause of a failure by tracing. But in a microservice-based system, each service can be written in a different language, so tracing is no trivial task. Another challenge is service-to-service communication. Instead of focusing on business logic, developers must take care of service discovery, handle connection errors, detect latency, and implement retry logic. Applying SOLID principles at the architecture level means that these kinds of network problems should be abstracted away and not mixed with the business logic. This is why we need Service Mesh.

Ingress Controller vs. API Gateway vs. Service Mesh

As I mentioned above, we need to apply SOLID principles on an architectural level. For this, it is important to set the boundaries between Ingress Controller, API Gateway, and Service Mesh and understand each one’s role and responsibility.

On a stupidly oversimplified level, these are the responsibilities of each concept:

  1. Ingress Controller: allows a single IP port to access all services from the cluster, so its main responsibilities are path mapping, routing and simple load balancing, like a reverse proxy
  2. API Gateway: aggregates and abstracts away APIs; other responsibilities are rate limiting, authentication, security, tracing, etc. In a microservices-based application, you need a way to distribute the requests to different services, gather the responses from multiple/all microservices, and then prepare the final response to be sent to the caller. This is what an API Gateway is meant to do. It is responsible for client-to-service communication (north-south traffic).
  3. Service Mesh: responsible for service-to-service communication, east-west traffic. We’ll dig more into the concept of Service Mesh in the next section.

Service Mesh and API Gateway have overlapping functionalities, such as rate limiting, security, service discovery, tracing, etc. but they work on different levels and solve different problems. Service Mesh is responsible for the flow of requests between services. API Gateway is responsible for the flow of requests between the client and the services, aggregating multiple services and creating and sending the final response to the client.

The main responsibility of an API gateway is to accept traffic from outside your network and distribute it internally, while the main responsibility of a service mesh is to route and manage traffic within your network. They are complementary concepts, and a well-defined microservices-based system should combine them to ensure application uptime and resiliency while ensuring that your applications are easily consumable.

What Does a Service Mesh Solve?

As an oversimplified and stupid simple definition, a Service Mesh is an abstraction layer that hides away and separates networking-related logic from business logic. This way, developers can focus only on implementing business logic. We implement this abstraction using a proxy, which sits in front of the service. It takes care of all the network-related problems. This allows the service to focus on what is really important: the business logic. In a microservice-based architecture, we have multiple services, each with a proxy. Together, these proxies are called a Service Mesh.

As best practices suggest, the proxy and the service should be in separate containers, so each container has a single responsibility. In the world of Kubernetes, the container of the proxy is implemented as a sidecar. This means that each service has a sidecar containing the proxy. A single Pod will contain two containers: the service and the sidecar. Another implementation is to use one proxy for multiple pods. In this case, the proxy can be implemented as a DaemonSet. The most common solution is using sidecars. Personally, I prefer sidecars over DaemonSets because they keep the logic of the proxy as simple as possible.
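To picture the sidecar pattern, here is a minimal sketch of a Kubernetes Pod carrying both containers. The image names, port, and ConfigMap are assumptions, and a real Service Mesh would usually inject the sidecar and its configuration automatically rather than have you write this by hand.

apiVersion: v1
kind: Pod
metadata:
  name: service-a
spec:
  containers:
    - name: app                           # the container holding the business logic
      image: registry.example.com/service-a:1.0   # placeholder image
      ports:
        - containerPort: 8080
    - name: envoy-sidecar                 # the proxy handling all networking concerns
      image: envoyproxy/envoy:v1.18.3     # an Envoy release; the version is only illustrative
      args: ["-c", "/etc/envoy/envoy.yaml"]
      volumeMounts:
        - name: envoy-config
          mountPath: /etc/envoy
  volumes:
    - name: envoy-config
      configMap:
        name: service-a-envoy-config      # assumed ConfigMap holding the Envoy configuration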

Multiple Service Mesh solutions exist, including Istio, Linkerd, Consul, Kong, and Cilium. (We will talk about these solutions in a later article.) Let's focus on the basics and understand the concept of Service Mesh, starting with Envoy. This is a high-performance proxy and not a complete framework or solution for Service Meshes (in this tutorial, we will build our own Service Mesh solution). Some of the Service Mesh solutions use Envoy in the background (like Istio), so before starting with these higher-level solutions, it's a good idea to understand the low-level functioning.

Understanding Envoy

Ingress and Egress

Simple definitions:

  • Any traffic sent to the server (service) is called ingress.
  • Any traffic sent from the server (service) is called egress.

The Ingress and the Egress rules should be added to the configuration of the Envoy proxy, so the sidecar will take care of these. This means that any traffic to the service will first go to the Envoy sidecar. Then the Envoy proxy redirects the traffic to the real service. Vice-versa, any traffic from this service will go to the Envoy proxy first, and Envoy resolves the destination service using Service Discovery. By intercepting the inbound and outbound traffic, Envoy can implement service discovery, circuit breaker, rate limiting, etc.

The Structure of an Envoy Proxy Configuration File

Every Envoy configuration file has the following components:

  1. Listeners: where we configure the IP and the Port number that the Envoy proxy listens to
  2. Routes: the received request will be routed to a cluster based on rules. For example, we can have path matching rules and prefix rewrite rules to select the service that should handle a request for a specific path/subdomain. Actually, the route is just another type of filter, which is mandatory. Otherwise, the proxy doesn’t know where to route our request.
  3. Filters: Filters can be chained and are used to enforce different rules, such as rate-limiting, route mutation, manipulation of the requests, etc.
  4. Clusters: act as a manager for a group of logically similar services (the cluster has similar responsibility as a service in Kubernetes; it defines the way a service can be accessed), and acts as a load balancer between the services.
  5. Service/Host: the concrete service that handles and responds to the request

Here is an example of an Envoy configuration file:

---
admin:
  access_log_path: "/tmp/admin_access.log"
  address:
    socket_address:
      address: "127.0.0.1"
      port_value: 9901
static_resources:
  listeners:
    - name: "http_listener"
      address:
        socket_address:
          address: "0.0.0.0"
          port_value: 80
      filter_chains:
        - filters:
            - name: "envoy.http_connection_manager"
              config:
                stat_prefix: "ingress"
                codec_type: "AUTO"
                generate_request_id: true
                route_config:
                  name: "local_route"
                  virtual_hosts:
                    - name: "http-route"
                      domains:
                        - "*"
                      routes:
                        - match:
                            prefix: "/nestjs"
                          route:
                            prefix_rewrite: "/"
                            cluster: "nestjs"
                        - match:
                            prefix: "/nodejs"
                          route:
                            prefix_rewrite: "/"
                            cluster: "nodejs"
                        - match:
                            path: "/"
                          route:
                            cluster: "base"
                http_filters:
                  - name: "envoy.router"
                    config: {}
  clusters:
    - name: "base"
      connect_timeout: "0.25s"
      type: "strict_dns"
      lb_policy: "ROUND_ROBIN"
      hosts:
        - socket_address:
            address: "service_1_envoy"
            port_value: 8786
        - socket_address:
            address: "service_2_envoy"
            port_value: 8789
    - name: "nodejs"
      connect_timeout: "0.25s"
      type: "strict_dns"
      lb_policy: "ROUND_ROBIN"
      hosts:
        - socket_address:
            address: "service_4_envoy"
            port_value: 8792
    - name: "nestjs"
      connect_timeout: "0.25s"
      type: "strict_dns"
      lb_policy: "ROUND_ROBIN"
      hosts:
        - socket_address:
            address: "service_5_envoy"
            port_value: 8793

The configuration file above translates into the following diagram:

This diagram does not include the configuration files for all the services, but it is enough to understand the basics. You can find this code in my Stupid Simple Service Mesh repository.

As you can see, between lines 10-15 we defined the Listener for our Envoy proxy. Because we are working in Docker, the host is 0.0.0.0.

After configuring the listener, between lines 15-52 we define the Filters. For simplicity, we used only the basic filters to match routes and rewrite the target routes. In this case, if the request path starts with "host:port/nodejs," the router will choose the nodejs cluster and the URL will be rewritten to "host:port/" (this way, the request sent to the concrete service won't contain the /nodejs part). The logic is the same for "host:port/nestjs." If the request has no matching prefix, it will be routed to the cluster called base without a prefix rewrite filter.

Between lines 53-89 we defined the clusters. The base cluster will have two services; the chosen load-balancing strategy is round-robin. Other available strategies can be found here. The other two clusters (nodejs and nestjs) are simple, with only a single service.

The complete code for this tutorial can be found in my Stupid Simple Service Mesh git repository.

Conclusion

In this article, we learned about the basic concepts of Service Mesh. In the first part, we understood the responsibilities and differences between the Ingress Controller, API Gateway, and Service Mesh. Then we talked about what Service Mesh is and what problems it solves. In the second part, we introduced Envoy, a performant and popular proxy used to build our Service Mesh example. We learned about the different parts of the Envoy configuration files and created a Service Mesh with five example services and a front-facing edge proxy.

In the next article, we will look at how to use Service Mesh with Kubernetes and will create an example project that can be used as a starting point in any project using microservices.

There is another ongoing “Stupid Simple AI” series. The first two articles can be found here: SVM, Kernel SVM, and KNN in Python.

Thank you for reading this article!

Stupid Simple Open Source

Thursday, 26 August, 2021

Even if we don't realize it, almost all of us have used open source software. When we buy a new Android phone, we read its specs and usually focus on the hardware capabilities: CPU, RAM, camera, etc. But the brains of these devices are their operating systems, which are open source software. The Android operating system powers more than 70 percent of mobile phones, demonstrating the prowess of open source software.

Before the free software movement, early personal computers were hard to maintain and expensive; this wasn't because of the hardware but because of the software. You could be the best programmer in the world, but without collaboration and knowledge sharing, your software would likely have issues: bugs, usability problems, design problems, performance issues, etc. What's more, maintaining such products costs time and money. Before the appearance of open source software, big companies believed they had to protect their intellectual property, so they kept the source code secret. They did not realize that letting people inspect their source code and fix bugs would improve their software. Collaboration leads to great success.

What is Open Source Software?

Simply put, open source software has public source code, which can be seen, inspected, modified, improved or even sold by anyone. In contrast, proprietary (non-open source) software has code that can be seen, modified and maintained only by a limited number of people: a person, a team or an organization.

In both cases, the user must accept the licensing agreements. To use proprietary software, users must promise (typically by signing a license displayed the first time they run it) that they will not do anything with the software that its developers/owners have not explicitly authorized. Examples of proprietary software are the Windows operating system and Microsoft Office.

Users must accept the terms of a license when using open source software, just as they do when using proprietary software, but these terms are very different. Basically, you can do whatever you want as long as you include the original copyright and license notice in any copy of the software/source. Furthermore, these licenses usually state that the original creator cannot be liable for any harm or damage that the open source code may cause. This protects the creator of the open source code. Good examples of open source software are the Linux operating system, the Android operating system, LibreOffice and Kubernetes.

The Beginning of Open Source

Initially, software was developed by companies in-house. The creators controlled this software, with no right for the user to modify it, fix it or even inspect it. This also made collaboration between programmers difficult as knowledge sharing was near impossible.

In 1971, Richard Stallman joined the MIT Artificial Intelligence Lab. He noticed that most MIT developers were joining private corporations, which were not sharing knowledge with the outside world. He realized that this privacy and lack of collaboration would create a bigger gap between users and technical developers. According to Stallman, "software is meant to be free but in terms of accessibility and not price." To fight against privatization, Stallman developed the GNU Project and then founded the Free Software Foundation (FSF). Many developers started using GNU in response to these initiatives, and many even fixed bugs they detected.

Stallman’s initiative was a success. Because he pushed against privatized software, more open source projects followed. The next big steps in open source software were the releases of Mozilla and the Linux operating system. Companies had begun to realize that open source might be the next big thing.

The Rise of Open Source

After the GNU, Mozilla, and Linux open source projects, more developers started to follow the open source movement. As the next big step in the history of open source, David Heinemeier Hansson introduced Ruby on Rails. This web application framework soon became one of the world's most prominent web development tools. Popular platforms like Twitter would go on to use Ruby on Rails to develop their sites. When Sun Microsystems bought MySQL for 1 billion dollars in 2008, it showed that open source could also be a real business, not just a beautiful idea.

Nowadays, big companies like IBM, Microsoft and Google embrace open source. So, why do these big companies give away their closely guarded source code? They realized the power of collaboration and knowledge sharing. They hoped that outside developers would improve the software as they adapted it to their needs. They realized that it is impossible to hire all the great developers of the world, and that many developers out there could contribute positively to their products. It worked. Hundreds of outside contributors collaborated on TensorFlow, one of Google's most successful AI tools. Another success story is Microsoft's open source .NET Core.

Why Would I Work on Open Source Projects?

Just think about it: how many times have open source solutions (libraries, frameworks, etc.) helped you in your daily job? How often did you finish your tasks earlier because you’d found a great open source, free tool that worked for you?

The most important reason to participate in the open source community is to help others and to give something back to the community. Open source has helped us a lot, shaping our world in unprecedented ways. We may not realize it, but many of the products we use today are the result of open source.

In the modern world, collaboration and knowledge sharing are a must. Nowadays, inventions are rarely created by a single individual. Increasingly, they are made through collaboration with people from all around the world. Without the movement of free and open source software, our world would be completely different. We'd live with isolated knowledge and isolated people, lots of small bubble worlds, rather than one big, collaborative and helpful community (think about what you would do without Stack Overflow).

Another reason to participate is to gain real-world experience and technical upskilling. In the open source community, you can find all kinds of challenges that aren’t present in a single company or project. You can also earn recognition through problem-solving and helping developers with similar issues.

Finding Open Source Projects

If you would like to start contributing to the open source community, here are some places where you can find great projects:

CodeTriage: a website where you can find popular open source projects based on your programming language preferences. You'll see popular open source projects like K8s, TensorFlow, Pandas, Scikit-Learn, Elasticsearch, etc.

awesome-for-beginners: a collection of Git repositories with beginner-friendly projects.

Open Source Friday: a movement to encourage people, companies and maintainers to contribute a few hours to open source software every Friday.

For more information about how to start contributing to open source projects, visit the newbie open source Git repository.

Conclusion

In the first part of this article, we briefly introduced open source. We described the main differences between open source and proprietary software and presented a brief history of the open source and free software movement.

In the second part, we presented the benefits of working on open source projects. In the last part, we gave instructions on how to start contributing to the open source community and how to find relevant projects.


Harvester: Intro and Setup    

Tuesday, 17 August, 2021
I mentioned about a month back that I was using Harvester in my home lab. I didn’t go into much detail, so this post will bring some more depth. We will cover what Harvester does, as well as my hardware, installation, setup and how to deploy your first virtual machine. Let’s get started.

What is Harvester?

Harvester is Rancher’s open source answer to a hyperconverged infrastructure platform. Like most things Rancher is involved with, it is built on Kubernetes using tools like KubeVirt and Longhorn. KubeVirt is an exciting project that leverages KVM and libvirt to run virtual machines inside Kubernetes; this allows you to run both containers and VMs in your cluster. It reduces operational overhead and provides consistency. This combination of tried and tested technologies provides an open source solution in this space.

It is also designed to be used with bare metal, making it an excellent option for a home lab.

Hardware

If you check the hardware requirements, you will notice they focus more on business usage. So far, my personal experience says that you want at least a 4 core/8 thread CPU, 16GB of RAM, and a large SSD, preferably an NVMe drive. Anything less resource-wise doesn't leave enough capacity for running many containers or VMs. I will install it on an Intel NUC 8i5BEK, which has an Intel Core i5-8259U, 32GB of RAM, and a 512GB NVMe drive. It can handle running Harvester without any issues. Of course, this is just my experience; yours may differ.

Installation

Harvester ships as an ISO, which you can download on the GitHub Releases page. You can pull it quickly using wget.

$ wget https://releases.rancher.com/harvester/v0.2.0/harvester-amd64.iso

Once you have it downloaded, you will need to create a bootable USB. I typically use Balena Etcher since it is cross-platform and intuitive. Once you have a bootable USB, place it in the machine you want to use and boot the drive. This screen should greet you:

Select “New Cluster”:

Select the drive you want to use.

Enter your hostname, select your network interface, and make sure you use automatic DHCP.

You will then be prompted to enter your cluster token. This can be any phrase you want; I recommend using your password manager to generate one.

Set a password to use, and remember that the default user name is rancher.

The next several options are useful, especially if you want to leverage the SSH keys you use with GitHub. Since this is a home lab, I left the SSH keys, proxy and cloud-init setup blank. In an enterprise environment, this would be really useful. Now you will see the final screen before installation. Verify that everything is configured as you want it before proceeding.

If it all looks great, proceed with the installation. It will take a few minutes to complete; when it does, you will need to reboot.

After the reboot, the system will start up, and you will see a screen showing the URL for Harvester and the system's status. Wait until it reports that Harvester is ready before trying to connect.

Great! It is now reporting that it is up and running, so it’s now time to set up Harvester.

Initial Setup

We can navigate to the URL listed once the OS boots. Mine is https://harvest:30443. It uses a self-signed certificate by default, so you will see a warning in your browser. Just click on "advanced" to proceed, and accept it. Set a password for the default admin account.

Now you should see the dashboard and the health of the system.

I like to disable the default account and add my own account for authentication. Probably not necessary for a home lab, but a good habit to get into. First, you need to navigate to it.

Now log out and back in with your new account. Once that’s finished, we can create our first VM.

Deploying Your First VM

Harvester has native support for qcow2 images and can import them from a URL. Let's grab the URL for the openSUSE Leap 15.3 JeOS image.

https://download.opensuse.org/distribution/leap/15.3/appliances/openSUSE-Leap-15.3-JeOS.x86_64-kvm-and-xen.qcow2

The JeOS image for openSUSE is roughly 225MB, which is a perfect size for downloading and creating VMs quickly. Let’s make the image in Harvester.

Create a new image, and add the URL above as the image URL.

You should now see it listed.

Now we can create a VM using that image. Navigate to the VM screen.

Once we’ve made our way to the VM screen, we’ll create a new VM.

When that is complete, the VM will show up in the list. Wait until it has been started, then you can start using it.

Wrapping Up

In this article, I wanted to show you how to set up VMs with Harvester, even starting from scratch! There are plenty of features to explore and plenty more on the roadmap. This project is still early in its life, so now is a great time to jump in and get involved with its direction.

Octopod Episode 1: What is an Open Source Community?

Sunday, 1 August, 2021

In Episode 1 of the OCTOpod, Alan Clark talks with Thierry Carrez about open source communities: what they are, how they work and how you can get involved.

Trying to define what an open source community is might sound like a simple task, but it is a layered, nuanced collective with many moving parts. Thierry has been in the open source community for years and is currently the VP of engineering at the Open Infrastructure Foundation. In this episode, Thierry sheds light on some of the key traits that characterize open source communities. We hear about the importance of governance, principles, scope and documentation and find out how everyone, even those who do not code, can contribute. As Thierry notes, it is not about your technical ability, but rather about adding value where you can and being an engaged member of a community. Building a sustainable community requires effort, but that transparency and collaboration make it a worthwhile endeavor.

“It’s really not about code, it’s really not about being a technical rock star. It is really more about being useful to others.”

Listen to the OCTOpod here or subscribe on your favorite podcast platform! And please share it with your friends!

Image 01

Here’s the full transcript:

EPISODE 01

[INTRODUCTION]

AC: I am Alan Clark. I have spent my career in enterprise software with the focus on open source advocacy and emerging tech. These days, I’m a member of the SUSE Office of the CTO, that is OCTO for short. Welcome to our new podcast series, The OCTOPod.

Season one is all about open source. I love being part of open source communities. I've contributed in many ways, from code to chairs, from networking to cloud. This includes serving as chairman of the board for the Open Infrastructure Foundation, on the Linux Foundation board of directors, as openSUSE chair, on the Open Mainframe Project board and many more. I've met so many great people along the way.

In season one, I’ll sit down with a few of these experts. We’ll talk about the latest trends and challenges in open source, including findings from our latest report on why IT leaders choose open. We’ll talk about how to manage a community, the importance of diversity and inclusion in open source and much more.

Join me on your favorite podcast platform or at community.suse.com.

[INTERVIEW]

AC: Hello everyone, welcome to the OCTOPod. Today, I’m excited to sit down with Thierry Carrez, someone that I’ve known in the open source community for many years. We’ve worked together for a long time. He is currently the VP of engineering at the Open Infrastructure Foundation.

Thanks for being here today, Thierry. We want to get started here with some questions and we want to talk a little bit about just the basics of open source and open source communities, how they get started, what they’re like and so forth. Just to get people a flavor of how they kind of operate. Let’s start with the real basic question. What exactly is an open source community and what is it not?

TC: Thank you Alan, it's great to be here. It sounds like a basic question but it's actually a complex question. An open source community, at the very bottom, is all the people who contribute to an open source project, but obviously, that just kicks the can down the road and now the question is, what is a contribution?

In the traditional sense, a contribution to an open source project would be code and code patches, but that quickly extended to non-code activities like documentation, user experience studies or working on the continuous integration for the project. That's a result of using the same tooling for tracking everything, not just code but also documentation, other types of documents and infrastructure as code. But sharing your experience is also a form of contribution.

In the end, the community extends to all the users who publicly engage and share their experience and so in the end, the community is all the people who actively engage with the open source project and help it.

Obviously, that definition works well for openly developed projects where anyone can engage with the project. It works less well for single-vendor open source, where the makers are more separated from the consumers; in that case, they call "community" something more like their extended circle of users and conference attendees, so it's not exactly the same meaning as what we call community in a more openly developed project.

AC: That's a good point. I want to come back to that one because I think that's a very good point and want to delve into that a little bit, but let's start at a different angle a little bit here. What is it that you see that brings people to participate in these communities? As you mentioned, there's a lot of different types of contributions, which means we have a lot of different types of backgrounds and experience and interest. What is it that brings people to come and participate in a community?

TC: I would say there are two categories of things. There is the more classic altruistic motivation, like giving back to the project that you're using or helping cultivate the commons for a resource that you are benefiting from. But more and more, we are seeing business sense in the form of shared innovation: multiple organizations putting resources together so that they don't waste energy reinventing the wheel separately. That's what we saw with the OpenStack project.

A number of organizations came together because working on the same body of code, on software in common, was better than working on it separately. For any type of complex technology, if you can join a group of experts having the same kind of issues, you learn a lot from it, so it makes complete business sense to engage with the community when you're tackling a complex problem. We see it, for example, with the large scale [SIG 0:05:21.2] within OpenStack, where several operators of large-scale clouds get together to share their experiences. Obviously, the project benefits from it because we learn from their experience, but they also learn from one another and they see benefit in sharing their experience in that group.

It's really a complex set of motivations, but at the bottom, it's either altruistic, based on your usage and wanting to give back, or it makes business sense, which is much more sustainable by the way because then it's a win-win. If everyone wins, it lasts: the project benefits from having those organizations involved, and those organizations see the value of contributing.

AC: Yeah, that makes sense, right? I have to reminisce here a little bit. I remember one of the first times I met you, I walked into, I think it was a Nova project meeting, right? This was years ago and it was a planning meeting, planning for the next six months, that kind of notion.

I was just overwhelmed with the number of people that were in the room at the time. I wouldn’t even dare count but there had to be hundreds of people in that room interested in wanting to participate and contribute to that project.

I remember sitting there; to that point, I had worked in open source for a long, long time, but I had never worked on a project with that many people involved. I was extremely impressed with how you handled the group, how you were able to hear all the voices in the room and enable people to contribute and participate. And this is the interesting part that I wanted to ask you about.

How does an open source community work, particularly when you have a large group of people who want to participate? What are the rules, how do you set the rules of engagement and so forth that enable these people to participate, to feel like they can participate and contribute? And yet, when you have a very large group like that, how do you get anything done?

TC: It’s a complex question.

AC: I know it’s a very complex question, I apologize. I might have to break that one down but I was just so impressed because work happened, right? I was totally impressed with how much work was able to get done and how much people – even new people were able to come in and participate in the project.

TC: You have to balance a number of structural elements and allow for a lot of flexibility. Essentially, you have to provide a structure in which people are able to share, and at the same time make it very welcoming so that people feel like they can engage, while still having a lot of flexibility in terms of the topics that are being discussed or the next steps.

The way we’ve been doing it is in our design summits, which is the event that you mentioned earlier. The idea of those design summits was to have anyone be able to join and inform the future of the software. It was based on the Ubuntu developer summits originally, and then we perfected the idea in OpenStack. We have a theme that is being discussed, so there is first a call for organizing the themes, and then every 40 or 50 minutes we would switch.

During that time, we would openly discuss that topic, with etherpads to take notes and a fishbowl-type setting where the people most involved in the discussion sit in the middle, but at the same time you have extending circles of people depending on how much they want to get involved, and people move in the room and get more involved as the discussion goes.

That provides a structure in which people feel free to communicate and, at the same time, a lot of flexibility as to where the discussion goes. That helps with getting that set up. As you said, it’s probably a problem you have once you reach a certain size. In terms of rules of engagement, or principles, or charters that you have to predefine before you start, I would say you need three things.

The first one is really to define the scope: what is the problem space your project wants to address? Make that very clear from day zero, because without scope you’re really exposed to scope creep, and that lack of focus might ultimately kill your project.

It’s actually one thing we didn’t do well in OpenStack, which is to set a very aggressive scope. Just because we are a community doesn’t mean we should address practically every problem on earth.

The second one is the big principles, the big tenets that you want your community to follow. Write those down so that it’s really clear to whoever joins the community what they’re signing up for. And finally, governance, which describes how decisions are made. Governance is really needed in any social group; the absence of rules is in itself a form of governance called anarchy, and there is the benevolent dictator model where all decisions go up to one person.

You need to define the governance, and you need to do it before any problems arise, because if you wait for the problem to arise to have the rule on how to solve it, then it’s a bit too late.

AC: Too late, isn’t it?

TC: People will discuss forever. The rules can be simple, but in the end, it really needs to be clear where the buck stops, and you need to avoid gray areas. What we’ve seen in OpenStack, at least, and in other projects since, is that usually writing things down in advance avoids the situation that the rule is designed to address.

Sometimes just saying, “Well, this doesn’t get solved at that level, this gets escalated to that level for resolution” forces, in a way, the first group to come to terms and not escalate, because they don’t want to escalate; they don’t want the situation out of their hands. They usually work it out between themselves without needing to call on the upper governance body.

AC: Cool, thank you, that was good. Hey, so, we’re going to run out of time here pretty quick, but I wanted to get this in for this audience: we have a lot of folks who have not participated in a community and aren’t sure how to get started, right? It can be very intimidating.

Just very basically, how can someone get started who perhaps hasn’t been involved with open source in the past, and whose interest may be in some of those contributions you talked about earlier, the things beyond writing code? If someone’s time is somewhat limited, can they still get involved in a community? How would they begin?

TC: We touched on that earlier when we discussed what a community is, but even if you don’t write code or your time is limited, you can definitely participate in and be part of a community. Just joining the conference, participating in the discussions, giving a presentation: those are all contributions that are extremely worthwhile, because otherwise you end up with the same speakers at every conference, those who are comfortable speaking.

It is really good that people feel empowered to do that. The same goes for documentation: people who use the project probably noticed issues with the documentation when they first tried to run it, so working on documentation is really an easy way to get involved. And sharing your experience, like I said. We had this example recently with interns in the Outreachy program in OpenStack, where we pair them with a mentor, an experienced developer, and they work together on some specific project.

One thing that Outreachy intern did was document her full onboarding experience in blog posts, but also on TikTok and other social media, and it was extremely useful for us to hear how difficult or how easy it is to pass some of the hurdles that we throw at our newest contributors. Even doing a quick write-up of how you handled those first steps of contribution is extremely valuable to a project. There shouldn’t be extremely high expectations; the bar is not high. Even for the simplest contribution, just hearing it from a diversity of perspectives is really useful.

AC: Okay, that’s cool. One last topic before we have to go here, and this one may be too deep, we might have to save it for another day, but I thought it would be interesting because I know you’ve joined or started projects from the beginning, right? If I have something that I think would be very interesting to start an open source community around, or to start a new project in a community, is that something somebody should be able to do today? Any advice on how someone would start a new project?

TC: Yeah, sure.

AC: Like I said, that is a big question, isn’t it?

TC: Yeah, that’s a topic for a whole new episode, but I’ll try to make it quick. In terms of creation, I would say today it’s really easy to set up an open source project compared to even ten years ago. It’s really easy to set up shop: you just pick a forge like GitHub or GitLab or OpenDev, which we are using for OpenStack. So it is easy to do it; whether you should do it or not is another good topic.

AC: A whole question, isn’t it?

TC: Yeah, I guess the key question is whether several people or several organizations have the same issue and would benefit from sharing the solution. Ultimately, for me, the interest in having an open source project is to avoid the waste of having several parties develop the same thing, proprietary, on their side, when they could collaborate and avoid wasting that energy by doing it as a collaborative project in open source.

Which is actually why I’m so motivated by openly developed open source, because I don’t really see the point of open source that is owned by a single body; then you don’t really have that collaboration that reduces waste. It is just one way to do proprietary software where you publish the code and get some free labor on the side. Ultimately, for me, what matters is whether multiple people have the same problem. Then yes, there is potential for an open source project, and setting it up is not the most difficult part. It used to be, but it is not the most difficult part today.

AC: That’s true. That is a good point, so thank you for that, I like your response. All right, Thierry, we wanted to circle back a little bit to how community works and what I’ll call the levels of openness, because some communities are much more directed than others. And you know, as we’ve worked together over these several years, I really like the notion of what we call the Four Opens.

Could you talk to us a little bit about that, and about how it opens up a community and enables a lot of communication and, I think, a lot more contributions? Give us a little bit of flavor on what we call the Four Opens.

TC: Sure. Like we previously talked about, we mentioned rules of engagement, and I said that we need to define scope, principles, and governance. The Four Opens would be an example of principles; those are the principles the OpenStack community was built on. The first is open source, because back then there wasn’t any openly developed open source project, one that was not open core, doing cloud software.

It was a way of saying that we will do open source, not open core. There won’t be a proprietary edition of the product; we will not keep back some features to sell in a proprietary edition; everything should be open source. The second is open development, which seems really obvious now, because every open source project on earth is on some open Git forge somewhere where you can see what is happening, but 11 or 12 years ago, when we started OpenStack, there wasn’t anything like it.

Open development is about being able to see what’s happening in development, transparently: all the patches, all the reviews, all of the issues, all of the discussions. Everything should be accessible and transparent, without needing people to register or do anything to see it happening. So, transparency in development.

The third one, which we touched on when we discussed design summits, is open design: the fact that design is not done behind closed doors by an elite group of core developers. The design is discussed in the open, engaging with the users during those open events that we throw, and that model was replicated in other successful projects, like Kubernetes, for example.

Finally, open community. Open community is the idea that anyone can join the community and anyone can become a leader in that community. There is no pay-to-play, and there is no requirement that the technical leaders of the project come from one of the major sponsors; it is completely disconnected. Technical governance is completely disconnected from any other foundation governance or anything.

It is really one contributor, one vote, and you end up with elections where the most respected contributors get elected to the leadership bodies of the project. With those Four Opens, you have a very sustainable community, because you really empower your community to participate. There is no place they can’t see, there is no feature they can’t use, there is no discussion they can’t participate in, and there is no level of leadership they can’t attain. I feel like it’s been instrumental in the success of OpenStack.

It has also been instrumental in the success of other communities that have adopted them, if not to the letter, then in spirit, and so I feel like it is a good model.

AC: I’m glad you pointed that out. Back when we first stated those Four Opens, you’re right, they seemed almost revolutionary in some sense, and they have become widely adopted in many of the communities I have participated in over these last years. To me, that just says they work, and that’s why I really like them, so thank you for elaborating on those. I want to go to that fourth one, open community, where things like the technical boards and so forth are elected, not just appointed.

That kind of implies that things are based on reputation, right? Your merits are earned in a community. So, any advice, any dos and don’ts, on how a person can build a reputation in an open source community, particularly in that open community model?

TC: Yes, you are right that if you have open elections, it can turn into a popularity contest really quickly, and then reputation is important. People think that you need to be a technical rock star to get to the level of reputation that will let you be elected as a leader for a project, but that is actually not really true. Of the things you can do, making yourself useful to others is really the key. Do the things that nobody else does.

Everyone is grateful when you cover a blind spot that nobody else is covering. You become really well known across the whole community, including in extremely large communities, by doing those things that nobody else is doing. Then you can leverage that reputation to get elected to leadership positions, like I said, or you can influence decisions, because nobody wants to piss off the person who actually does the things they don’t want to do.

That’s actually how I started in most of the communities I got involved in. When I joined Gentoo in 2000, I ended up documenting security because it wasn’t documented the way I wanted it to be. Clearly, documenting the security processes was not high on everyone’s list, and by doing that, I earned a good reputation. I ended up leading the security team there, and I ended up elected to the Gentoo board of directors. It is really a theme, and in OpenStack I basically did the same.

I started with release management, which was again a non-development task, along with documenting security processes, and I ended up being elected to the technical committee for four years and ended up as a leader for that community by starting with non-code contributions. It’s really not about code, it’s really not about being a technical rock star. It is really about being useful to others.

AC: That’s great.

TC: In terms of don’ts, things you should not do: I would say you shouldn’t assume malicious intent, because in 99% of cases in those communities, people are trying to do good, and what is seen as potential malicious intent actually breaks down to communication problems in the end, 99% of the time. It is really key to not jump to conclusions and to give people a chance to voice their side of the story, rather than react in haste and make for a not very welcoming community as a result.

AC: Well, thank you Thierry. This has been very, very interesting and very educational. I have learned some stuff as well and it reminded me a lot of good stuff. Thank you very much for helping us out today and joining us in this podcast. I very much appreciate it.

TC: Well, thanks Alan for the invitation.

AC: Thierry, this has been great.

[END OF INTERVIEW]

AC: For more information, check out community.suse.com and make sure to subscribe to the OCTOpod on your favorite podcast platform.

[END]

Kubernetes for the Edge: Key Developments & Implementations

Dienstag, 11 Mai, 2021

Kubernetes is the key component in data centers that are modernizing and adopting cloud native development architecture to deliver applications using containers. Capabilities like orchestrating VMs and containers together make Kubernetes the go-to platform for adopters of modern application infrastructure. Telecom operators also use Kubernetes to orchestrate their applications in distributed environments involving many edge nodes.

But due to the large scale of telco networks, which include disparate cloud systems, Kubernetes adoption requires different architectures for different use cases. Specifically, if we look at use cases where Kubernetes orchestrates edge workloads, there are various frameworks and public cloud-managed Kubernetes solutions available that offer different benefits and give telecom operators the choice to select the best fit. At the recent Kubernetes on Edge Day sessions at KubeCon Europe 2021, many new use cases of Kubernetes for the edge were discussed, along with showcases of cross-platform integration that may help enterprises adopting 5G edge, and telecom operators, scale to a high level.

Here is a high-level overview of some of the key sessions.

The Edge concept

Different concepts of edge have been discussed so far by different communities and technology solution experts. But when Kubernetes comes into the infrastructure, IT operators need to clearly understand the key pillars on which a Kubernetes deployment can deliver low-latency performance in telco or private 5G use cases. First, there should be a strong implementation of Kubernetes management at scale. Second, operators need to choose a lightweight Kubernetes distribution for the edge, preferably one certified by the CNCF. And third, a lightweight OS should be deployed on every node, from the cloud to the far edge.
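
For the first pillar, managing a large fleet of small clusters and nodes, even a basic inventory helps. Below is a minimal sketch (not from the sessions; it assumes a reachable kubeconfig and the official Kubernetes Python client) that lists each node’s kubelet version and allocatable resources:

    # Minimal sketch: inventory the nodes of an edge fleet with the official
    # Kubernetes Python client. Assumes a reachable kubeconfig; everything
    # else here is generic and not tied to any specific distribution.
    from kubernetes import client, config

    def inventory_nodes():
        config.load_kube_config()  # or config.load_incluster_config() inside a pod
        v1 = client.CoreV1Api()
        for node in v1.list_node().items:
            info = node.status.node_info
            alloc = node.status.allocatable
            print(f"{node.metadata.name}: kubelet={info.kubelet_version}, "
                  f"cpu={alloc['cpu']}, memory={alloc['memory']}")

    if __name__ == "__main__":
        inventory_nodes()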

Microsoft’s Akri Project: Microsoft’s Akri project is an innovation that will surely make its way into multiple Kubernetes-based edge implementations. It discovers and monitors far-edge, brownfield leaf devices that do not have their own compute and therefore cannot join a Kubernetes cluster themselves. Akri exposes these devices to the Kubernetes cluster as resources.
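
As a rough illustration of what that exposure looks like from the cluster side, the hedged sketch below lists the device instances Akri has discovered, using the Kubernetes Python client. The API group, version and plural name (“akri.sh”, “v0”, “instances”) are assumptions that may differ between Akri releases.

    # Hedged sketch: list the devices Akri has discovered and exposed to the
    # cluster as custom resources. Group/version/plural are assumptions and
    # may vary between Akri releases.
    from kubernetes import client, config

    config.load_kube_config()
    custom = client.CustomObjectsApi()

    instances = custom.list_cluster_custom_object(
        group="akri.sh", version="v0", plural="instances"
    )
    for item in instances.get("items", []):
        name = item["metadata"]["name"]
        nodes = item.get("spec", {}).get("nodes", [])
        print(f"{name}: reachable from nodes {nodes}")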

AI/ML with TensorFlow: TensorFlow is a machine learning platform that takes inputs and generates insights. It can be deployed in the cloud, on-premises, or on edge nodes where ML operations need to be performed. One session showed that Kubernetes clusters deployed in the cloud and at the edge can host an analytics tool set (Prometheus, EnMasse/MQTT, Apache Camel, AlertManager, Jupyter, etc.) to process ML requests with low latency.
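
To make the edge-inference part concrete, here is a minimal, hypothetical sketch of running a prediction locally on an edge node with TensorFlow. The model path, the input shape, and the assumption that the model was saved with Keras are all placeholders for illustration.

    # Minimal sketch: local inference with TensorFlow on an edge node.
    # "/models/anomaly-detector" and the (1, 16) input shape are hypothetical;
    # assumes the model was saved with Keras (model.save()).
    import numpy as np
    import tensorflow as tf

    model = tf.keras.models.load_model("/models/anomaly-detector")

    readings = np.random.rand(1, 16).astype("float32")  # one batch of sensor data
    scores = model.predict(readings)
    print("anomaly scores:", scores)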

Architectures for Kubernetes on the edge: When deploying Kubernetes for the edge, the architecture choices vary per use case, and each architecture poses new challenges. The bottom line is that there is no one-size-fits-all solution: workloads have different requirements, and IT teams focus on connecting network nodes, so the overall architecture needs to evolve to support both centralized and distributed control planes.

Robotics: Kubernetes has also been implemented in robotics. Sony engineers showcased how K8s cluster systems can be used for the distributed system integration of robots that perform specific tasks collaboratively.

Laser-based Manufacturing: Another interesting use case, discussed by Moritz Kröger, a researcher at the RWTH Chair for Lasertechnology, leverages a Kubernetes-based distributed system. Kubernetes features like automated configuration management and the flexibility to move workloads between clusters give operational benefits to laser manufacturing machines.

OpenYurt + EdgeX Foundry: OpenYurt is yet another open source framework that extends the orchestration features of upstream Kubernetes to the edge. The session showcased how it can integrate with EdgeX Foundry in 5G IoT edge use cases, where EdgeX Foundry manages the IoT devices and OpenYurt handles the server environments through its plugin set.

Using GitOps: Kubernetes supports declarative, cloud native application orchestration. Applying the GitOps approach makes it possible to achieve zero-touch provisioning at many edge sites from a central data center: the desired state lives in Git, and each edge cluster pulls and applies it, as in the sketch below.
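
Dedicated tools such as Flux or Argo CD implement this loop properly; the following is only a conceptual sketch of the pull-based idea, with a hypothetical repository layout.

    # Conceptual sketch of a pull-based GitOps loop (what Flux or Argo CD do
    # for real). The local clone path and repo layout are hypothetical.
    import subprocess
    import time

    REPO_DIR = "/opt/edge-config"                       # local clone of the config repo
    MANIFEST_DIR = f"{REPO_DIR}/clusters/edge-site-01"  # manifests for this edge site

    def reconcile():
        subprocess.run(["git", "-C", REPO_DIR, "pull", "--ff-only"], check=True)
        subprocess.run(["kubectl", "apply", "--recursive", "-f", MANIFEST_DIR], check=True)

    if __name__ == "__main__":
        while True:
            reconcile()
            time.sleep(300)  # re-sync every five minutes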

Hong Kong-Zhuhai-Macao Bridge: Another use case discussed was Kubernetes implemented in the edge infrastructure that manages the applications controlling the sensors on the Hong Kong-Zhuhai-Macao Bridge. The use case is unique in that it defines the sensor devices on the bridge as custom resources (CRDs) in Kubernetes, associates each device with CI/CD, and manages and operates the applications deployed on the edge nodes.
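
The pattern of modelling physical devices as custom resources can be sketched as below. The group, version and resource names (“sensors.example.com”, “v1”, “sensordevices”) are purely hypothetical stand-ins, not the bridge project’s actual CRDs.

    # Hedged sketch: read hypothetical per-sensor custom resources so that
    # controllers or CI/CD pipelines can act on them. All CRD names here are
    # illustrative, not taken from the actual bridge deployment.
    from kubernetes import client, config

    config.load_kube_config()
    custom = client.CustomObjectsApi()

    devices = custom.list_cluster_custom_object(
        group="sensors.example.com", version="v1", plural="sensordevices"
    )
    for dev in devices.get("items", []):
        spec = dev.get("spec", {})
        print(dev["metadata"]["name"], spec.get("location"), spec.get("firmware"))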

Node Feature Discovery: Many end devices can sit behind thousands of edge nodes connected to data centers. Similar in spirit to the Akri project, the Node Feature Discovery (NFD) add-on detects the hardware features available on each node and publishes them to the Kubernetes cluster as node labels, so workloads can be scheduled onto the edge servers and cloud systems that provide the features they need.
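
The labels NFD publishes under the feature.node.kubernetes.io/ prefix can then be queried directly; in the minimal sketch below, the exact label key (CPU AVX2 support) is just one illustrative capability.

    # Minimal sketch: find nodes that NFD has labelled with a given capability.
    # The label key shown (CPU AVX2 support) is only an example.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    selector = "feature.node.kubernetes.io/cpu-cpuid.AVX2=true"
    for node in v1.list_node(label_selector=selector).items:
        print(node.metadata.name)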

Kuiper and KubeEdge: EMQ’s Kuiper is open source data analytics and streaming software that runs on edge devices with low resource requirements. It can integrate with KubeEdge, giving a combined solution that pairs KubeEdge’s application orchestration capabilities with streaming analytics at the edge. The combination delivers low latency, saves bandwidth costs, makes business logic easier to implement, and lets operators manage and deploy Kuiper applications from the cloud.

What Comes After Kubernetes?

Freitag, 23 April, 2021

You probably can’t believe I’m asking that question. It’s like showing up to a party and immediately asking about the afterparty. Is it really time to look for the exit?

No…but yes.

We used to deploy apps on systems in data centers. Then we moved the systems to the cloud. Then we moved the apps to containers. Then we wrapped it all in Kubernetes for orchestration, and here we are.

  • Have we arrived at PEAK IT?
  • Where do we go from here?

Each advance in technology unlocks doors we couldn’t reach before. As we move from room to room, we’re shifting gears, turning our momentum into energy to go faster and further.

Moving faster requires that we pay more attention to the road ahead, and it’s hard to do that while building the vehicle to take us there and building the road itself.

Whether you’re a business working on products for tomorrow’s world, or an individual who wants to know what skills will advance your career, you’re actually seeking leverage. Leverage gives you an edge over your competitors, and in today’s world, everyone is your competitor.

SUSECON Is Your Map to the Future

SUSECON, from May 18-20, 2021, is the first SUSE event that includes the people and products from Rancher. It packs the content of three events into a single digital platform with three worlds: LinuxWorld, KubeWorld and EdgeWorld.

Each world focuses on the solutions and strategies that its inhabitants care most about:

  • How does Kubernetes enable the next frontier of computing? (This information will shape your business decisions and career choices).
  • What are businesses doing to position themselves as trailblazers in the new frontier, and how can you follow in their footsteps?
  • What is adaptable Linux, and how can it drive digital transformation?

Within each world are keynotes delivered by SUSE leadership and customers from both SUSE and Rancher. Dozens of sessions range from introductory-level tutorials to advanced use cases for specific niche applications across Linux, Kubernetes, and Edge.

Every session was hand-picked to meet the needs of our diverse audience, from beginner to advanced, across topics that include:

  • AI/ML
  • Infrastructure and Operations
  • DevOps
  • Edge and IoT
  • Kubernetes
  • Linux
  • Open Source
  • Business Strategy
  • and more…

If you have questions, SUSECON is where you will find strategic answers.

Open Source Matters

Rancher and SUSE are both innovation leaders, and the combined company is a creative powerhouse. In just a few short months, developers have created solutions for real issues that everyone in the industry faces. These are core issues that slow developer and operations teams; solving them lets the entire organization move faster.

  • How can I implement security policies in Kubernetes without increasing complexity or making my clusters harder to manage?
  • What can I do to protect myself from a supply chain attack on an upstream container base image?
  • What are the new features in Rancher 2.6?
  • How can I deploy hyperconverged infrastructure (HCI) without paying crippling license fees?
  • How can I use AI/ML to detect and respond to events before they become outages?
  • How can I help my developers build and deploy apps on Kubernetes without them having to learn everything about it?

At SUSECON, we’ll introduce you to projects that answer those questions, along with others that solve even more problems. These are all open source, built to help you succeed.

Open source is in our DNA. It’s the key to the democratization of opportunity, the single most effective solution to level the playing field and reward businesses for generating value. At SUSECON, you’ll learn just how important this is to us, with insights on:

  • Why is it important to be both open and interoperable?
  • What does the word “open” mean in “open source” (and how do other companies use the term to trick you)?
  • Why is Linux leadership essential to Kubernetes innovation?
  • How is freedom different from choice, and how does one complement the other?

SUSECON Is Your Event

SUSECON is a conference like none other you’ll attend this year. With actionable information in every session, you’ll leave the event with a plan for your future, and you’ll know the steps to take next on your journey.

I’m excited about it. When SUSE acquired Rancher, there were concerns that Rancher users would lose the freedoms they had. We promised you that wouldn’t happen, and SUSECON is our chance to show you the full power of the combined organization. Not only is Rancher still free and open source, but there is also a non-stop torrent of open source software that we’re adding to the portfolio. Any of those projects could change your world as much as Rancher, K3s, RKE and Longhorn did.

Head over to the event site to browse sessions and sign up for free.
