SUSE Linux Enterprise Server: The Underlying Structure of a Transition to SAP S/4HANA

Thursday, 28 October, 2021

For many businesses worldwide, the transition to SAP S/4HANA is in full swing, whether they are making concrete plans or defining timelines for their transition projects. Some have even completed their transition altogether.

 

But there’s another side to their story that is not often discussed: the foundation underlying a sophisticated data model, streamlined processes, and ready-to-consume extensions. In most cases, this underlying layer is built on SUSE Linux Enterprise (SLE), which hosts the majority of SAP HANA databases.

 

The SLE platform has a proven track record of providing exceptional reliability and stability for both on-premise and cloud deployments, and SUSE collaborates directly with all major cloud service providers. For example, SUSE maintains the images used in hyperscaler environments and updates them continuously to put SAP customers in a good position with their deployment projects.

 

Going beyond the operating system

 

SUSE offers a comprehensive solutions portfolio – including the Linux operating system (OS), system management, and container-based workloads – to help organizations deliver enterprise-ready solutions based on innovative open-source projects. We invest in dedicated features for SAP customers, especially high-availability (HA) solutions for SAP S/4HANA, SAP HANA, and the Linux-based SAP NetWeaver technology platform.

 

For SAP customers with SAP HANA at the heart of their ERP system, downtime can be costly and have broad implications for their business operations. In response, SAP and SUSE have joined forces to provide the first open source–based solution for scale-up and scale-out SAP HANA in HA configurations, maximizing uptime and automating a reliable failover.

 

SUSE has also introduced features that simplify OS administration for SAP workloads, such as automatic handling of SAP Notes recommendations and performance improvements (for example, workload memory protection). And we continue to improve and enhance these features as SAP develops innovations for its technology stack.

 

While all these features have been around for years, the velocity, flexibility, and data volumes that modern businesses are expected to handle keep growing. Innovations and new technologies appear every day, especially in cloud services, which have been strong contributors to the digital transformation of SAP applications. SUSE is taking part in this journey by establishing strong relationships with cloud service providers and introducing critical innovations.

 

Into the IT system lifecycle

 

When considering the IT system lifecycle, there are always two phases:

 

  • Day 1: Setup, configuration, and go-live
  • Day 2: Ongoing system maintenance, monitoring, and patching

 

On day one in the cloud, our customers can ramp up quickly, accessing resources and compute on demand. They can order a virtual machine that is accessible within a few minutes and let their admins start installing SAP software.

 

But why not take this capability a step further by automating the deployment of the SAP software itself? SUSE launched a project to achieve this objective about 18 months ago. The goal is to enable our customers to define their SAP software environment and run the deployment without manual intervention, setting up the machines in the cloud along with advanced configurations such as clustering.
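
This automation is published as open source. As a hedged sketch (the repository layout and variables file are assumptions based on SUSE’s public automation project; the deployment variables must be filled in for your own landscape), a run can look like this:

$ git clone https://github.com/SUSE/ha-sap-terraform-deployments.git
$ cd ha-sap-terraform-deployments/azure
$ cp terraform.tfvars.example terraform.tfvars   # define your SAP environment here
$ terraform init
$ terraform apply                                # provisions the VMs, SAP HANA and the HA cluster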

 

SUSE customers also have access to specific knowledge and best-practice documents to configure their HA stack in the best way possible. For example, we provide step-by-step guidance on optimizing the performance of SAP HANA through a scale-up deployment.

 

The deployment automation feature also fully automates the HA stack configuration, helping ensure consistency and a setup that is in line with SUSE’s recommendations. This concept has also been adopted by cloud service providers and system integrators, some of which use SUSE technology in their solutions alongside other open-source frameworks. (Remember, we are an open-source company and make our technology available to the entire technology provider community.)

 

For example, we have joined forces with the Microsoft Azure team to support their deployment of SAP software. We contribute code with a focus on SUSE Linux Enterprise Server and the SUSE HA stack setup. When SAP software is installed and configured, SAP consultants can customize the SAP system to meet the customer’s needs. They define and implement business objects and processes, then move the system into production.

 

For day two operations, SUSE has invested heavily in centralized monitoring capabilities that expose critical data generated by machines hosting SAP software. The information can be consumed by other open-source software, whether a version provided or supported by SUSE or a preexisting third-party monitoring solution. With this capability, projects such as Microsoft Azure’s have gained traction quickly. In fact, the initial development of Microsoft Azure Monitor for SAP solutions relies on SUSE systems as a monitoring exporter.
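
As a hedged illustration on a SLES for SAP system (the package name, service name and port are assumptions based on SUSE’s open-source ha_cluster_exporter project; check your repositories and the project documentation):

$ sudo zypper install prometheus-ha_cluster_exporter
$ sudo systemctl enable --now prometheus-ha_cluster_exporter
$ curl -s http://localhost:9664/metrics | head    # cluster metrics in Prometheus format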

 

Fueling the future of monitoring

 

But innovation around our monitoring capabilities doesn’t end with our collaboration with Microsoft Azure. We continue to deliver additional key features, such as log aggregation and smart alerting. Even more important is the opportunity to merge our monitoring feature with another new initiative: Project Trento.

 

One of our customers’ most significant challenges is complex OS configurations, especially in the HA stack. Over the last few years, the SUSE Engineering Lab and Services teams have been engaged in situations where customers faced outages due to incorrect configurations in their production environment. Project Trento is designed to address this issue step by step.

 

Project Trento was born around the concept of configuration compliance for the SUSE HA stack, with its components based on the customer’s deployment setup and environment. SUSE is turning its knowledge and experience into automated checks executed systematically on SUSE-based systems. If an inconsistency emerges between hosts in a cluster, or a parameter deviates from best practice, the admin is warned about the problem and its severity. The system then provides guidance on correcting the problem and moving the system back into a compliant state. Administrators can therefore have the peace of mind that, for example, a failover will work as expected and not fail due to a mistake – such as a typo in the configuration file – made along the way.

 

Additionally, Project Trento will check the OS configuration, display checks based on SAP Notes, and manage configuration changes in an upcoming version – all accessible through a simple-to-use web interface. Even though Project Trento is currently available as alpha code, our customers can already apply it to their environment and realize considerable value from the first set of capabilities.

 

The initial version of Project Trento is planned for release by the end of 2021, with plans for more platforms, configurations, and features in future releases.

 

Supporting the journey to SAP S/4HANA

 

From our perspective, SUSE’s value is based on our ability to provide the best-possible starting point for businesses transitioning to SAP S/4HANA. And making Linux a part of the effort is the first step toward that experience.

 

With our on-site team in Walldorf, Germany, we also look to provide the best service to our customers by working and collaborating as closely as possible with our SAP colleagues in development, customer support, and product management.

 

If you want to learn more about SUSE, visit http://suse.com/teched or contact us at sapalliance@suse.com.

SUSE Rancher and CrowdStrike

Thursday, 16 September, 2021

SUSE One partner CrowdStrike has an offering live in the SUSE Rancher Apps and Marketplace, and we’ve invited CrowdStrike to author a guest blog so you can learn more about their breach prevention solution. ~Bret

Stop Breaches and Secure Your Applications in the Cloud

Guest Blog Author: Gabriel Alford, Senior Solution Architect, CrowdStrike

How do you ensure your containers and microservices remain secure and compliant? Containers managed by multiple Kubernetes clusters can cause your DevOps and security teams to get overwhelmed with operational and security challenges given the lack of visibility and increased complexity. Poor visibility, fragmented and complex tools, misconfigurations for cloud workloads, and the inability to maintain compliance can easily elevate your risk of a breach. DevOps and security teams need tools that address the operational and security challenges of managing multiple Kubernetes clusters across any infrastructure and provide integrated tools for running containerized workloads seamlessly.

Gain layered security for your Kubernetes clusters to ensure confidence when building and running applications in the cloud with SUSE Rancher and CrowdStrike. SUSE Rancher not only deploys production-grade Kubernetes clusters from datacenter to cloud to the edge, it also unites them with centralized authentication, access control and observability. To ensure you are completely secure, CrowdStrike Falcon® Cloud Workload Protection (CWP) provides comprehensive breach protection for workloads and containers by staying ahead of adversaries, reducing the attack surface and obtaining total real-time visibility of events taking place in your environment. 

CrowdStrike and the Rancher Apps and Marketplace

CrowdStrike Falcon CWP works with SUSE Rancher to automatically protect your Kubernetes Control Plane and Worker nodes, allowing your DevSecOps team to securely build applications in the cloud with confidence. The CrowdStrike Helm Chart, offered in the Rancher Apps and Marketplace, allows you to deploy and manage applications across cloud environments, ensuring multi-cluster consistency with a single deployment. By layering SUSE Rancher and CrowdStrike together, you can save time and effort with in-depth defense against data breaches, optimized for cloud deployments.
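
As a hedged sketch of the deployment from the command line (the repository URL, chart and value names follow CrowdStrike’s public falcon-helm project; the CID and sensor image come from your Falcon console):

$ helm repo add crowdstrike https://crowdstrike.github.io/falcon-helm
$ helm repo update
$ helm install falcon-sensor crowdstrike/falcon-sensor \
    --namespace falcon-system --create-namespace \
    --set falcon.cid="<YOUR-FALCON-CID>" \
    --set node.image.repository="<YOUR-REGISTRY>/falcon-node-sensor"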

So, why use the Falcon Sensor with SUSE Rancher?

  • Unified multi-cluster management: SUSE Rancher unites Kubernetes clusters with centralized authentication and access control, provisioning, version management, visibility and diagnostics, monitoring, alerting and centralized audit.
  • Hybrid and multi-cloud support: Manage on-premises clusters and those hosted on cloud services like AKS, EKS and GKE from a unified view, without impacting performance. 
  • Broad support for container runtime security: Secure applications with the new CrowdStrike Falcon Container sensor that is uniquely designed to run as an unprivileged container in a pod.

Breach Prevention for Cloud Workloads and Containers

CrowdStrike Falcon CWP provides comprehensive breach protection for workloads and containers, enabling you to build, run and secure applications with speed and confidence. CrowdStrike’s experience in operating one of the largest security clouds in the world provides unique insights into adversaries, enabling the delivery of purpose-built CrowdStrike solutions that create less work for DevSecOps teams, so you can defend against data breaches and optimize cloud deployments.

CrowdStrike Falcon CWP helps you:

  • Gain complete visibility across your entire cloud estate in a single platform
  • Prevent attacks and avoid business disruption
  • Eliminate friction and stay secure while building in the cloud
  • Achieve protection for the Kubernetes Control Plane and Worker nodes

Get Started and Secure Your Applications in the Cloud

With SUSE Rancher and CrowdStrike, you can feel confident that your containers and microservices remain secure and compliant with cloud-native, comprehensive breach protection. By layering security for your Kubernetes clusters, building and running applications in the cloud is made simple and secure, without additional operational friction. Get started with CrowdStrike and SUSE Rancher by discovering the CrowdStrike Helm Chart in the Rancher Apps and Marketplace.

Want to learn more about how CrowdStrike and SUSE can help you solve critical security challenges? Visit our website for more information or get in touch with the CrowdStrike team. Otherwise, contact your SUSE Rancher sales representative – we look forward to talking to you!

Gabriel Alford is a Senior Solution Architect in CrowdStrike’s Partner & Alliances organization, where he collaborates with Cloud Service Providers and Cloud ISVs on integrating and certifying CrowdStrike products on partner platforms, as well as creating joint partner technical solutions. He has over 10 years’ experience in security and compliance, and his most recent projects include building CrowdStrike’s Kubernetes Operator, Helm Charts, and GitHub Actions integration.

Stupid Simple Service Mesh: What, When, Why Part 1

Thursday, 26 August, 2021

Recently, microservices-based applications have become very popular, and with the rise of microservices, the concept of the Service Mesh has become a hot topic. Unfortunately, there are only a few articles about this concept, and most of them are hard to digest.

In this blog, we will try to demystify the concept of Service Mesh using “Stupid Simple” explanations, diagrams, and examples to make this concept more transparent and accessible for everyone. In the first article, we will discuss the basic building blocks of a Service Mesh and implement a sample application to have a practical example of each theoretical concept. In the next articles, based on this sample app, we will touch on more advanced topics, like Service Mesh in Kubernetes, and we will talk about some more advanced Service Mesh implementations like Istio, Linkerd, etc.

To understand the concept of Service Mesh, the first step is to understand what problems it solves and how it solves them.

Software architecture has evolved a lot in a short time, from classical monolithic architecture to microservices. Although many praise microservice architecture as the holy grail of software development, it introduces some serious challenges.

Overview of the sample application

For one, a microservices-based architecture means that we have a distributed system. Every distributed system has challenges such as transparency, security, scalability, troubleshooting, and identifying the root cause of issues. In a monolithic system, we can find the root cause of a failure by tracing. But in a microservice-based system, each service can be written in a different language, so tracing is no trivial task. Another challenge is service-to-service communication. Instead of focusing on business logic, developers must take care of service discovery, handle connection errors, detect latency, and implement retry logic. Applying SOLID principles at the architecture level means that these kinds of network problems should be abstracted away and not mixed with the business logic. This is why we need Service Mesh.

Ingress Controller vs. API Gateway vs. Service Mesh

As I mentioned above, we need to apply SOLID principles on an architectural level. For this, it is important to set the boundaries between Ingress Controller, API Gateway, and Service Mesh and understand each one’s role and responsibility.

On a stupidly oversimplified level, these are the responsibilities of each concept:

  1. Ingress Controller: allows access to all services in the cluster through a single IP and port, so its main responsibilities are path mapping, routing and simple load balancing, like a reverse proxy
  2. API Gateway: aggregates and abstracts away APIs; other responsibilities are rate limiting, authentication, security, tracing, etc. In a microservices-based application, you need a way to distribute the requests to different services, gather the responses from multiple/all microservices, and then prepare the final response to be sent to the caller. This is what an API Gateway is meant to do. It is responsible for client-to-service communication, north-south traffic.
  3. Service Mesh: responsible for service-to-service communication, east-west traffic. We’ll dig more into the concept of Service Mesh in the next section.

Service Mesh and API Gateway have overlapping functionalities, such as rate limiting, security, service discovery, tracing, etc. but they work on different levels and solve different problems. Service Mesh is responsible for the flow of requests between services. API Gateway is responsible for the flow of requests between the client and the services, aggregating multiple services and creating and sending the final response to the client.

The main responsibility of an API gateway is to accept traffic from outside your network and distribute it internally, while the main responsibility of a service mesh is to route and manage traffic within your network. They are complementary concepts, and a well-defined microservices-based system should combine them to ensure application uptime and resiliency while ensuring that your applications are easily consumable.

What Does a Service Mesh Solve?

As an oversimplified and stupid simple definition, a Service Mesh is an abstraction layer that hides away and separates networking-related logic from business logic. This way developers can focus only on implementing business logic. We implement this abstraction using a proxy, which sits in front of the service and takes care of all the network-related problems. This allows the service to focus on what is really important: the business logic. In a microservice-based architecture, we have multiple services, each with a proxy. Together, these proxies form the Service Mesh.

As best practices suggest, proxy and service should be in separate containers, so each container has a single responsibility. In the world of Kubernetes, the container of the proxy is implemented as a sidecar. This means that each service has a sidecar containing the proxy. A single Pod will contain two containers: the service and the sidecar. Another implementation is to use one proxy for multiple pods. In this case, the proxy can be implemented as a DaemonSet. The most common solution is using sidecars. Personally, I prefer sidecars over DaemonSets because they keep the logic of the proxy as simple as possible.

Multiple Service Mesh solutions exist, including Istio, Linkerd, Consul, Kong, and Cilium. (We will talk about these solutions in a later article.) Let’s focus on the basics and understand the concept of Service Mesh, starting with Envoy. This is a high-performance proxy and not a complete framework or solution for Service Meshes (in this tutorial, we will build our own Service Mesh solution). Some of the Service Mesh solutions use Envoy in the background (like Istio), so before starting with these higher-level solutions, it’s a good idea to understand the low-level functioning.

Understanding Envoy

Ingress and Egress

Simple definitions:

  • Any traffic sent to the server (service) is called ingress.
  • Any traffic sent from the server (service) is called egress.

The Ingress and the Egress rules should be added to the configuration of the Envoy proxy, so the sidecar will take care of these. This means that any traffic to the service will first go to the Envoy sidecar. Then the Envoy proxy redirects the traffic to the real service. Vice-versa, any traffic from this service will go to the Envoy proxy first, and Envoy resolves the destination service using Service Discovery. By intercepting the inbound and outbound traffic, Envoy can implement service discovery, circuit breaker, rate limiting, etc.

The Structure of an Envoy Proxy Configuration File

Every Envoy configuration file has the following components:

  1. Listeners: where we configure the IP and the Port number that the Envoy proxy listens to
  2. Routes: the received request will be routed to a cluster based on rules. For example, we can have path matching rules and prefix rewrite rules to select the service that should handle a request for a specific path/subdomain. Actually, the route is just another type of filter, which is mandatory. Otherwise, the proxy doesn’t know where to route our request.
  3. Filters: Filters can be chained and are used to enforce different rules, such as rate-limiting, route mutation, manipulation of the requests, etc.
  4. Clusters: act as a manager for a group of logically similar services (the cluster has similar responsibility as a service in Kubernetes; it defines the way a service can be accessed), and acts as a load balancer between the services.
  5. Service/Host: the concrete service that handles and responds to the request

Here is an example of an Envoy configuration file:

---
admin:  
  access_log_path: "/tmp/admin_access.log"  
  address:     
    socket_address: 
      address: "127.0.0.1" 
      port_value: 9901
static_resources:   
  listeners:    
   -       
      name: "http_listener"      
      address:         
        socket_address:           
          address: "0.0.0.0"          
          port_value: 80      
      filter_chains:          
        filters:             
          -               
            name: "envoy.http_connection_manager"              
            config:                
              stat_prefix: "ingress"                
              codec_type: "AUTO"               
              generate_request_id: true                
              route_config:                   
                name: "local_route"                  
                virtual_hosts:                    
                  -                       
                    name: "http-route"                      
                    domains:                         
                      - "*"                      
                    routes:                       
                      -                           
                        match:                             
                          prefix: "/nestjs"                          
                        route:                            
                          prefix_rewrite: "/"                            
                          cluster: "nestjs"                        
                      -
                        match:
                          prefix: "/nodejs"
                        route:
                          prefix_rewrite: "/"
                          cluster: "nodejs"
                      -
                        match:
                          path: "/"
                        route:
                          cluster: "base"
              http_filters:                  
                -                     
                  name: "envoy.router"                    
                  config: {}  

  clusters:    
    -       
      name: "base"      
      connect_timeout: "0.25s"      
      type: "strict_dns"      
      lb_policy: "ROUND_ROBIN"      
      hosts:        
        -           
          socket_address:             
            address: "service_1_envoy"            
            port_value: 8786        
        -           
          socket_address:             
            address: "service_2_envoy"            
            port_value: 8789        
    -      
      name: "nodejs"      
      connect_timeout: "0.25s"      
      type: "strict_dns"      
      lb_policy: "ROUND_ROBIN"      
      hosts:        
        -          
          socket_address:            
            address: "service_4_envoy"            
            port_value: 8792        
    -      
      name: "nestjs"      
      connect_timeout: "0.25s"      
      type: "strict_dns"      
      lb_policy: "ROUND_ROBIN"      
      hosts:        
        -          
          socket_address:            
            address: "service_5_envoy"            
            port_value: 8793

The configuration file above translates into the following diagram:

The diagram does not include the configuration files for all the services, but it is enough to understand the basics. You can find this code in my Stupid Simple Service Mesh repository.

As you can see, between lines 10-15 we defined the Listener for our Envoy proxy. Because we are working in Docker, the host is 0.0.0.0.

After configuring the listener, between lines 15-52 we define the Filters. For simplicity we used only the basic filters to match the routes and rewrite the target routes. In this case, if the path starts with “host:port/nodeJs,” the router will choose the nodejs cluster and the URL will be rewritten to “host:port/” (this way the request for the concrete service won’t contain the /nodeJs part). The logic is the same in the case of “host:port/nestJs”. If we don’t have a subdomain in the request, then the request will be routed to the cluster called base without a prefix rewrite filter.

Between lines 53-89 we defined the clusters. The base cluster will have two services; the chosen load-balancing strategy is round-robin. Other available strategies can be found here. The other two clusters (nodejs and nestjs) are simple, with only a single service.
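
With the stack running, you can verify the routing and rewrite rules from the host (a quick sketch; the published port depends on your docker-compose file):

$ curl http://localhost/nodejs   # prefix /nodejs matched, rewritten to /, answered by the nodejs cluster
$ curl http://localhost/nestjs   # prefix /nestjs matched, rewritten to /, answered by the nestjs cluster
$ curl http://localhost/         # exact path / matched, answered by the base cluster (round-robin)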

The complete code for this tutorial can be found in my Stupid Simple Service Mesh git repository.

Conclusion

In this article, we learned about the basic concepts of Service Mesh. In the first part, we understood the responsibilities and differences between the Ingress Controller, API Gateway, and Service Mesh. Then we talked about what Service Mesh is and what problems it solves. In the second part, we introduced Envoy, a performant and popular proxy used to build our Service Mesh example. We learned about the different parts of the Envoy configuration files and created a Service Mesh with five example services and a front-facing edge proxy.

In the next article, we will look at how to use Service Mesh with Kubernetes and will create an example project that can be used as a starting point in any project using microservices.

There is another ongoing “Stupid Simple AI” series. The first two articles can be found here: SVM, Kernel SVM, and KNN in Python.

Thank you for reading this article!

Stupid Simple Open Source

Thursday, 26 August, 2021

Even if we don’t realize it, almost all of us have used open source software. When we buy a new Android phone, we read its specs and usually focus on the hardware capabilities, like CPU, RAM, camera, etc. But the brains of these devices are their operating systems, which are open source software. The Android operating system powers more than 70 percent of mobile phones, demonstrating the prowess of open source software.

Before the free software movement, early personal computers were hard to maintain and expensive; this wasn’t because of the hardware but the software. You could be the best programmer in the world, but without collaboration and knowledge sharing, your software would likely have issues: bugs, usability problems, design problems, performance issues, etc. What’s more, maintaining such products costs time and money. Before the appearance of open source software, big companies believed they had to protect their intellectual property, so they kept their source code secret. They did not realize that letting people inspect their source code and fix bugs would improve their software. Collaboration leads to great success.

What is Open Source Software?

Simply put, open source software has public source code, which can be seen, inspected, modified, improved or even sold by anyone. In contrast, non-open source, proprietary software has code that can be seen, modified and maintained only by a limited number of people – a person, a team or an organization.

In both cases, the user must accept the licensing agreements. To use proprietary software, users must promise (typically by signing a license displayed the first time they run it) that they will not do anything with the software that its developers/owners have not explicitly authorized. Examples of proprietary software are the Windows operating system and Microsoft Office.

Users must accept the terms of a license when using open source software, just as they do when using proprietary software, but these terms are very different. Basically, you can do whatever you want as long as you include the original copyright and license notice in any copy of the software/source. Furthermore, these licenses usually state that the original creator cannot be liable for any harm or damage that the open source code may cause. This protects the creator of the open source code. Good examples of open source software are the Linux operating system, the Android operating system, LibreOffice and Kubernetes.

The Beginning of Open Source

Initially, software was developed by companies in-house. The creators controlled this software, with no right for the user to modify it, fix it or even inspect it. This also made collaboration between programmers difficult as knowledge sharing was near impossible.

In 1971, Richard Stallman joined the MIT Artificial Intelligence Lab. He noticed that most MIT developers were joining private corporations, which were not sharing knowledge with the outside world. He realized that this privacy and lack of collaboration would create a bigger gap between users and technical developers. According to Stallman, “software is meant to be free but in terms of accessibility and not price.” To fight against privatization, Stallman developed the GNU Project and then founded the Free Software Foundation (FSF). Many developers started using GNU in response to these initiatives, and many even fixed bugs they detected.

Stallman’s initiative was a success. Because he pushed against privatized software, more open source projects followed. The next big steps in open source software were the releases of Mozilla and the Linux operating system. Companies had begun to realize that open source might be the next big thing.

The Rise of Open Source

After the GNU, Mozilla, and Linux open source projects, more developers started to follow the open source movement. As the next big step in the history of open source, David Heinemeier Hansson introduced Ruby on Rails. This web application framework soon became one of the world’s most prominent web development tools. Popular platforms like Twitter would go on to use Ruby on Rails to develop their sites. When Sun Microsystems bought MySQL for 1 billion dollars in 2008, it showed that open source could also be a real business, not just a beautiful idea.

Nowadays, big companies like IBM, Microsoft and Google embrace open source. So, why do these big companies give away their jealously guarded source code? They realized the power of collaboration and knowledge sharing. They hoped that outside developers would improve the software as they adapted it to their needs. They realized that it is impossible to hire all the great developers of the world, and many developers out there could positively contribute to their product. It worked. Hundreds of outsiders collaborated on TensorFlow, one of Google’s most successful AI tools. Another success story is Microsoft’s open source .NET Core.

Why Would I Work on Open Source Projects?

Just think about it: how many times have open source solutions (libraries, frameworks, etc.) helped you in your daily job? How often did you finish your tasks earlier because you’d found a great open source, free tool that worked for you?

The most important reason to participate in the open source community is to help others and to give something back to the community. Open source has helped us a lot, shaping our world in unprecedented ways. We may not realize it, but many of the products we use today are the result of open source.

In the modern world, collaboration and knowledge sharing are a must. Nowadays, inventions are rarely created by a single individual. Increasingly, they are made through collaboration with people from all around the world. Without the movement of free and open source software, our world would be completely different. We’d live with isolated knowledge and isolated people – lots of small bubble worlds instead of a big, collaborative and helpful community (think about what you would do without Stack Overflow).

Another reason to participate is to gain real-world experience and technical upskilling. In the open source community, you can find all kinds of challenges that aren’t present in a single company or project. You can also earn recognition through problem-solving and helping developers with similar issues.

Finding Open Source Projects

If you would like to start contributing to the open source community, here are some places where you can find great projects:

CodeTriage: a website where you can find popular open source projects based on your programming language preferences. You’ll see popular open source projects like K8s, Tensorflow, Pandas, Scikit-Learn, Elasticsearch, etc.

awesome-for-beginners: a collection of Git repositories with beginner-friendly projects.

Open Source Friday: a movement to encourage people, companies and maintainers to contribute a few hours to open source software every Friday.

For more information about how to start contributing to open source projects, visit the newbie open source Git repository.

Conclusion

In the first part of this article, we briefly introduced open source. We described the main differences between open source and proprietary software and presented a brief history of the open source and free software movement.

In the second part, we presented the benefits of working on open source projects. In the last part, we gave instructions on how to start contributing to the open source community and how to find relevant projects.


Harvester: Intro and Setup    

Tuesday, 17 August, 2021

I mentioned about a month back that I was using Harvester in my home lab. I didn’t go into much detail, so this post will bring some more depth. We will cover what Harvester does, as well as my hardware, installation, setup and how to deploy your first virtual machine. Let’s get started.

What is Harvester?

Harvester is Rancher’s open source answer to a hyperconverged infrastructure platform. Like most things Rancher is involved with, it is built on Kubernetes using tools like KubeVirt and Longhorn. KubeVirt is an exciting project that leverages KVM and libvirt to run virtual machines inside Kubernetes; this allows you to run both containers and VMs in your cluster. It reduces operational overhead and provides consistency. This combination of tried and tested technologies provides an open source solution in this space.

It is also designed to be used with bare metal, making it an excellent option for a home lab.

Hardware

If you check the hardware requirements, you will notice they focus more on business usage. So far, my personal experience says that you want at least a 4-core/8-thread CPU, 16GB of RAM, and a large SSD, preferably an NVMe drive. Anything less doesn’t leave enough capacity for running many containers or VMs. I will install it on an Intel NUC 8i5BEK, which has an Intel Core i5-8259U, 32GB of RAM and a 512GB NVMe drive. It can handle running Harvester without any issues. Of course, this is just my experience; your experience may differ.

Installation

Harvester ships as an ISO, which you can download on the GitHub Releases page. You can pull it quickly using wget.

$ wget https://releases.rancher.com/harvester/v0.2.0/harvester-amd64.iso

Once you have it downloaded, you will need to create a bootable USB. I typically use Balena Etcher since it is cross-platform and intuitive, but any imaging tool works.
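
If you prefer the command line, dd can write the ISO as well (a sketch; replace /dev/sdX with your USB device and double-check it first, since dd overwrites the target):

$ sudo dd if=harvester-amd64.iso of=/dev/sdX bs=4M status=progress && sync

Once you have a bootable USB, place it in the machine you want to use and boot from it. This screen should greet you: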

Select “New Cluster”:

Select the drive you want to use.

Enter your hostname, select your network interface, and make sure you use automatic DHCP.

You will then be prompted to enter your cluster token. This can be any phrase you want; I recommend using your password manager to generate one.

Set a password to use, and remember that the default user name is rancher.

The next several options are useful, especially if you want to leverage the SSH keys you use with GitHub. Since this is a home lab, I left the SSH keys, proxy and cloud-init setup blank. In an enterprise environment, these would be really useful. Now you will see the final screen before installation. Verify that everything is configured to your liking before proceeding.

If it all looks great, proceed with the installation. It will take a few minutes to complete; when it does, you will need to reboot.

After the reboot, the system will startup, and you will see a screen letting you know the URL for Harvester and the system’s status. Wait until it reports that Harvester is ready before trying to connect.

Great! It is now reporting that it is up and running, so it’s now time to set up Harvester.

Initial Setup

We can navigate to the URL listed once the OS boots. Mine is https://harvest:30443. It uses a self-signed certificate by default, so you will see a warning in your browser. Just click on “advanced” to proceed, and accept it. Set a password for the default admin account.

Now you should see the dashboard and the health of the system.

I like to disable the default account and add my own account for authentication. This is probably not necessary for a home lab, but it’s a good habit to get into. First, navigate to the user management screen.

Now log out and back in with your new account. Once that’s finished, we can create our first VM.

Deploying Your First VM

Harvester has native support for qcow2 images and can import them from a URL. Let’s grab the URL for the openSUSE Leap 15.3 JeOS image:

https://download.opensuse.org/distribution/leap/15.3/appliances/openSUSE-Leap-15.3-JeOS.x86_64-kvm-and-xen.qcow2

The JeOS image for openSUSE is roughly 225MB, which is a perfect size for downloading and creating VMs quickly. Let’s make the image in Harvester.

Create a new image, and add the URL above as the image URL.

You should now see it listed.

Now we can create a VM using that image. Navigate to the VM screen.

Once we’ve made our way to the VM screen, we’ll create a new VM.

When that is complete, the VM will show up in the list. Wait until it has been started, then you can start using it.

Wrapping Up

In this article, I wanted to show you how to set up VMs with Harvester, even starting from scratch! There are plenty of features to explore and plenty more on the roadmap. This project is still early in its life, so now is a great time to jump in and get involved with its direction.

What Managed Service Providers Can Now Expect from SUSE

Tuesday, 17 August, 2021

SUSE has launched a new pricing and technical support offering for Managed Service Providers (MSPs).

The offering is focused on new pricing models coupled with end-to-end technical support to better serve our MSP partners, enabling them to deliver better offerings to their customers. Designed to give our SUSE One Partner Program MANAGE specialization partners ease of use, reduced time to market and increased flexibility, the new price offering features one partner subscription for every class of product on our price list, with two tiered options of end-to-end technical support.

“We listened to our partners and revamped our pricing and support model for the SUSE One Partner Program, MANAGE Specialization. This enables Managed Service Providers to have flexibility, ease-of-use and access to best-in-class technical support. Previously the approach was a little too complex with SKUs tied to only one level of support. The newly designed program offers comprehensive customized solutions through our global distributor partnerships,” said Bill Innis, Director of Cloud and MANAGE Service Providers, SUSE.

As your partner, our goal is to help reduce complexity, support you as you grow your business, and minimize human error and risk, creating solutions that save you time and money.

We know organizations value trusted partners that can deliver digital transformation quickly, efficiently, cost-effectively and openly. The new pricing and support model enables us to deliver services to market faster, with more flexibility and better overall support. The new changes include:

• Simplified contracts, now automated for faster implementation
• Faster onboarding
• Clarified requirements and benefits
• All subscriptions include priority, 24/7, unlimited technical support
• New price list and part numbers to eliminate complexities

Watch: SUSE One Partner Program MANAGE Specialization – Update for Manage Service Providers

Managed Service Providers – The Future is Open!

SUSE open source offers the ability to avoid vendor lock-in, which can limit commercial agility in the long run and stymie the ability to innovate and bring to market new service offerings that unlock more revenue streams.

We are here to help MSPs deliver for their customers. Through our commitment to truly open source, SUSE helps MSPs conquer their customers’ IT complexity by combining enterprise-grade technology with the freedom to build secure, risk-free, reliable, compliant and rapidly deployable solutions that can be bundled according to customer requirements.

Additionally, SUSE’s acquisition of Rancher, the most widely used Kubernetes management platform today, paves the way for MSPs to provide valuable services to enterprises adopting containers and Kubernetes to advance their digital transformation, datacenter modernization, edge and cloud projects.

Our SUSE One Partner Program has been awarded a 5-Star rating in the 2021 CRN Partner Program Guide (a distinction only given to vendors who go above and beyond in their partner program offerings).

Our new pricing and support model, coupled with all that SUSE has to offer, strengthens the path for innovators that want to make a big difference with digital transformation in their managed services.

The future is open, come on in!

We are ready to kick off the opportunity to partner with us and jump-start your journey of digital transformation for your customers. Contact us at msp@suse.com to start the conversation!

SAP HANA Scale-Out System Replication Multi-Target Upgrade

Tuesday, 10 August, 2021

Starting with version 0.180 the SAPHanaSR-ScaleOut package supports SAP HANA scale-out multi-target system replication. That means you can connect a third HANA site by system replication to either of the two HANA sites which are managed by the SUSE HA cluster. If you are already running a SAP HANA scale-out database with SUSE HA, you can perform a SAP HANA Scale-Out Multi-Target Upgrade.

Picture: On the way to Multi-Target Replication

In this blog article you will learn how to upgrade the SUSE HA cluster and the HANA HA/DR provider from the old-style to the multi-target aware setup.

Our blog SAPHanaSR-ScaleOut for large ERP systems describes the old-style setup. The blog article SAPHanaSR-ScaleOut for Multi-Target explains the new multi-target aware setup. So now let us learn about the upgrade procedure by discussing seven questions:

  • When do I need to upgrade my cluster configuration?
  • Which cluster attributes and configuration items will change?
  • What does the overall upgrade procedure look like?
  • Which prerequisites are needed for upgrading to multi-target srHook attributes?
  • What exactly are the tasks I need to do?
  • Where can I find further information?
  • What to take away?

When do I need to upgrade my cluster configuration?

Two situations make it advisable to upgrade the SUSE HA cluster from supporting HANA scale-out old-style system replication to multi-target system replication:

  • HANA multi-target system replication is a business need.
  • A regularly scheduled software update includes SAPHanaSR-ScaleOut 0.180 and it is not absolutely clear that HANA multi-target system replication will never become a topic in the future.

If no multi-target support is needed but the SAPHanaSR-ScaleOut package is updated, then you do not need to change the configuration or take any other additional action. Just remember to reload the HANA HA/DR provider hook script SAPHanaSR.py on both sites after the package upgrade.
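
Recent SAP HANA 2.0 revisions can reload HA/DR providers at runtime, so no database restart is needed (a sketch; run as the <sid>adm user on each site):

~> hdbnsutil -reloadHADRProviders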

Which cluster attributes and configuration items will change?

Inside the SUSE HA CIB, the major attribute srHook for the HA/DR provider system replication status will change. Up to three site-specific attributes will replace the former single global one. In addition, a set of six newly introduced auxiliary attributes simplifies this and future updates. For example, the new node attributes gra and gsh show the versions of the resource agent and the HA/DR provider hook script. Manual page SAPHanaSR-manageAttr(8) contains more details.

Global cib-time                 prim sec srHook sync_state
-----------------------------------------------------------
C11    Mon Jun 15 17:40:59 2020 S2   S1  SOK   SOK

Site lpt        lss mns    srHook srr
--------------------------------------
S1   30         4   suse11 SOK    S
S2   1592235659 4   suse21 PRIM   P
S3                         SOK

Hosts  clone_state node_state roles                         score  site
------------------------------------------------------------------------
suse11 DEMOTED     online     master1:master:worker:master  100    S1
suse12 DEMOTED     online     slave:slave:worker:slave      -12200 S1
suse21 PROMOTED    online     master1:master:worker:master  150    S2
suse22 DEMOTED     online     slave:slave:worker:slave      -10000 S2
suse00             online

Example: SAPHanaSR-showAttr with both attributes

In the “Global” section at the top you see the old-style srHook attribute. The new multi-target site-specific srHook attributes appear in the “Site” section in the middle. The above CIB attributes were recorded in a lab to illustrate the changes. In real life you must not mix old-style and multi-target attributes.

The old-style HANA HA/DR provider hook script SAPHanaSR.py will be replaced by the multi-target aware SAPHanaSrMultiTarget.py. To accomplish that, HANA’s global.ini configuration needs to be changed in memory and on disk. Finally, on the OS level outside the SUSE HA cluster, the sudoers permission needs to be adapted to the new HA/DR provider hook script. Manual page SAPHanaSrMultiTarget.py(7) and the sections below show the details.
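
To make this concrete, here is a hedged sketch of the relevant pieces (the SID HA1, file paths and exact entries are assumptions based on the manual pages; adapt them to your system):

~> cat /hana/shared/HA1/global/hdb/custom/config/global.ini
[ha_dr_provider_saphanasrmultitarget]
provider = SAPHanaSrMultiTarget
path = /usr/share/SAPHanaSR-ScaleOut
execution_order = 1

~> cat /etc/sudoers.d/SAPHanaSR
# allows the hook script to write the site-specific srHook attributes
ha1adm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_ha1_site_srHook_*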

What does the overall upgrade procedure look like?

A defined procedure leads from the entry state with old-style HANA HA/DR provider hook and SUSE HA cluster to the target state with multi-target enabled hook and cluster. The blog article SAPHanaSR-ScaleOut for large ERP systems describes the entry state. You can find the target state described in detail in the separate blog article SAPHanaSR-ScaleOut for Multi-Target. Now let us have an overview of the needed steps. You will find details later.

At a glance, the upgrade procedure looks like this (a command sketch for selected steps follows the list):

  1. Initially check if everything looks fine.
  2. Set SUSE HA cluster resources SAPHanaController and SAPHanaTopology into maintenance.
  3. Install multi-target aware SAPHanaSR-ScaleOut package on all nodes.
  4. Adapt sudoers permission on all nodes.
  5. Replace HANA HA/DR provider configuration on both sites.
  6. Check SUSE HA and HANA HA/DR provider for matching defined upgrade entry state.
  7. Upgrade srHook attribute from old-style to multi-target.
  8. Check SUSE HA cluster for matching defined upgrade target state.
  9. Set SUSE HA cluster resources SAPHanaController and SAPHanaTopology from maintenance to managed.
  10. Optionally connect third HANA site via system replication outside of the SUSE HA cluster.
  11. Finally check if everything looks fine.
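
As a hedged sketch of steps 2, 7 and 9 (the resource names msl_SAPHanaCon_HA1_HDB00 and cln_SAPHanaTop_HA1_HDB00 are examples for SID HA1; see SAPHanaSR-manageAttr(8) for the exact arguments of the migration command):

# step 2: set the resources into maintenance
$ crm resource maintenance msl_SAPHanaCon_HA1_HDB00 on
$ crm resource maintenance cln_SAPHanaTop_HA1_HDB00 on

# step 7: verify the prerequisites, then run the migration itself
$ SAPHanaSR-manageAttr --check ...
$ SAPHanaSR-manageAttr ...

# step 9: hand the resources back to the cluster
$ crm resource maintenance msl_SAPHanaCon_HA1_HDB00 off
$ crm resource maintenance cln_SAPHanaTop_HA1_HDB00 off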

As the final result of this procedure, the RAs and hook script are upgraded from old-style to multi-target. Further, the SUSE HA cluster’s old-style global srHook attribute hana_<sid>_glob_srHook is replaced by the site-aware attributes hana_<sid>_site_srHook_<SITE>. The new attributes might stay empty until HANA raises srConnectionChanged() events and triggers the new hook script for the first time. In addition, the HANA global configuration and Linux sudoers permissions are adapted, and new auxiliary SUSE HA cluster attributes are introduced. The sections below and manual page SAPHanaSR-manageAttr(8) give more details.

Which prerequisites are needed for upgrading to multi-target srHook attributes?

For successful and smooth upgrade, you need the following prerequisites:

  • SAP HANA supports multi-target system replication and HA/DR provider.
  • All cluster nodes are online in the cluster and there are no current errors in the cluster or HANA.
  • Package SAPHanaSR-ScaleOut of identical new version installed on all nodes, including majority maker.
  • Resource agents SAPHanaController and SAPHanaTopology new and identical on all nodes, including majority maker.
  • HADR provider hook script SAPHanaSrMultiTarget.py new and identical on all nodes, including majority maker.
  • Sufficient sudoers permission on all nodes, including majority maker.
  • Correct and identical entries for the HA/DR provider in global.ini at both sites.
  • During upgrade the resources SAPHanaController and SAPHanaTopology need to be set into maintenance. HANA needs to reload its HA/DR provider configuration and hook scripts.
  • The procedure has been successfully tested on a test system before applying it to the production system.

The upgrade will remove the global srHook attribute and introduce site-specific attributes instead. Values for srHook attributes are written only on a HANA srConnectionChanged() event. So the new attribute for the HANA secondary site might stay empty if HANA is not reloaded after the upgrade. In that case the polling attribute sync_state represents HANA’s system replication status as a fallback.

SAPHanaSR-manageAttr will always check the prerequisites before changing the CIB attribute from one common hana_<sid>_glob_srHook to the site-specific attributes hana_<sid>_site_srHook_<SITE>. By calling “SAPHanaSR-manageAttr --check …” you can run that built-in check before trying an upgrade. Doing so is a good idea. Manual pages SAPHanaSR-manageAttr(8) and SAPHanaSrMultiTarget.py(7) contain more details on checking upgrade prerequisites.

Note: SAPHanaSR-manageAttr might report an error in case the sudoers permission is more generic than needed. This might be, for example “hana_<sid>_*” instead of “hana_<sid>_site_srHook *”. That error message is irritating. However, it should not affect the effective upgrade. So the message might change in later versions.

What exactly are the tasks I need to do?

Obviously you need to implement all the steps and checks outlined earlier. The exact commands and arguments depend on the specific environment. Further, it is a good idea to document the initial and final status of the system. Writing down the complete procedure details as a run book is also a good idea. This run book should include all commands with their exact parameters and expected results. Even better is preparing executable shell scripts. If no test system is available, please set one up first. Rehearsing the upgrade procedure on a test cluster before applying it to production is mandatory.

We cannot write down a complete upgrade procedure that works for all customer environments. Instead, we give detailed examples for some important tasks:

  • Check and document status of SUSE HA cluster and HANA system replication (step 1)
  • Upgrade SUSE HA cluster srHook attribute from old-style to multi-target (step 7)
  • Set resources SAPHanaController and SAPHanaTopology back from maintenance into managed mode (step 9)

You will find the selected examples in detail in our blog SAP HANA Scale-Out Upgrade Details. The manual pages SAPHanaSR-manageAttr(8), SAPHanaSrMultiTarget.py(7), SAPHanaSR-showAttr(8) and SAPHanaSR_maintenance_examples(7) also show examples.

Where can I find further information?

Please have a look at the reference part of this blog series (link will follow soon).

– Related blog articles

https://www.suse.com/c/tag/towardszerodowntime

– Product documentation

https://documentation.suse.com/

https://www.suse.com/releasenotes/

– Manual pages
SAPHanaSR-manageAttr(8), SAPHanaSR-ScaleOut(7), ocf_suse_SAPHanaController(7), SAPHanaSrMultiTarget.py(7), SAPHanaSR-ScaleOut_basic_cluster(7), SAPHanaSR_maintenance_examples(7), SAPHanaSR-showAttr(8), ha_related_suse_tids(7), crm(8), crm_attribute(8), crm_mon(8), sudo(8), cs_wait_for_idle(8), cs_clusterstate(8)

What to take away?

  • You can upgrade existing SUSE HA clusters for HANA scale-out to support multi-target instead of old-style system replication.
  • The upgrade will change cluster resource agents and internal attributes as well as the HANA HA/DR provider script.
  • You may use a regular software maintenance window to prepare the cluster for multi-target business needs.
  • Rehearsing the upgrade procedure on a test cluster before applying it to production is mandatory.

Octopod Episode 1: What is an Open Source Community?

Sunday, 1 August, 2021

In Episode 1 of the OCTOpod, Alan Clark talks with Thierry Carrez about open source communities: what they are, how they work and how you can get involved.

Trying to define what an open source community is might sound like a simple task, but it is a layered, nuanced collective with many moving parts. Thierry has been in the open source community for years and is currently the VP of engineering at the Open Infrastructure Foundation. In this episode, Thierry sheds light on some of the key traits that characterize open source communities. We hear about the importance of governance, principles, scope and documentation and find out how everyone, even those who do not code, can contribute. As Thierry notes, it is not about your technical ability, but rather about adding value where you can and being an engaged member of a community. Building a sustainable community requires effort, but that transparency and collaboration make it a worthwhile endeavor.

“It’s really not about code, it’s really not about being a technical rock star. It is really more about being useful to others.”

Listen to the OCTOpod here or subscribe on your favorite podcast platform! And please share it with your friends!


Here’s the full transcript:

EPISODE 01

[INTRODUCTION]

AC: I am Alan Clark. I have spent my career in enterprise software with a focus on open source advocacy and emerging tech. These days, I’m a member of the SUSE Office of the CTO – that’s OCTO for short. Welcome to our new podcast series, The OCTOPod.

Season one is all about open source. I love being part of open source communities. I’ve contributed in many ways, from code to chairs, from networking to cloud. This includes serving as chairman of the board for the Open Infrastructure Foundation, on the Linux Foundation board of directors, as openSUSE chair, on the Open Mainframe Project board and many more. I’ve met so many great people along the way.

In season one, I’ll sit down with a few of these experts. We’ll talk about the latest trends and challenges in open source, including findings from our latest report on why IT leaders choose open. We’ll talk about how to manage a community, the importance of diversity and inclusion in open source and much more.

Join me on your favorite podcast platform or at community.suse.com.

[INTERVIEW]

AC: Hello everyone, welcome to the OCTOPod. Today, I’m excited to sit down with Thierry Carrez, someone that I’ve known in the open source community for many years. We’ve worked together for a long time. He is currently the VP of engineering at the Open Infrastructure Foundation.

Thanks for being here today, Thierry. We want to get started here with some questions and we want to talk a little bit about just the basics of open source and open source communities, how they get started, what they’re like and so forth. Just to get people a flavor of how they kind of operate. Let’s start with the real basic question. What exactly is an open source community and what is it not?

TC: Thank you Alan, it’s great to be here. It sounds like a basic question but it’s actually a complex question. An open source community, at the very bottom, is all the people who contribute to an open source project. But obviously, that just kicks the can down the road, and now the question is, what is a contribution?

In the traditional sense, a contribution to an open source project would be code and code patches, but that quickly extended to non-code activities like documentation, user experience studies or working on the continuous integration for the project. That’s a result of using the same tooling to track everything: not just code, but also documentation, other types of documents and infrastructure as code. And sharing your experience is also a form of contribution.

In the end, the community extends to all the users who publicly engage and share their experience, and so the community is all the people who actively engage with the open source project and help it.

Obviously, that definition works well for openly developed projects where anyone can engage with the project. It works less well for single-vendor open source, where the makers are more separated from the consumers; in that case, the “community” is more their extended circle of users and conference attendees, so it’s not exactly the same meaning as what we call community in a more openly developed project.

AC: That’s a good point. I want to come back to that one because I think it’s a very good point and I want to delve into it a little bit, but let’s start from a different angle. What is it that you see that brings people to participate in these communities? As you mentioned, there are a lot of different types of contributions, which means a lot of different types of backgrounds, experience and interests. What is it that brings people to come and participate in a community?

TC: I would say there are two categories of motivations. There is the more classic altruistic motivation, like giving back to the project that you’re using or cultivating the commons for a resource you are benefiting from. But more and more, we are seeing business sense in the form of shared innovation: multiple organizations putting resources together so that they don’t waste energy reinventing the wheel separately. That’s what we saw with the OpenStack project.

A number of organizations came together because working on the same body of code and software in common was better than working on it separately. For any type of complex technology, if you can join a group of experts having the same kind of issues, you learn a lot from it, so it makes complete business sense to engage with the community when you’re tackling a complex problem. We see it, for example, with the Large Scale SIG within OpenStack, where several operators of large-scale clouds get together to share their experiences. Obviously, the project benefits from it, because we learn from their experience, but they also learn from one another and they see benefit in sharing their experience in that group.

It’s really a complex set of motivations, but at the bottom, it’s either altruistic, based on your usage and wanting to give back, or it makes business sense, which is much more sustainable by the way, because then it’s a win-win: the project benefits from having those organizations involved, and those organizations see the value of contributing.

AC: Yeah, that makes sense, right? I have to reminisce here a little bit. I remember one of the first times I met you, I walked into, I think it was a Nova project meeting, years ago. It was a planning meeting, planning for the next six months.

I was just overwhelmed with the number of people that were in the room at the time. I wouldn’t even dare count, but there had to be hundreds of people in that room interested in and wanting to participate and contribute to that project.

I remember sitting there; up to that point, I had worked in open source for a long, long time, but I had never worked on a project with that many people involved. I was extremely impressed with how you handled the group, able to hear all the voices in the room and enable people to contribute and participate. That is the interesting part that I wanted to ask you about.

How does an open-source community work, particularly when you have a large group of people who want to participate? What are the rules, how do you set rules of engagement and so forth that enable these people to participate, to feel like they can participate and contribute? And yet, with a very large group like that, how do you get anything done?

TC: It’s a complex question.

AC: I know it’s a very complex question, I apologize. I might have to break that one down, but I was just so impressed because work happened, right? I was totally impressed with how much work was able to get done and how people, even new people, were able to come in and participate in the project.

TC: You have to balance a number of structural elements and allow for a lot of flexibility. Essentially, you have to provide a structure where people are able to share, make it very welcoming so that people feel like they can engage, and at the same time keep a lot of flexibility in terms of the topics being discussed and the next steps.

The way we’ve been doing it is in our design summits, the event that you mentioned earlier. The idea was to have anyone be able to join and inform the future of the software. It was based on the Ubuntu developer summits originally, and then we perfected the idea in OpenStack. There is a theme being discussed, so there is first a call for organizing the themes, and then every 40 or 50 minutes we would switch.

During that time, we would openly discuss that topic, with etherpads to take notes and a fishbowl-type setting where the people most involved in the discussion sit in the middle, with expanding circles of people around them depending on how much they want to get involved; people move in the room and get more involved as the discussion goes.

That provides a structure in which people feel free to communicate and, at the same time, a lot of flexibility as to where the discussion goes, and that helps with that setup. As you said, it’s probably a problem you have once you reach a certain size. In terms of rules of engagement, principles or charters that you have to predefine before you start, I would say you need three things.

The first one is really to define the scope: what is the problem space your project wants to address? Make that very clear from day zero, because without scope, you’re really exposed to scope creep, and that lack of focus might ultimately kill your project.

It’s actually one thing we didn’t do well in OpenStack, setting a very aggressive scope. Just because we are a community doesn’t mean we should address practically every problem on earth.

The second one is the big principles, the big tenets that you want your community to follow. Write those down so that it’s really clear to whoever joins the community what they’re signing up for. And finally, governance, which describes how decisions are made. Governance is really needed in any social group; the absence of rules is in itself a form of governance, called anarchy, and there is the benevolent dictator model, where all decisions go up to one person.

You need to define the governance, and you need to do it before any problems arise, because if you wait for a problem to arrive before having the rule on how to solve it, then it’s a bit too late.

AC: Too late, isn’t it?

TC: People will discuss forever. The rules can be simple, but in the end, it really needs to be clear where the buck stops, and gray areas need to be avoided. What we’ve seen in OpenStack, at least, and in other projects since, is that writing things down in advance usually avoids the very situation the rule is designed to address.

Sometimes just saying, “Well, this doesn’t get solved at that level, this gets escalated to that level for resolution” forces, in a way, the first group to come to terms and not escalate, because they don’t want to escalate; they don’t want the situation out of their hands. They usually work it out between themselves without needing to call on the upper governance body.

AC: Cool, thank you, that was good. Hey, we’re going to run out of time here pretty quickly, but I wanted to get this in for our audience: we have a lot of folks who have not participated in a community and aren’t sure how to get started, right? It can be very intimidating.

Just very basically, how can someone get started who perhaps hasn’t been involved with open source in the past, and whose interest may be in some of those contributions you talked about earlier, the things beyond writing code? And if someone’s time is somewhat limited, can they still get involved in a community? How would they begin?

TC: We touched on that earlier when we discussed what a community is, but even if you don’t write code, or if your time is limited, you can definitely participate in and be part of a community. Just joining a conference, participating in the discussions and giving a presentation are all contributions that are extremely worthwhile, because otherwise you end up with the same speakers at every conference, those who are comfortable speaking.

It is really good to have people feel empowered to do that. The same goes for documentation: people who use the project probably noticed issues with the documentation when they first tried to run it, so working on documentation is really an easy way to get involved. Sharing your experience, like I said, is another. We had this example recently with the interns in the Outreachy program in OpenStack, where we pair them with a mentor, an experienced developer, and they work together on some specific project.

One thing that Outreachy intern did was document her full onboarding experience in blog posts, but also on TikTok and other social media, and it was extremely useful for us to hear how difficult or how easy it is to get past some of the hurdles we throw at our newest contributors. Even doing a quick write-up of how you handled those first steps of contribution is extremely valuable to a project. There shouldn’t be extremely high expectations, and the bar is not high. Even for the simplest contribution, hearing about it from a diversity of perspectives is really useful.

AC: Okay, that’s cool. One last topic before we have to go, and this one may be too deep, we might have to handle it another day, but I thought it would be interesting because I know you’ve joined or started projects from the beginning, right? If I have something that I think would make a very interesting new open source community, or a new project in a community, is that something somebody should be able to do today? Any advice on how someone would start a new project?

TC: Yeah, sure.

AC: Like I said, that is a big question, isn’t it?

TC: Yeah, that’s like the topic for a whole new episode, but I’ll try to make it quick. In terms of creation, I would say today it’s really easy to set up an open source project compared to even ten years ago. It’s really easy to set up shop: you just take a forge like GitHub or GitLab or OpenDev, which we are using for OpenStack. So it is easy to do; whether you should do it or not is another good topic.

AC: A whole other question, isn’t it?

TC: Yeah, I guess the key question is whether several people or several organizations have the same issue and would benefit from sharing the solution, because for me, the interest in having an open source project is ultimately to avoid the waste of having several parties develop the same thing as proprietary software on their side when they could collaborate and avoid wasting that energy by doing it as a collaborative project in open source.

That is actually why I’m so motivated by openly developed open source: I don’t really see the point of open source that is owned by a single body, because then you don’t really have that collaboration that reduces waste. It is just one way to do proprietary software where you publish the code and get some free labor on the side. Ultimately, for me, what matters is whether multiple people have the same problem; then yes, there is potential for an open source project, and setting it up is not the most difficult part. It used to be, but it is not the most difficult part today.

AC: That’s true. That is a good point, so thank you for that, I like your response. All right Thierry, we wanted to circle back a little bit to how community works and what I’ll call the levels of openness, because some communities are much more directed than others. As we’ve worked together over these several years, I’ve really come to like the notion of what we call the Four Opens.

Could you talk to us a little bit about that and about how it opens up a community and enables a lot of communication and, I think, a lot more contributions? Give us a little bit of flavor on what we call the Four Opens.

TC: Sure. When we previously talked about rules of engagement, I said that we need to define scope, principles and governance, and the Four Opens would be an example of principles. Those are the principles that the OpenStack community was built on. The first of the Four Opens is open source, because back then there wasn’t any openly developed open source project that was not open core doing cloud software.

It was a way of saying we will do open source, not open core. There won’t be a proprietary edition of the product; we will not hold back features to sell a proprietary edition. Everything should be open source. The second is open development, which seems really obvious now, because every open source project on earth is on some open Git forge somewhere where you can see what is happening, but 11 or 12 years ago, when we started OpenStack, there wasn’t anything like it.

Open development is about being able to see what’s happening in development transparently: all the patches, all the reviews, all the issues, all the discussions should be accessible and transparent without needing people to register or anything to see it happening. So, transparency in development.

The third one, which we touched on when we discussed design summits, is open design: the fact that design is not done behind closed doors by an elite group of core developers. The design is discussed in the open, engaging with the users during those open events that we throw, and that model was replicated in other successful projects, like Kubernetes for example.

Finally, open community. Open community is the idea that anyone can join the community and anyone can become a leader in that community. There is no pay to play; there is no requirement that the technical leaders of the project come from one of the major sponsors. Technical governance is completely disconnected from any other foundation governance.

It is really one contributor, one vote, and you end up with elections where the most respected contributors get elected to the leadership bodies for the project. With those Four Opens, you actually have a very sustainable community, because you really empower your community to participate. There is no place they can’t see, there is no feature they can’t use, there is no discussion they can’t participate in and there is no level of leadership they can’t attain. I feel like it’s been instrumental in the success of OpenStack.

It has also been instrumental in the success of other communities that have adopted them, if not to the letter, then in spirit, and so I feel like it is a good model.

AC: I’m glad you pointed that out. Back when we first stated those Four Opens, you’re right, they seemed almost revolutionary in some sense, and they have become widely adopted in many of the communities that I have participated in over these last years. To me that just says they work, and that’s why I really like them, so thank you for elaborating on those. I want to go to that fourth one, open community, where things like the technical boards and so forth are elected, not just appointed.

That kind of implies that things are based on reputation, right? Your merits are earned in a community. So, any advice, dos and don’ts, on how a person would build a reputation in an open source community to help them become a strength, particularly in that open community portion?

TC: Yes, you are right that if you have open elections it can turn into a popularity contest really quickly, and then reputation is important. People think that you need to be a technical rock star to reach the level of reputation that will let you be elected as a leader for a project, but that is actually not really true. Of the things you can do, making yourself useful to others is really the key. Do the things that nobody else does.

Everyone is grateful to you for covering the blind spot that nobody else is covering. You become really well known across the community, including in extremely large communities, by doing those things that nobody else is doing, and then you can leverage that reputation to get elected to leadership positions, like I said, or to influence decisions, because nobody wants to piss off the person who actually does the things they don’t want to do.

That’s actually how I started in most of the communities I got involved in. When I joined Gentoo in 2000, I ended up documenting security because it wasn’t documented the way I wanted it to be. Clearly, documenting the security processes was not high on everyone’s list, and by doing that, I earned a good reputation. I ended up leading the security team there, and I ended up elected to the Gentoo board of directors. It is really a theme, and in OpenStack I basically did the same.

I started with release management, again a non-development task, and I ended up being elected to the technical committee for four years and ended up as a leader for that community by starting with a non-code contribution. It’s really not about code, it’s really not about being a technical rock star. It is really more about being useful to others.

AC: That’s great.

TC: In terms of don’ts, the things you should not do, I would say you shouldn’t assume malicious intent, because in 99% of cases in those communities people are trying to do good, and what is seen as potential malicious intent actually breaks down to communication problems 99% of the time. It is really key not to jump to conclusions; give people a chance to voice their side of the story rather than reacting in haste and making for a not very welcoming community as a result.

AC: Well, thank you Thierry. This has been very, very interesting and very educational. I have learned some new things, and it reminded me of a lot of good ones. Thank you very much for helping us out today and joining us on this podcast. I very much appreciate it.

TC: Well, thanks Alan for the invitation.

AC: Thierry, this has been great.

[END OF INTERVIEW]

AC: For more information, check out community.suse.com and make sure to subscribe to the OCTOpod on your favorite podcast platform.

[END]


SUSE Manager and Ansible: Making Automation Easier and More Powerful

Thursday, 22 July, 2021

Configuration and automation platforms have become increasingly important for controlling an organization’s ever-growing IT landscape. There are a variety of popular tools on the market, and many companies have already invested in a particular one, Ansible being a common choice.

Adopting SUSE Manager, or migrating to it, does not mean that you must renounce your previous investment in configuration management systems. SUSE Manager 4.2 provides support for Ansible packages and connects to Ansible Tower to onboard clients and manage them with SUSE Manager. This means you do not have to re-implement your Ansible automation solution: SUSE Manager 4.2 allows you to simply re-use and run your existing Ansible playbooks, saving time and resources by consolidating tools while keeping your automation investments.
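
To make that concrete, here is a minimal sketch of the kind of existing playbook that could be re-used unchanged. The inventory group, package and service names are hypothetical, chosen purely for illustration, and are not taken from SUSE Manager documentation:

    ---
    # Hypothetical playbook: install and start Apache on a group of clients.
    # The assumption is that a playbook like this, written long before SUSE
    # Manager entered the picture, keeps working as-is once the clients are
    # onboarded.
    - name: Apply baseline web server configuration
      hosts: webservers            # assumed inventory group
      become: true
      tasks:
        - name: Ensure the Apache package is installed
          ansible.builtin.package:
            name: apache2
            state: present

        - name: Ensure the Apache service is enabled and running
          ansible.builtin.service:
            name: apache2
            state: started
            enabled: true

Because the playbook itself does not change, teams keep the Ansible knowledge they already have while gaining SUSE Manager’s scheduling and visibility around it.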

Ansible is an incredibly powerful provisioning, configuration, and deployment tool, one that many IT admins and businesses would struggle without. But when you combine the power of Ansible with the automation and lifecycle management abilities of SUSE Manager, there’s almost nothing you cannot do for your IT landscape. If your developers and administrators are already familiar with Ansible, you will be doing your business a major service by upping their game with the added benefits of SUSE Manager. And with the Ansible integration, you can harness the declarative nature of Salt and the imperative nature of Ansible.
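
As a rough illustration of that pairing, the same intent as the playbook sketch above could be expressed as a declarative Salt state, where you describe the desired end state rather than the steps to reach it; again, the state IDs and names are purely illustrative:

    # Hypothetical Salt state: declare the target state and let Salt
    # converge the system toward it.
    apache_package:
      pkg.installed:
        - name: apache2

    apache_service:
      service.running:
        - name: apache2
        - enable: True
        - require:
          - pkg: apache_package

Which style fits best depends on the task: ordered, step-by-step runs map naturally to Ansible playbooks, while continuously enforcing a steady-state configuration maps naturally to Salt.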

To learn more about the new SUSE Manager and Ansible integration, check out …….