Simplicity Achieves the Highest Level of SUSE Partnership

Tuesday, 7 March, 2023

We are pleased to announce that SUSE partner Simplicity sp. z o.o. has achieved the highest level of partnership: SUSE Diamond Business Partner. Reaching this level required earning the technical and sales qualifications defined in the SUSE One Partner Program.

Simplicity earned the top Diamond level after just eight months of cooperation. We began working together in May 2022, and within a month Simplicity had reached the Gold partnership level. After seven more months of joint activity, our partner could already claim the title of SUSE Diamond Business Partner, the highest possible partnership tier.

“This was made possible by building a strong technical team capable of delivering the most advanced projects, and by creating from scratch a technology hub running on the SimpliCloud cloud, where our customers can quickly test solutions based on the SUSE product catalog without committing their own IT resources,” explains Adam Sokołowski, Business Development Manager responsible for the cooperation with SUSE.

Adam Miklewski, Director of Simplicity’s technical department, adds: “Thanks to our investments in the competencies of our technical and sales teams, in a short time we managed, together with SUSE’s Polish subsidiary, to deliver one of the country’s largest projects building a container environment based on the SUSE Rancher Prime platform, as well as the first deployment of the security platform for container environments, SUSE NeuVector.”

“Although Simplicity only began working with SUSE at the beginning of 2022, it has already built visible trust among customers in the market, especially in areas related to broadly understood containerization, DevOps and DevSecOps, CI/CD, and managing infrastructure based on open source solutions. These efforts were crowned with Simplicity being named SUSE’s Partner of the Year 2022,” said Maciej Skrzypek, Channel Manager, SUSE Poland.

Simplicity sp. z o.o. is an IT systems integrator that helps tailor solutions to customers’ individual requirements and needs. The company focuses on developing its own and its customers’ competencies through education in IT infrastructure, automation, DevOps, cloud solutions and cybersecurity.

G2 Ranks SUSE in Top 25 German Companies

Wednesday, 8 February, 2023

I am thrilled to announce that SUSE has been recognized by G2, the world’s largest and most trusted software marketplace, as one of the Top 25 German Companies in their “Best Software Awards” for 2023.

At SUSE, we have always been dedicated to providing our customers with the best possible software solutions and services. This award by G2 is a testament to the hard work and dedication of our entire team. It is also a recognition of the trust and confidence that our customers have placed in us.

This is not the first time G2 has recognized SUSE for delivering excellence to our customers. G2 recently awarded SUSE 15 badges across its product portfolio.

 

Here’s what some of our German customers say about how SUSE’s products have impacted their business:

“To exploit the great potential for innovation in agriculture, our IT must be able to operate with agility. SUSE solutions help us deliver new digital services quickly — without compromising stability and availability.”
Jan Ove Steppat
Open Source Infrastructure Architect
CLAAS KGaA mbH 

“Rancher Prime brings all the functionality we need to deploy, manage and monitor Kubernetes clusters from a central interface, and it’s completely automated. Using OKD, on the other hand, would have required an entire ecosystem of additional solutions, adding further cost and complexity.”
Ronny Becker
Product Owner Platforms
R+V 

“In the last 12 months, we have achieved an availability of exactly 99.99878% for the SAP HANA environment with our platform and have thus been able to support our global business very reliably even in this challenging year. In terms of availability, we thus far exceed the service level agreements that an external service provider could assure us.”
David Kaiser
SAP Manager
REHAU Industries SE & Co 

“From our point of view, Rancher Prime is clearly the most advanced and comprehensive management tool for managing multiple Kubernetes clusters, especially in an environment with high security requirements.”
Frank Bayer
Senior Architect for Operating Systems and Container Services
IT System House, Federal Employment Agency (Bundesagentur für Arbeit)

 

We are grateful to the open source communities and to our employees who work tirelessly every day to make our company a success. A big thank you to our customers, who provided us with valuable feedback and reviews to help us continually improve our product solutions.

I’m excited about the future, and at SUSE we look forward to cooperating with you for many years to come. Thank you again, and here’s to another successful year.

Using Hyperconverged Infrastructure for Kubernetes

Tuesday, 7 February, 2023

Companies face multiple challenges when migrating their applications and services to the cloud, and one of them is infrastructure management.

The ideal scenario would be that all workloads could be containerized. In that case, the organization could use a managed Kubernetes service from a provider like Amazon Web Services (AWS), Google Cloud or Azure to deploy and manage applications, services and storage in a cloud native environment.

Unfortunately, this scenario isn’t always possible. Some legacy applications are either very difficult or very expensive to migrate to a microservices architecture, so running them on virtual machines (VMs) is often the best solution.

Considering the current trend of adopting multicloud and hybrid environments, managing additional infrastructure just for VMs is not optimal. This is where a hyperconverged infrastructure (HCI) can help. Simply put, HCI enables organizations to quickly deploy, manage and scale their workloads by virtualizing all the components that make up the on-premises infrastructure.

That being said, not all HCI solutions are created equal. In this article, you’ll learn more about what an HCI is and then explore Harvester, an enterprise-grade HCI software that offers you unique flexibility and convenience when managing your infrastructure.

What is HCI?

Hyperconverged infrastructure (HCI) is a type of data center infrastructure that virtualizes computing, storage and networking elements in a single system through a hypervisor.

Since virtualized abstractions managed by a hypervisor replace all physical hardware components (computing, storage and networking), an HCI offers benefits, including the following:

  • Easier configuration, deployment and management of workloads.
  • Convenience since software-defined data centers (SDDCs) can also be easily deployed.
  • Greater scalability with the integration of more nodes to the HCI.
  • Tight integration of virtualized components, resulting in fewer inefficiencies and lower total cost of ownership (TCO).

However, the ease of management and the lower TCO of an HCI approach come with some drawbacks, including the following:

  • Risk of vendor lock-in when using closed-source HCI platforms.
  • Most HCI solutions force all resources to be increased in order to increase any single resource. That is, new nodes add more computing, storage and networking resources to the infrastructure.
  • You can’t combine HCI nodes from different vendors, which aggravates the risk of vendor lock-in described previously.

Now that you know what HCI is, it’s time to learn more about Harvester and how it can alleviate the limitations of HCI.

What is Harvester?

According to the Harvester website, “Harvester is a modern hyperconverged infrastructure (HCI) solution built for bare metal servers using enterprise-grade open-source technologies including Kubernetes, KubeVirt and Longhorn.” Harvester is an ideal solution for those seeking a Cloud native HCI offering — one that is both cost-effective and able to place VM workloads on the edge, driving IoT integration into cloud infrastructure.

Because Harvester is open source, you don’t have to worry about vendor lock-in. Furthermore, since it’s built on top of Kubernetes, Harvester offers incredible scalability, flexibility and reliability.

Additionally, Harvester provides a comprehensive set of features and capabilities that make it the ideal solution for deploying and managing enterprise applications and services. Among these characteristics, the following stand out:

  • Built on top of Kubernetes.
  • Full VM lifecycle management, thanks to KubeVirt.
  • Support for VM cloud-init templates.
  • VM live migration support.
  • VM backup, snapshot and restore capabilities.
  • Distributed block storage and storage tiering, thanks to Longhorn.
  • Powerful monitoring and logging since Harvester uses Grafana and Prometheus as its observability backend.
  • Seamless integration with Rancher, facilitating multicluster deployments as well as deploying and managing VMs and Kubernetes workloads from a centralized dashboard.

Harvester architectural diagram courtesy of Damaso Sanoja

Now that you know about some of Harvester’s basic features, let’s take a more in-depth look at some of the more prominent features.

How Rancher and Harvester can help with Kubernetes deployments on HCI

Managing multicluster and hybrid-cloud environments can be intimidating when you consider how complex it can be to monitor infrastructure, manage user permissions and avoid vendor lock-in, just to name a few challenges. In the following sections, you’ll see how Harvester, or more specifically, the synergy between Harvester and Rancher, can make life easier for ITOps and DevOps teams.

Straightforward installation

There is no one-size-fits-all approach to deploying an HCI solution. Some vendors sacrifice features in favor of ease of installation, while others require a complex installation process that includes setting up each HCI layer separately.

However, with Harvester, this is not the case. From the beginning, Harvester was built with ease of installation in mind without making any compromises in terms of scalability, reliability, features or manageability.

To do this, Harvester treats each node as an HCI appliance. This means that when you install Harvester on a bare-metal server, behind the scenes, a simplified version of SUSE Linux Enterprise (SLE) is installed, on top of which Kubernetes, KubeVirt, Longhorn, Multus and the other components that make up Harvester are installed and configured with minimal effort on your part. In fact, the manual installation process is no different from that of a modern Linux distribution, save for a few notable exceptions:

  • Installation mode: Early on in the installation process, you will need to choose between creating a new cluster (in which case the current node becomes the management node) or joining an existing Harvester cluster. This makes sense since you’re actually setting up a Kubernetes cluster.
  • Virtual IP: During the installation, you will also need to set an IP address from which you can access the main node of the cluster (or join other nodes to the cluster).
  • Cluster token: Finally, you should choose a cluster token that will be used to add new nodes to the cluster.

When it comes to installation media, you have two options for deploying Harvester:

  • An ISO image, which you boot on each bare-metal node from a USB drive or a remote console.
  • PXE boot, which serves the installer over the network and suits automated, many-node rollouts.

It should be noted that, regardless of the deployment method, you can use a Harvester configuration file to provide various settings. This makes it even easier to automate the installation process and enforce the infrastructure as code (IaC) philosophy, which you’ll learn more about later on.

For your reference, the following is what a typical configuration file looks like (taken from the official documentation):

scheme_version: 1
server_url: https://cluster-VIP:443
token: TOKEN_VALUE
os:
  ssh_authorized_keys:
    - ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB...
    - github:username
  write_files:
  - encoding: ""
    content: test content
    owner: root
    path: /etc/test.txt
    permissions: '0755'
  hostname: myhost
  modules:
    - kvm
    - nvme
  sysctls:
    kernel.printk: "4 4 1 7"
    kernel.kptr_restrict: "1"
  dns_nameservers:
    - 8.8.8.8
    - 1.1.1.1
  ntp_servers:
    - 0.suse.pool.ntp.org
    - 1.suse.pool.ntp.org
  password: rancher
  environment:
    http_proxy: http://myserver
    https_proxy: http://myserver
  labels:
    topology.kubernetes.io/zone: zone1
    foo: bar
    mylabel: myvalue
install:
  mode: create
  management_interface:
    interfaces:
    - name: ens5
      hwAddr: "B8:CA:3A:6A:64:7C"
    method: dhcp
  force_efi: true
  device: /dev/vda
  silent: true
  iso_url: http://myserver/test.iso
  poweroff: true
  no_format: true
  debug: true
  tty: ttyS0
  vip: 10.10.0.19
  vip_hw_addr: 52:54:00:ec:0e:0b
  vip_mode: dhcp
  force_mbr: false
system_settings:
  auto-disk-provision-paths: ""

All in all, Harvester offers a straightforward installation on bare-metal servers. What’s more, out of the box, Harvester offers powerful capabilities, including a convenient host management dashboard (more on that later).

Host management

Nodes, or hosts, as they are called in Harvester, are the heart of any HCI infrastructure. As discussed, each host provides the computing, storage and networking resources used by the HCI cluster. In this sense, Harvester provides a modern UI that gives your team a quick overview of each host’s status, name, IP address, CPU usage, memory, disks and more. Additionally, your team can perform all kinds of routine operations intuitively from each host’s hamburger menu:

  • Node maintenance: This is handy when your team needs to remove a node from the cluster for an extended period for maintenance or replacement. Once the node enters maintenance mode, all VMs are automatically distributed across the rest of the active nodes. This eliminates the need to live migrate VMs separately.
  • Cordoning a node: When you cordon a node, it’s marked as “unschedulable,” which is useful for quick tasks like reboots and OS upgrades.
  • Deleting a node: This permanently removes the node from the cluster.
  • Multi-disk management: This allows adding additional disks to a node as well as assigning storage tags. The latter is useful to allow only certain nodes or disks to be used for storing Longhorn volume data.
  • KSMtuned mode management: In addition to the features described earlier, Harvester allows your team to tune the use of kernel same-page merging (KSM) as it deploys the KSM Tuning Service ksmtuned on each node as a DaemonSet.

To learn more about how to manage the run strategy and threshold coefficient of ksmtuned, as well as more details on the other host management features described, check out this documentation.

As you can see, managing nodes through the Harvester UI is really simple. However, your ops team will spend most of their time managing VMs, which you’ll learn more about next.

VM management

Harvester was designed with great emphasis on simplifying the management of VMs’ lifecycles. Thanks to this, IT teams can save valuable time when deploying, accessing and monitoring VMs. Following are some of the main features that your team can access from the Harvester Virtual Machines page.

Harvester basic VM management features

As you would expect, the Harvester UI facilitates basic operations, such as creating a VM (including creating Windows VMs), editing VMs and accessing VMs. It’s worth noting that in addition to the usual configuration parameters, such as VM name, disks, networks, CPU and memory, Harvester introduces the concept of the namespace. As you might guess, this additional level of abstraction is made possible by Harvester running on top of Kubernetes. In practical terms, this allows your Ops team to create isolated virtual environments (for example, development and production), which facilitate resource management and security.

Furthermore, Harvester also supports injecting custom cloud-init startup scripts into a VM, which speeds up the deployment of multiple VMs.
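As a hedged illustration, a cloud-init user data snippet like the following could configure a freshly created VM on first boot (the hostname, user, key and package are all placeholder values):

#cloud-config
# Set the VM's hostname on first boot
hostname: web-01
users:
  - name: devops
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      # Placeholder public key; substitute your own
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB... user@example
# Install and start a web server during provisioning
packages:
  - nginx
runcmd:
  - systemctl enable --now nginx

Reusing a snippet like this as a template means every VM you stamp out comes up with the same baseline configuration.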

Harvester advanced VM management features

Today, any virtualization tool allows the basic management of VMs. In that sense, where enterprise-grade platforms like Harvester stand out from the rest is in their advanced features. These include performing VM backup, snapshot and restore; doing VM live migration; adding hot-plug volumes to running VMs; cloning VMs with volume data; and overcommitting CPU, memory and storage.

While all these features are important, Harvester’s ability to ensure the high availability (HA) of VMs is hands down the most crucial to any modern data center. This feature is available on Harvester clusters with three or more nodes and allows your team to live migrate VMs from one node to another when necessary.

Furthermore, live VM migration is useful not only for maintaining HA but also when performing node maintenance, when a hardware failure occurs, or when your team detects a performance drop on one or more nodes. On that last point, performance monitoring, Harvester provides out-of-the-box integration with Grafana and Prometheus.

Built-in monitoring

Prometheus and Grafana are two of the most popular open source observability tools today. They’re highly customizable, powerful and easy to use, making them ideal for monitoring key VMs and host metrics.

Grafana is a data-focused visualization tool that makes it easy to monitor your VM’s performance and health. It can provide near real-time performance metrics, such as CPU and memory usage and disk I/O. It also offers comprehensive dashboards and alerts that are highly configurable. This allows you to customize Grafana to your specific needs and create useful visualizations that can help you quickly identify issues.

Meanwhile, Prometheus is a monitoring and alerting toolkit designed for large-scale, distributed systems. It collects time series data from your VMs and hosts, allowing you to quickly and accurately track different performance metrics. Prometheus also provides alerts when certain conditions have been met, such as when a VM is running low on memory or disk space.
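As a sketch of how such an alert can be declared, assuming the Prometheus Operator CRDs that Rancher’s monitoring stack builds on (the names and threshold are illustrative):

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: disk-space-alerts
  namespace: cattle-monitoring-system
spec:
  groups:
    - name: disk.rules
      rules:
        - alert: FilesystemAlmostFull
          # Fire when less than 10% of a filesystem has been free for 10 minutes
          expr: node_filesystem_avail_bytes / node_filesystem_size_bytes < 0.10
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "Filesystem on {{ $labels.instance }} is almost full"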

All in all, using Grafana and Prometheus together provides your team with comprehensive observability capabilities by means of detailed graphs and dashboards that can help them identify why an issue is occurring. This can help you take corrective action more quickly and reduce the impact of any potential issues.

Infrastructure as Code

Infrastructure as code (IaC) has become increasingly important in many organizations because it allows for the automation of IT infrastructure, making it easier to manage and scale. By defining IT infrastructure as code, organizations can manage their VMs, disks and networks more efficiently while also making sure that their infrastructure remains in compliance with the organization’s policies.

With Harvester, users can define their VMs, disks and networks in YAML format, making it easier to manage and version control virtual infrastructure. Furthermore, thanks to the Harvester Terraform provider, DevOps teams can also deploy entire HCI clusters from scratch using IaC best practices.
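Because Harvester builds its VMs on KubeVirt, a VM definition is an ordinary Kubernetes manifest. The following is a minimal, illustrative sketch; the names, sizes and the PersistentVolumeClaim it references are hypothetical:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
  namespace: development
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 2
        resources:
          requests:
            memory: 4Gi
        devices:
          disks:
            # Boot disk backed by a PersistentVolumeClaim
            - name: rootdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}
      networks:
        - name: default
          pod: {}
      volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: demo-vm-rootdisk

Since the definition is plain YAML, it can live in Git and go through the same review and CI pipelines as any other Kubernetes resource.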

Defining the infrastructure declaratively lets operations teams work with developer tools and methodologies, helping them become more agile and effective. In turn, this saves time and cost and enables DevOps teams to deploy new environments or make changes to existing ones more efficiently.

Finally, since Harvester enforces IaC principles, organizations can make sure that their infrastructure remains compliant with security, regulatory and governance policies.

Rancher integration

Up to this point, you’ve learned about key aspects of Harvester, such as its ease of installation, its intuitive UI, its powerful built-in monitoring capabilities and its convenient automation, thanks to IaC support. However, the feature that takes Harvester to the next level is its integration with Rancher, the leading container management tool.

Harvester integration with Rancher allows DevOps teams to manage VMs and Kubernetes workloads from a single control panel. Simply put, Rancher integration enables your organization to combine conventional and Cloud native infrastructure use cases, making it easier to deploy and manage multi-cloud and hybrid environments.

Furthermore, Harvester’s tight integration with Rancher allows your organization to streamline user and system management, allowing for more efficient infrastructure operations. Additionally, user access control can be centralized in order to ensure that the system and its components are protected.

Rancher integration also allows for faster deployment times for applications and services, as well as more efficient monitoring and logging of system activities from a single control plane. This allows DevOps teams to quickly identify and address issues related to system performance, as well as easily detect any security risks.

Overall, Harvester integration with Rancher provides DevOps teams with a comprehensive, centralized system for managing both VMs and containerized workloads. In addition, this approach provides teams with improved convenience, observability and security, making it an ideal solution for DevOps teams looking to optimize their infrastructure operations.

Conclusion

One of the biggest challenges facing companies today is migrating their applications and services to the cloud. In this article, you’ve learned how you can manage Kubernetes and VM-based environments with the aid of Harvester and Rancher, thus facilitating your application modernization journey from monolithic apps to microservices.

Both Rancher and Harvester are part of the rich SUSE ecosystem that helps your business deploy multi-cloud and hybrid-cloud environments easily across any infrastructure. Harvester is an open source HCI solution. Try it for free today.

How To Simplify Your Kubernetes Adoption Using Rancher

Wednesday, 1 February, 2023

Kubernetes has firmly established itself as the leading choice for container orchestration thanks to its robust ecosystem and flexibility, allowing users to scale their workloads easily. However, the complexity of Kubernetes can make it challenging to set up and may pose a significant barrier for organizations looking to adopt cloud native technology and containers as part of their modernization efforts.
 

In this blog post, we’ll look at how Rancher can help infrastructure operators simplify the process of adopting Kubernetes into their ecosystem. We’ll explore how Rancher provides a range of features and tools that make it easier to deploy, manage, and secure containerized applications and Kubernetes clusters.
 

Let’s start by analyzing the main challenges for Kubernetes adoption and how Rancher tackles them.

Challenge #1: Kubernetes is Complex 

One of the main challenges of adopting Kubernetes is the learning curve required to understand the orchestration platform and its implementation. Kubernetes has a large and complex codebase with many moving parts and a rapidly growing ecosystem. This can make it difficult for organizations to get up and running confidently and to determine the resources they actually need. Kubernetes talent also remains difficult to source. Organizations with a preference for in-house, dedicated support may struggle to fill roles and scale at the speed their business growth demands.
 

Utilizing a Kubernetes Management Platform (KMP) like Rancher can help alleviate some of these resourcing roadblocks by simplifying Kubernetes management and operations. Rancher provides a user-friendly web interface for managing Kubernetes clusters and applications, which can be used by developers and operations teams alike, and encourages domain specialists to upskill and transfer knowledge across teams.
 

Rancher also includes graphical cluster management, application templates, and one-click deployments, making it easier to deploy and manage applications hosted on Kubernetes and encouraging teams to utilize templatized processes to avoid over-complicating deployments. Rancher also has several built-in tools and integrations, such as monitoring, logging, and alerting, which can help teams get insights into their Kubernetes deployments faster.   

Challenge #2: Lack of Integration with Existing Tools and Workflows   

Another challenge of adopting Kubernetes is integrating an organization’s existing tools and workflows. Many teams already have various tools and processes to manage their applications and infrastructure, and introducing a new platform like Kubernetes can often disrupt these established processes.  

However, choosing a KMP like Rancher, which integrates out of the box with multiple tools and platforms, from cloud providers to container registries and continuous integration/continuous deployment (CI/CD) tools, enables organizations to adopt and implement Kubernetes alongside their existing stack.

Challenge #3: Security is Now Top of Mind   

As more enterprises transition their stack to cloud native, security across Kubernetes environments has become top of mind for them. Kubernetes includes built-in basic security features, such as role-based access control (RBAC) and Pod Security Admission. However, learning to configure these features in addition to your stack’s existing security levels can be a maze at best and potentially expose weaknesses in your environment. Given Kubernetes’ dynamic nature, identifying, analyzing, and mitigating security incidents without the proper tools is a big challenge. 
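To give a concrete sense of that built-in layer, Pod Security Admission is configured through namespace labels. A minimal sketch of enforcing the restricted profile on a single namespace might look like this (the namespace name is illustrative):

apiVersion: v1
kind: Namespace
metadata:
  name: prod-apps
  labels:
    # Reject pods that violate the "restricted" Pod Security Standard
    pod-security.kubernetes.io/enforce: restricted
    # Also warn and audit so violations surface before enforcement bites
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted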

 Rancher includes several protective features and integrations with security solutions to help organizations fortify their Kubernetes clusters and deployments. These include out-of-the-box support for RBAC, Authentication Proxy, CIS and vulnerability scanning, amongst others.  

 Rancher also provides integration with security-focused solutions, including SUSE NeuVector and Kubewarden.  

 

SUSE NeuVector provides comprehensive container security throughout the entire lifecycle, from development to production. It scans container registries and images and uses behavioral-based zero-trust security policies and advanced Deep Packet Inspection technology to prevent attacks from spreading or reaching the applications at the network level. This enables teams to implement zero-trust practices across their container environments easily.

 

Kubewarden is a CNCF sandbox project that delivers policy-as-code. Leveraging the power of WebAssembly (WASM), Kubewarden lets you write security policies in your language of choice (Rego, Rust, Go, Swift, …) and enforces them not just at deployment time but also for mutations and runtime modifications.
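As an illustrative sketch, a Kubewarden policy is bound to the cluster with a ClusterAdmissionPolicy resource; the policy module shown here is one of Kubewarden’s publicly published examples, and the tag should be treated as a placeholder:

apiVersion: policies.kubewarden.io/v1
kind: ClusterAdmissionPolicy
metadata:
  name: no-privileged-pods
spec:
  # WASM policy module pulled from an OCI registry (placeholder tag)
  module: registry://ghcr.io/kubewarden/policies/pod-privileged:latest
  rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["pods"]
      operations: ["CREATE"]
  # This policy only validates; mutating policies would set this to true
  mutating: false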

 

Both solutions help users build a better-fortified Kubernetes environment whilst minimizing the operational overhead needed to maintain a productive environment.   

Rancher’s out-of-the-box monitoring and auditing capabilities for Kubernetes clusters and applications help organizations get real-time data to identify and address any potential security issues quickly, reducing operational downtime and preventing substantial impact on an organization’s bottom line.  

In addition to all the products and features, it is crucial to secure and harden our environments properly. Rancher has undergone the DISA certification process for its multi-cluster management solution and the RKE2 Kubernetes distributions, making them the only solutions currently certified in this space. As a result, you can use the DISA-approved STIG guides for Rancher and RKE2 to implement a customized hardening approach for your specific use case.  

Challenge #4: Management and Automation   

As the number of clusters and containerized applications grows, the complexity of automating, configuring, and securing the environments skyrockets. As more organizations choose to modernize with Kubernetes, the reliance on automation, compliance and security of deployments is becoming more critical. Teams need solutions that can help their organization scale safely.
 

Rancher includes Fleet, a continuous delivery tool that helps your organization implement GitOps practices; a minimal configuration sketch follows the list below. The benefits of using GitOps in Kubernetes include the following:  

  1. Version Control: Git provides a way to track and manage changes to the cluster’s desired state, making it easy to roll back or revert changes.  
  2. Encourages Collaboration: Git makes it easy for multiple team members to work on the same cluster configuration and review and approve changes before deployment.  
  3. Utilize Automation: By using Git as the source of truth, changes can be automatically propagated to the cluster, reducing the risk of human error.  
  4. Improve Visibility: Git provides an auditable history of changes to the cluster, making it easy to see who made changes, when, and why.   
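With Fleet, the wiring from Git to clusters is declared through a GitRepo resource. The following is a hedged sketch; the repository URL, path and cluster labels are illustrative:

apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: app-config
  # Fleet watches this workspace for repos targeting downstream clusters
  namespace: fleet-default
spec:
  repo: https://github.com/example/app-config
  branch: main
  # Directories in the repo that contain Kubernetes manifests
  paths:
    - manifests
  targets:
    # Deploy only to clusters labeled env=production
    - clusterSelector:
        matchLabels:
          env: production

Once applied, Fleet continuously reconciles the targeted clusters against what is in Git, so a merged pull request effectively becomes the deployment event.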

Conclusion

Adopting Kubernetes doesn’t have to be hard. Finding reliable solutions like Rancher can help teams better manage their clusters and applications on Kubernetes. KMP platforms help reduce the entry barrier to adopting Kubernetes and help ease the transition from traditional IT to cloud native architectures. 
 

For Kubernetes users who need additional support and services, there is Rancher Prime – the complete product and support subscription package of Rancher. Enterprises adopting Kubernetes and utilizing Rancher Prime have seen substantial economic benefits, which you can learn more about in Forrester’s ‘Total Economic Impact’ Report on Rancher Prime. 

Challenges and Solutions with Cloud Native Persistent Storage

Wednesday, 18 January, 2023

Persistent storage is essential for any account-driven website. However, in Kubernetes, most resources are ephemeral and unsuitable for keeping data long-term. Regular storage is tied to the container and has a finite life span. Persistent storage has to be separately provisioned and managed.

Making permanent storage work with temporary resources brings challenges that you need to solve if you want to get the most out of your Kubernetes deployments.

In this article, you’ll learn about what’s involved in setting up persistent storage in a cloud native environment. You’ll also see how tools like Longhorn and Rancher can enhance your capabilities, letting you take full control of your resources.

Persistent storage in Kubernetes: challenges and solutions

Kubernetes has become the go-to solution for containers, allowing you to easily deploy scalable sites with a high degree of fault tolerance. In addition, there are many tools to help enhance Kubernetes, including Longhorn and Rancher.

Longhorn is a lightweight block storage system that you can use to provide persistent storage to Kubernetes clusters. Rancher is a container management tool that helps you with the challenges that come with running multiple containers.

You can use Rancher and Longhorn together with Kubernetes to take advantage of both of their feature sets. This gives you reliable persistent storage and better container management tools.

How Kubernetes handles persistent storage

In Kubernetes, files only last as long as the container, and they’re lost if the container crashes. That’s a problem when you need to store data long-term. You can’t afford to lose everything when the container disappears.

Persistent Volumes are the solution to these issues. You can provision them separately from the containers they use and then attach them to containers using a PersistentVolumeClaim, which allows applications to access the storage:

Diagram showing the relationship between container application, its own storage and persistent storage courtesy of James Konik
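In manifest form, that relationship is a claim plus a pod volume that references it. A minimal sketch (names, image and sizes are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: web
      image: nginx
      volumeMounts:
        # Data written here outlives the container itself
        - mountPath: /usr/share/nginx/html
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data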

However, managing how these volumes interact with containers and setting them up to provide the combination of security, performance and scalability you need bring further issues.

Next, you’ll take a look at those issues and how you can solve them.

Security

With storage, security is always a key concern. It’s especially important with persistent storage, which is used for user data and other critical information. You need to make sure the data is only available to those that need to see it and that there’s no other way to access it.

There are a few things you can do to improve security:

Use RBAC to limit access to storage resources

Role-based access control (RBAC) lets you manage permissions easily, granting users permissions according to their role. With it, you can specify exactly who can access storage resources.

Kubernetes provides RBAC management and allows you to assign both Roles, which apply to a specific namespace, and ClusterRoles, which are not namespaced and can be used to give permissions on a cluster-wide basis.

Tools like Rancher also include RBAC support. Rancher’s system is built on top of Kubernetes RBAC, which it uses for enforcement.

With RBAC in place, not only can you control who accesses what, but you can change it easily, too. That’s particularly useful for enterprise software managers who need to manage hundreds of accounts at once. RBAC allows them to control access to your storage layer, defining what is allowed and changing those rules quickly on a role-by-role level.
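As a minimal sketch, a namespaced Role granting read-only access to storage claims, bound to a hypothetical group, could look like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: storage-viewer
  namespace: team-a
rules:
  # Read-only access to PersistentVolumeClaims in this namespace
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: storage-viewer-binding
  namespace: team-a
subjects:
  - kind: Group
    name: team-a-developers
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: storage-viewer
  apiGroup: rbac.authorization.k8s.io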

Use namespaces

Namespaces in Kubernetes allow you to create groups of resources. You can then set up different access control rules and apply them independently to each namespace, giving you extra security.

If you have multiple teams, it’s a good way to stop them from getting in each other’s way. It also keeps each team’s resources private to its own namespace.

Namespaces do provide a layer of basic security, compartmentalizing teams and preventing users from accessing what you don’t want them to.

However, from a security perspective, namespaces do have limitations. For example, they don’t actually isolate all the shared resources that the namespaced resources use. That means if an attacker gets escalated privileges, they can access resources on other namespaces served by the same node.

Scalability and performance

Delivering your content quickly provides a better user experience, and maintaining that quality as your traffic increases and decreases adds an additional challenge. There are several techniques to help your apps cope:

Use storage classes for added control

Kubernetes storage classes let you define how your storage is used, and there are various settings you can change. For example, you can choose to make classes expandable. That way, you can get more space if you run out without having to provision a new volume.

Longhorn has its own storage classes to help you control when Persistent Volumes and their containers are created and matched.

Storage classes let you define the relationship between your storage and other resources, and they are an essential way to control your architecture.
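As a sketch, an expandable Longhorn storage class using parameters from Longhorn’s documented set might look like this (the class name and tag values are illustrative):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-fast
provisioner: driver.longhorn.io
# Let bound volumes grow later without reprovisioning
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "3"
  # Only place replica data on disks and nodes carrying these tags
  diskSelector: "ssd"
  nodeSelector: "storage"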

Dynamically provision new persistent storage for workloads

It isn’t always clear how much storage a resource will need. Provisioning dynamically, based on that need, allows you to limit what you create to what is required.

You can have your storage wait until a container that uses it is created before it’s provisioned, which avoids the wasted overhead of creating storage that is never used.

Using Rancher with Longhorn’s storage classes lets you provision storage dynamically without having to rely on cloud services.

Optimize storage based on use

Persistent storage volumes have various properties. Their size is an obvious one, but latency and CPU resources also matter.

When creating persistent storage, make sure that the parameters used reflect what you need to use it for. A service that needs to respond quickly, such as a login service, can be optimized for speed.

Using different storage classes for different purposes is easier when using a provider like Longhorn. Longhorn storage classes can specify different disk technologies, such as NVMe, SSD or rotational drives, and these can be linked to specific nodes, allowing you to closely match storage to your requirements.

Stability

Building a stable product means getting the infrastructure right and aggressively looking for errors. That way, your product quality will be as high as possible.

Maximize availability

Outages cost time and money, so avoiding them is an obvious goal.

When they do occur, having planned for them is essential. With cloud storage, you can automate reprovisioning of failed volumes to minimize user disruption.

To prevent data loss, you must ensure dynamically provisioned volumes aren’t automatically deleted when a resource is done with them. Kubernetes provides in-use protection on volumes, so they aren’t immediately lost while something still references them.

You can control the behavior of storage volumes by setting the reclaim policy. Picking the retain option lets you manually choose what to do with the data and prevents it from being deleted automatically.
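In a storage class, that choice is a single field. A hedged sketch:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-retain
provisioner: driver.longhorn.io
# Released volumes (and their data) are kept until an admin deletes them
reclaimPolicy: Retain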

Monitor metrics

As well as challenges, working with cloud volumes also offers advantages. Cloud providers typically include many strong options for monitoring volumes, facilitating a high level of observability.

Rancher makes it easier to monitor Kubernetes clusters. Its built-in Grafana dashboards let you view data for all your resources.

Rancher collects memory and CPU data by default, and you can break this data down by workload using PromQL queries.

For example, if you wanted to know how much data was being read to a disk by a workload, you’d use the following PromQL from Rancher’s documentation:


sum(rate(container_fs_reads_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)

Longhorn also offers a detailed selection of metrics for monitoring nodes, volumes, and instances. You can also check on the resource usage of your manager, along with the size and status of backups.

The observability these metrics provide has several uses. You should log any detected errors in as much detail as possible, enabling you to identify and solve problems. You should also monitor performance, perhaps setting alerts if it drops below any particular threshold. The same goes for error logging, which can help you spot issues and resolve them before they become too serious.

Get the infrastructure right for large products

For enterprise-grade products that require fast, reliable distributed block storage, Longhorn is ideal. It provides a highly resilient storage infrastructure. It has features like application-aware snapshots and backups as well as remote replication, meaning you can protect your data at scale.

Longhorn lets you provision storage on the major cloud providers, with built-in support for Azure, Google Cloud Platform (GCP) and Amazon Web Services (AWS).

Longhorn also lets you spread your storage over multiple availability zones (AZs). However, keep in mind that there can be latency issues if volume replicas reside in different regions.

Conclusion

Managing persistent storage is a key challenge when setting up Kubernetes applications. Because Persistent Volumes work differently from regular containers, you need to think carefully about how they interact; how you set things up impacts your application performance, security and scalability.

With the right software, these issues become much easier to handle. With help from tools like Longhorn and Rancher, you can solve many of the problems discussed here. That way, your applications benefit from Kubernetes while letting you keep a permanent data store your other containers can interact with.

SUSE is an open source software company responsible for leading cloud solutions like Rancher and Longhorn. Longhorn is an easy, fast and reliable Cloud native distributed storage platform. Rancher lets you manage your Kubernetes clusters to ensure consistency and security. Together, these and other products are perfect for delivering business-critical solutions.

SUSE Receives 15 Badges in the Winter G2 Report Across its Product Portfolio

Thursday, 12 January, 2023


I’m pleased to share that G2, the world’s largest and most trusted tech marketplace, has recognized our solutions in its 2023 Winter Report. We received a total of 15 badges across our business units for Rancher, SUSE Linux Enterprise Server (SLES), SLE Desktop and SLE Real Time – including the Users Love Us badge for all products – as well as three badges for the openSUSE community with Leap and Tumbleweed.

We recently celebrated 30 years of service to our customers, partners and the open source communities and it’s wonderful to keep the celebrations going with this recognition by our peers. Receiving 15 badges this quarter reinforces the depth and breadth of our strong product portfolio as well as the dedication that our team provides for our customers.

As the use of hybrid, multi-cloud and cloud native infrastructures grows, many of our customers are looking to containers. For their business success, they look to Rancher, which has been the leading multi-cluster management platform for nearly a decade and has one of the strongest adoption rates in the industry.

G2 awarded Rancher four badges, including High Performer badges in the Container Management and the Small Business Container Management categories and Most Implementable and Easiest Admin in the Small Business Container Management category.

Adding to the badges it received in October, SLES was once again named Momentum Leader and Leader in the Server Virtualization category; Momentum Leader and High Performer in the Infrastructure as a Service category; and received two badges in the Mid-Market Server Virtualization category for Best Support and High Performer.

In addition, SLE Desktop was again awarded two High Performer badges in the Mid-Market Operating System and Operating System categories. SLE Real Time also received a High Performer badge in the Operating System category. The openSUSE community distribution Leap was recognized as the Fastest Implementation in the Operating System category. It’s clear that our Business Critical Linux solutions continue to be the cornerstone of success for many of our customers and that we continue to provide excellent service for the open source community.

Here’s what some of our customers said in their reviews on G2:

“[Rancher is a] complete package for Kubernetes.”

“RBAC simple management is one of the best upsides in Rancher, attaching Rancher post creation process to manage RBAC, ingress and [getting] a simple UI overview of what is going on.”

“[Rancher is the] best tool for managing multiple production clusters of Kubernetes orchestration. Easy to deploy services, scale and monitor services on multiple clusters.”

“SLES the best [for] SAP environments. The support is fast and terrific.”

Providing our customers with solutions that they know they can rely on and trust is critical to the work we do every day. These badges are a direct response to customer feedback and product reviews and underscore our ability to serve the needs of our customers across all of our solutions. I’m looking forward to seeing what new badges our team will be awarded in the future as a result of their excellent work.

 

Rancher Wrap: Another Year of Innovation and Growth

Monday, 12 December, 2022

2022 was another year of innovation and growth for SUSE’s Enterprise Container Management business. We introduced significant upgrades to our Rancher and NeuVector products, launched new open source projects and matured others. Exiting 2022, Rancher remains the industry’s most widely adopted container management platform and SUSE remains the preferred vendor for enabling enterprise cloud native transformation. Here’s a quick look at a few key themes from 2022.  

Security Takes Center Stage 

As the container management market matured in 2022, container security took center stage.  Customers and the open source community alike voiced concerns around the risks posed by their increasing reliance on hybrid-cloud, multi-cloud, and edge infrastructure. Beginning with the open sourcing of NeuVector, which we acquired in Q4 2021, in 2022 we continued to meet our customers’ most stringent security and assurance requirements, making strategic investments across our portfolio, including:  

  • Kubewarden – In June, we donated Kubewarden to the CNCF. Now a CNCF sandbox project, Kubewarden is an open source policy engine for Kubernetes that automates the management and governance of policies across Kubernetes clusters thereby reducing risk.  It also simplifies the management of policies by enabling users to integrate policy management into their CI/CD engines and existing infrastructure.  
  • SUSE NeuVector 5.1 – In November, we released SUSE Neuvector 5.1, further strengthening our already industry leading container security platform. 
  • Rancher Prime – Most recently, we introduced Rancher Prime, our new commercial offering, replacing SUSE Rancher. Supporting our focus on security assurances, Rancher Prime offers customers the option of accessing their Rancher Prime software directly from a trusted private registry. Additionally, Rancher Prime FIPS-140-3 and SLSA Level 2 and 3 certifications will be finalized in 2023.

Open Source Continues to Fuel Innovation 

 Our innovation did not stop at security. In 2022, we also introduced new projects and matured others, including:  

  • Elemental – Fit for edge deployments, Elemental is an open source project that enables centralized management and operations of RKE2 and K3s clusters when deployed with Rancher. 
  • Harvester – SUSE’s open source cloud native hyperconverged infrastructure (HCI) alternative to proprietary HCI is now utilized across more than 710 active clusters. 
  • Longhorn – now a CNCF incubator project, Longhorn is deployed across more than 72,000 nodes. 
  • K3s – SUSE’s lightweight Kubernetes distribution designed for the edge, which we donated to the CNCF, has surpassed 4 million downloads. 
  • Rancher Desktop – SUSE’s desktop-based container development environment for Windows, macOS and Linux has surpassed 520,000 downloads and 4,000 GitHub stars since its January release. 
  • Epinio – SUSE’s Kubernetes-powered application development platform-as-a-service (PaaS) solution, in which users can deploy apps without setting up infrastructure themselves, has surpassed 4,000 downloads and 300 stars on GitHub since its introduction in September. 
  • Opni – SUSE’s multi-cluster observability tool (including logging, monitoring and alerting) with AIOps has seen steady growth, with over 75 active deployments this year.  

As we head into 2023, Gartner research indicates the container management market will grow at a ~25% CAGR to $1.4B in 2025. In that same time period, 85% of large enterprises will have adopted container management solutions, up from 30% in 2022. SUSE’s 30-year heritage in delivering enterprise infrastructure solutions combined with our market-leading container management solutions uniquely positions SUSE as the vendor of choice for helping organizations on their cloud native transformation journeys. I can’t wait to see what 2023 holds in store! 

Q&A: How to Find Value at the Edge Featuring Michele Pelino

Tuesday, 6 December, 2022

We recently held a webinar, “Find Value at the Edge: Innovation Opportunities and Use Cases,” where Forrester Principal Analyst Michele Pelino was our guest speaker. After the event, we held a Q&A with Pelino highlighting edge infrastructure solutions and benefits. Here’s a look into the interview: 

SUSE: What technologies (containers, Kubernetes, cloud native, etc.) enable workload affinity in the context of edge? 

Michele: The concept of workload affinity enables firms to deploy software where it runs best. Workload affinity is increasingly important as firms deploy AI code across a variety of specialized chips and networks. As firms explore these new possibilities, running the right workloads in the right locations — cloud, data center, and edge — is critical. Increasingly, firms are embracing cloud native technologies to achieve these deployment synergies. 

Many technologies enable workload affinity for firms — for example, cloud native integration tools and container platforms’ application architecture solutions that enable the benefits of cloud everywhere. Kubernetes, a key open source system, enables enterprises to automate deployment, as well as to scale and manage containerized applications in a cloud native environment. Kubernetes solutions also provide developers with software design, deployment, and portability strategies to extend applications in a seamless, scalable manner. 

SUSE: What are the benefits of using cloud native technology in implementing edge computing solutions? 

Michele: Proactive enterprises are extending applications to the edge by deploying compute, connectivity, storage, and intelligence close to where it’s needed. Cloud native technologies deliver massive scalability, as well as enable performance, resilience, and ease of management for critical applications and business scenarios. In addition, cloud functions can analyze large data sets, identify trends, generate predictive analytics models, and remotely manage data and applications globally. 

Cloud native apps can leverage development principles such as containers and microservices to make edge solutions more dynamic. Applications running at the edge can be developed, iterated, and deployed at an accelerated rate, which reduces the time it takes to launch new features and services. This approach improves end user experience because updates can be made swiftly. In addition, when connections are lost between the edge and the cloud, those applications at the edge remain up to date and functional. 

SUSE: How do you mitigate/address some of the operational challenges in implementing edge computing at scale? 

Michele: Edge solutions make real-time decisions across key operational processes in distributed sites and local geographies. Firms must address key impacts on network operations and infrastructure. It is essential to ensure interoperability of edge computing deployments, which often have different device, infrastructure, and connectivity requirements. Third-party partners can help stakeholders deploy seamless solutions across edge environments, as well as connect to the cloud when appropriate. Data centers in geographically diverse locations make maintenance more difficult and highlight the need for automated and orchestrated management systems spanning various edge environments. 

Other operational issues include assessing data response requirements for edge use cases and the distance between edge environments and computing resources, which impacts response times. Network connectivity issues include evaluating bandwidth limitations and determining processing characteristics at the edge. It is also important to ensure that deployment initiatives enable seamless orchestration and maintenance of edge solutions. Finally, it is important to identify employee expertise to determine skill-set gaps in areas such as mesh networking, software-defined networking (SDN), analytics, and development expertise. 

SUSE: What are some of the must-haves for securing the edge? 

Michele: Thousands of connected edge devices across multiple locations create a fragmented attack surface for hackers, as well as business-wide networking fabrics that interweave business assets, customers, partners, and digital assets connecting the business ecosystem. This complex environment elevates the importance of addressing edge security and implementing strong end-to-end security from sensors to data centers in order to mitigate security threats. 

Implementing a Zero Trust edge (ZTE) policy for networks and devices powering edge solutions using a least-privileged approach to access control addresses these security issues. ZTE solutions securely connect and transport traffic using Zero Trust access principles in and out of remote sites, leveraging mostly cloud-based security and networking services. These ZTE solutions protect businesses from customers, employees, contractors, and devices at remote sites connecting through WAN fabrics to more open, dangerous, and turbulent environments. When designing a system architecture that incorporates edge computing resources, technology stakeholders need to ensure that the architecture adheres to cybersecurity best practices and regulations that govern data wherever it is located. 

SUSE: Once cloud radio access network (RAN) becomes a reality, will operators be able to monetize the underlying edge infrastructure to run customer applications side by side? 

Michele: Cloud RAN can enhance network versatility and agility, accelerate introduction of new radio features, and enable shared infrastructure with other edge services, such as multiaccess edge computing or fixed-wireless access. In the future, new opportunities will extend use cases to transform business operations and industry-focused applications. Infrastructure sharing will help firms reduce costs, enhance service scalability, and facilitate portable applications. RAN and cloud native application development will extend private 5G in enterprise and industrial environments by reducing latency from the telco edge to the device edge. Enabling compute functions closer to the data will power AI and machine-learning insights to build smarter infrastructure, smarter industry, and smarter city environments. Sharing insights and innovations through open source communities will facilitate evolving innovation in cloud RAN deployments and emerging applications that leverage new hardware features and cloud native design principles.
 

What’s next? 

Register and watch the “Find Value at the Edge: Innovation Opportunities and Use Cases” Webinar today! Also, get a complimentary copy of the Forrester report: The Future of Edge Computing.  

 

Harvester 1.1.0: The Latest Hyperconverged Infrastructure Solution

Wednesday, 26 October, 2022

The Harvester team is pleased to announce the next release of our open source hyperconverged infrastructure product. For those unfamiliar with how Harvester works, I invite you to check out this blog from our 1.0 launch that explains it further. This next version of Harvester adds several new and important features to help our users get more value out of Harvester. It reflects the efforts of many people, both at SUSE and in the open source community, who have contributed to the product thus far. Let’s dive into some of the key features.  

GPU and PCI device pass-through 

The GPU and PCI device pass-through experimental features are some of the most requested features this year and are officially live. These features enable Harvester users to run applications in VMs that need to take advantage of PCI devices on the physical host. Most notably, GPUs are an ever-increasing use case to support the growing demand for Machine Learning, Artificial Intelligence and analytics workloads. Our users have learned that both container and VM workloads need to access GPUs to power their businesses. This feature also can support a variety of other use cases that need PCI; for instance, SR-IOV-enabled Network Interface Cards can expose virtual functions as PCI devices, which Harvester can then attach to VMs. In the future, we plan to extend this function to support advanced forms of device passthrough, such as vGPU technologies.  

VM Import Operator  

Many Harvester users maintain other HCI solutions with a varied array of VM workloads. And for some of these use cases, they want to migrate those VMs to Harvester. To make this process easier, we created the VM Import Operator, which automates the migration of VMs from existing HCI to Harvester. It currently supports two popular flavors: OpenStack and VMware vSphere. The operator will connect to either of those systems and copy the virtual disk data for each VM to Harvester’s datastore. Then it will translate the metadata that configures the VM to the comparable settings in Harvester.   

Storage network 

Harvester runs on various hardware profiles, some clusters being more compute-optimized and others optimized for storage performance. In the case of workloads needing high-performance storage, one way to increase efficiency is to dedicate a network to storage replication. For this reason, we created the Storage Network feature. A dedicated storage network removes I/O contention between workload traffic (pod-to-pod communication, VM-to-VM, etc.) and the storage traffic, which is latency sensitive. Additionally, higher capacity network interfaces can be procured for storage, such as 40 or 100 GB Ethernet.  

Storage tiering  

When supporting workloads requiring different types of storage, it is important to be able to define classes or tiers of storage that a user can choose from when provisioning a VM. Tiers can be labeled with convenient terms such as “fast” or “archival” to make them user-friendly. In turn, the administrator can then map those storage tiers to specific disks on the bare metal system. Both node and disk label selectors define the mapping, so a user can specify a unique combination of nodes and disks on those nodes that should be used to back a storage tier. Some of our Harvester users want to use this feature to utilize slower magnetic storage technologies for parts of the application where IOPS is not a concern and low-cost storage is preferred.

In summary, the past year has been an important chapter in the evolution of Harvester. As we look to the future, we expect to see more features and enhancements in store. Harvester plans to have two feature releases next year, allowing for a more rapid iteration of the ideas in our roadmap. You can download the latest version of Harvester on Github. Please continue to share your feedback with us through our community slack or your SUSE account representative.  

Learn more

Download our FREE eBook, 6 Reasons Why Harvester Accelerates IT Modernization Initiatives. This eBook identifies the top drivers of IT modernization, outlines an IT modernization framework and introduces Harvester, an open, interoperable hyperconverged infrastructure (HCI) solution.