Using Hyperconverged Infrastructure for Kubernetes

Tuesday, 7 February, 2023

Companies face multiple challenges when migrating their applications and services to the cloud, and one of them is infrastructure management.

The ideal scenario would be that all workloads could be containerized. In that case, the organization could use a managed Kubernetes service from a provider like Amazon Web Services (AWS), Google Cloud or Azure to deploy and manage applications, services and storage in a cloud native environment.

Unfortunately, this scenario isn’t always possible. Some legacy applications are either very difficult or very expensive to migrate to a microservices architecture, so running them on virtual machines (VMs) is often the best solution.

Considering the current trend of adopting multicloud and hybrid environments, managing additional infrastructure just for VMs is not optimal. This is where a hyperconverged infrastructure (HCI) can help. Simply put, HCI enables organizations to quickly deploy, manage and scale their workloads by virtualizing all the components that make up the on-premises infrastructure.

That being said, not all HCI solutions are created equal. In this article, you’ll learn more about what an HCI is and then explore Harvester, an enterprise-grade HCI software that offers you unique flexibility and convenience when managing your infrastructure.

What is HCI?

Hyperconverged infrastructure (HCI) is a type of data center infrastructure that virtualizes computing, storage and networking elements in a single system through a hypervisor.

Since virtualized abstractions managed by a hypervisor replace all physical hardware components (computing, storage and networking), an HCI offers benefits, including the following:

  • Easier configuration, deployment and management of workloads.
  • Convenience since software-defined data centers (SDDCs) can also be easily deployed.
  • Greater scalability with the integration of more nodes to the HCI.
  • Tight integration of virtualized components, resulting in fewer inefficiencies and lower total cost of ownership (TCO).

However, the ease of management and the lower TCO of an HCI approach come with some drawbacks, including the following:

  • Risk of vendor lock-in when using closed-source HCI platforms.
  • Most HCI solutions force all resources to be increased in order to increase any single resource. That is, new nodes add more computing, storage and networking resources to the infrastructure.
  • You can’t combine HCI nodes from different vendors, which aggravates the risk of vendor lock-in described previously.

Now that you know what HCI is, it’s time to learn more about Harvester and how it can alleviate the limitations of HCI.

What is Harvester?

According to the Harvester website, “Harvester is a modern hyperconverged infrastructure (HCI) solution built for bare metal servers using enterprise-grade open-source technologies including Kubernetes, KubeVirt and Longhorn.” Harvester is an ideal solution for those seeking a cloud native HCI offering, one that is both cost-effective and able to place VM workloads on the edge, driving IoT integration into cloud infrastructure.

Because Harvester is open source, you don’t have to worry about vendor lock-in. Furthermore, since it’s built on top of Kubernetes, Harvester offers incredible scalability, flexibility and reliability.

Additionally, Harvester provides a comprehensive set of features and capabilities that make it the ideal solution for deploying and managing enterprise applications and services. Among these characteristics, the following stand out:

  • Built on top of Kubernetes.
  • Full VM lifecycle management, thanks to KubeVirt.
  • Support for VM cloud-init templates.
  • VM live migration support.
  • VM backup, snapshot and restore capabilities.
  • Distributed block storage and storage tiering, thanks to Longhorn.
  • Powerful monitoring and logging since Harvester uses Grafana and Prometheus as its observability backend.
  • Seamless integration with Rancher, facilitating multicluster deployments as well as deploying and managing VMs and Kubernetes workloads from a centralized dashboard.

Harvester architectural diagram courtesy of Damaso Sanoja

Now that you know about some of Harvester’s basic features, let’s take a more in-depth look at some of the more prominent features.

How Rancher and Harvester can help with Kubernetes deployments on HCI

Managing multicluster and hybrid-cloud environments can be intimidating when you consider how complex it can be to monitor infrastructure, manage user permissions and avoid vendor lock-in, just to name a few challenges. In the following sections, you’ll see how Harvester, or more specifically, the synergy between Harvester and Rancher, can make life easier for ITOps and DevOps teams.

Straightforward installation

There is no one-size-fits-all approach to deploying an HCI solution. Some vendors sacrifice features in favor of ease of installation, while others require a complex installation process that includes setting up each HCI layer separately.

However, with Harvester, this is not the case. From the beginning, Harvester was built with ease of installation in mind without making any compromises in terms of scalability, reliability, features or manageability.

To do this, Harvester treats each node as an HCI appliance. This means that when you install Harvester on a bare-metal server, what actually happens behind the scenes is that a simplified version of SUSE Linux Enterprise (SLE) is installed, and on top of it, Kubernetes, KubeVirt, Longhorn, Multus and the other components that make up Harvester are installed and configured with minimal effort on your part. In fact, the manual installation process is no different from that of a modern Linux distribution, save for a few notable exceptions:

  • Installation mode: Early on in the installation process, you will need to choose between creating a new cluster (in which case the current node becomes the management node) or joining an existing Harvester cluster. This makes sense since you’re actually setting up a Kubernetes cluster.
  • Virtual IP: During the installation, you will also need to set an IP address from which you can access the main node of the cluster (or join other nodes to the cluster).
  • Cluster token: Finally, you should choose a cluster token that will be used to add new nodes to the cluster.

When it comes to installation media, you have two options for deploying Harvester:

  • ISO installation: Boot each node from the Harvester ISO image and follow the interactive installer.
  • PXE boot installation: Provision nodes over the network, which is convenient for automating installations at scale.

It should be noted that, regardless of the deployment method, you can use a Harvester configuration file to provide various settings. This makes it even easier to automate the installation process and enforce the infrastructure as code (IaC) philosophy, which you’ll learn more about later on.

For your reference, the following is what a typical configuration file looks like (taken from the official documentation):

scheme_version: 1
server_url: https://cluster-VIP:443
token: TOKEN_VALUE
os:
  ssh_authorized_keys:
    - ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB...
    - github:username
  write_files:
  - encoding: ""
    content: test content
    owner: root
    path: /etc/test.txt
    permissions: '0755'
  hostname: myhost
  modules:
    - kvm
    - nvme
  sysctls:
    kernel.printk: "4 4 1 7"
    kernel.kptr_restrict: "1"
  dns_nameservers:
    - 8.8.8.8
    - 1.1.1.1
  ntp_servers:
    - 0.suse.pool.ntp.org
    - 1.suse.pool.ntp.org
  password: rancher
  environment:
    http_proxy: http://myserver
    https_proxy: http://myserver
  labels:
    topology.kubernetes.io/zone: zone1
    foo: bar
    mylabel: myvalue
install:
  mode: create
  management_interface:
    interfaces:
    - name: ens5
      hwAddr: "B8:CA:3A:6A:64:7C"
    method: dhcp
  force_efi: true
  device: /dev/vda
  silent: true
  iso_url: http://myserver/test.iso
  poweroff: true
  no_format: true
  debug: true
  tty: ttyS0
  vip: 10.10.0.19
  vip_hw_addr: 52:54:00:ec:0e:0b
  vip_mode: dhcp
  force_mbr: false
system_settings:
  auto-disk-provision-paths: ""

All in all, Harvester offers a straightforward installation on bare-metal servers. What’s more, out of the box, Harvester offers powerful capabilities, including a convenient host management dashboard (more on that later).

Host management

Nodes, or hosts, as they are called in Harvester, are the heart of any HCI infrastructure. As discussed, each host provides the computing, storage and networking resources used by the HCI cluster. In this sense, Harvester provides a modern UI that gives your team a quick overview of each host’s status, name, IP address, CPU usage, memory, disks and more. Additionally, your team can perform all kinds of routine operations intuitively just by clicking each host’s hamburger menu:

  • Node maintenance: This is handy when your team needs to remove a node from the cluster for an extended period for maintenance or replacement. Once the node enters maintenance mode, all VMs are automatically distributed across the rest of the active nodes. This eliminates the need to live migrate VMs separately.
  • Cordoning a node: When you cordon a node, it’s marked as “unschedulable,” which is useful for quick tasks like reboots and OS upgrades.
  • Deleting a node: This permanently removes the node from the cluster.
  • Multi-disk management: This allows adding additional disks to a node as well as assigning storage tags. The latter is useful to allow only certain nodes or disks to be used for storing Longhorn volume data.
  • KSMtuned mode management: In addition to the features described earlier, Harvester allows your team to tune the use of kernel same-page merging (KSM) as it deploys the KSM Tuning Service ksmtuned on each node as a DaemonSet.

To learn more about how to manage the run strategy and threshold coefficient of ksmtuned, as well as more details on the other host management features described, check out the Harvester documentation.
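For illustration, KSM tuning is driven by a per-node Ksmtuned custom resource. The sketch below shows what such an object might look like; treat the field names and values as assumptions and verify them against your Harvester version’s schema before applying:

```yaml
# Hypothetical sketch of a per-node Ksmtuned resource; verify the schema
# against your Harvester version before applying.
apiVersion: harvesterhci.io/v1beta1
kind: Ksmtuned
metadata:
  name: harvester-node-1      # one Ksmtuned object per node
spec:
  run: run                    # run strategy: stop, run or prune
  thresCoef: 30               # start merging when free memory drops below 30%
```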

As you can see, managing nodes through the Harvester UI is really simple. However, your ops team will spend most of their time managing VMs, which you’ll learn more about next.

VM management

Harvester was designed with great emphasis on simplifying the management of VMs’ lifecycles. Thanks to this, IT teams can save valuable time when deploying, accessing and monitoring VMs. Following are some of the main features that your team can access from the Harvester Virtual Machines page.

Harvester basic VM management features

As you would expect, the Harvester UI facilitates basic operations, such as creating a VM (including creating Windows VMs), editing VMs and accessing VMs. It’s worth noting that in addition to the usual configuration parameters, such as VM name, disks, networks, CPU and memory, Harvester introduces the concept of the namespace. As you might guess, this additional level of abstraction is made possible by Harvester running on top of Kubernetes. In practical terms, this allows your Ops team to create isolated virtual environments (for example, development and production), which facilitate resource management and security.
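As a minimal sketch of that isolation (names and limits below are purely illustrative), a development environment with a resource cap can be expressed with plain Kubernetes manifests:

```yaml
# Illustrative only: an isolated "development" environment with a hard
# ceiling on the compute its workloads may consume.
apiVersion: v1
kind: Namespace
metadata:
  name: development
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: development-quota
  namespace: development
spec:
  hard:
    limits.cpu: "16"      # total CPU available in this namespace
    limits.memory: 64Gi   # total memory available in this namespace
```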

Furthermore, Harvester also supports injecting custom cloud-init startup scripts into a VM, which speeds up the deployment of multiple VMs.
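For instance, a small cloud-init user data snippet like the following (the hostname, package and SSH key are illustrative) could be stored as a template and reused across many VMs:

```yaml
#cloud-config
# Illustrative cloud-init user data for a Harvester VM template.
hostname: web-01
package_update: true
packages:
  - nginx                      # install a web server on first boot
ssh_authorized_keys:
  - ssh-rsa AAAAB3NzaC1yc2EAAA... user@example.com
runcmd:
  - systemctl enable --now nginx
```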

Harvester advanced VM management features

Today, any virtualization tool allows the basic management of VMs. In that sense, where enterprise-grade platforms like Harvester stand out from the rest is in their advanced features. These include performing VM backup, snapshot and restore; doing VM live migration; adding hot-plug volumes to running VMs; cloning VMs with volume data; and overcommitting CPU, memory and storage.

While all these features are important, Harvester’s ability to ensure the high availability (HA) of VMs is hands down the most crucial to any modern data center. This feature is available on Harvester clusters with three or more nodes and allows your team to live migrate VMs from one node to another when necessary.

Furthermore, live VM migration is not only useful for maintaining HA; it is also handy when performing node maintenance, when a hardware failure occurs, or when your team detects a performance drop on one or more nodes. Regarding the latter, Harvester provides out-of-the-box performance monitoring through its integration with Grafana and Prometheus.
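Since Harvester builds on KubeVirt, a live migration can also be expressed declaratively rather than through the UI. The following is a sketch using KubeVirt’s migration resource, with placeholder VM and namespace names:

```yaml
# Illustrative: ask KubeVirt to live migrate a running VM to another node.
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migrate-web-01
  namespace: default
spec:
  vmiName: web-01   # the running VM instance to move
```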

Built-in monitoring

Prometheus and Grafana are two of the most popular open source observability tools today. They’re highly customizable, powerful and easy to use, making them ideal for monitoring key VMs and host metrics.

Grafana is a data-focused visualization tool that makes it easy to monitor your VM’s performance and health. It can provide near real-time performance metrics, such as CPU and memory usage and disk I/O. It also offers comprehensive dashboards and alerts that are highly configurable. This allows you to customize Grafana to your specific needs and create useful visualizations that can help you quickly identify issues.

Meanwhile, Prometheus is a monitoring and alerting toolkit designed for large-scale, distributed systems. It collects time series data from your VMs and hosts, allowing you to quickly and accurately track different performance metrics. Prometheus also provides alerts when certain conditions have been met, such as when a VM is running low on memory or disk space.
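As a hedged example, assuming the standard node_exporter metrics are available in the cluster’s monitoring stack, a low-memory alerting rule could look like this:

```yaml
# Illustrative Prometheus rule; assumes node_exporter metrics are scraped.
groups:
  - name: vm-capacity
    rules:
      - alert: HostLowMemory
        expr: node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes < 0.10
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Less than 10% memory available on {{ $labels.instance }}"
```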

All in all, using Grafana and Prometheus together provides your team with comprehensive observability capabilities by means of detailed graphs and dashboards that can help them identify why an issue is occurring. This helps you take corrective action more quickly and reduces the impact of any potential issues.

Infrastructure as Code

Infrastructure as code (IaC) has become increasingly important in many organizations because it allows for the automation of IT infrastructure, making it easier to manage and scale. By defining IT infrastructure as code, organizations can manage their VMs, disks and networks more efficiently while also making sure that their infrastructure remains in compliance with the organization’s policies.

With Harvester, users can define their VMs, disks and networks in YAML format, making it easier to manage and version control virtual infrastructure. Furthermore, thanks to the Harvester Terraform provider, DevOps teams can also deploy entire HCI clusters from scratch using IaC best practices.
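To illustrate, here is a minimal sketch of a VM defined in YAML using the underlying KubeVirt resource (the name, sizes and PVC are placeholders, and Harvester layers its own annotations on top of this):

```yaml
# Illustrative VM definition using core KubeVirt fields.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
  namespace: development
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 2
        resources:
          requests:
            memory: 4Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: demo-vm-rootdisk   # placeholder Longhorn-backed PVC
```

Because the definition lives in YAML, it can be checked into Git and versioned like any other code artifact.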

This lets users define their infrastructure declaratively and enables operations teams to work with developer tools and methodologies, helping them become more agile and effective. In turn, this saves time and cost and enables DevOps teams to deploy new environments or make changes to existing ones more efficiently.

Finally, since Harvester supports IaC principles, organizations can make sure that their infrastructure remains compliant with security, regulatory and governance policies.

Rancher integration

Up to this point, you’ve learned about key aspects of Harvester, such as its ease of installation, its intuitive UI, its powerful built-in monitoring capabilities and its convenient automation, thanks to IaC support. However, the feature that takes Harvester to the next level is its integration with Rancher, the leading container management tool.

Harvester integration with Rancher allows DevOps teams to manage VMs and Kubernetes workloads from a single control panel. Simply put, Rancher integration enables your organization to combine conventional and cloud native infrastructure use cases, making it easier to deploy and manage multi-cloud and hybrid environments.

Furthermore, Harvester’s tight integration with Rancher allows your organization to streamline user and system management, allowing for more efficient infrastructure operations. Additionally, user access control can be centralized in order to ensure that the system and its components are protected.

Rancher integration also allows for faster deployment times for applications and services, as well as more efficient monitoring and logging of system activities from a single control plane. This allows DevOps teams to quickly identify and address issues related to system performance, as well as easily detect any security risks.

Overall, Harvester integration with Rancher provides DevOps teams with a comprehensive, centralized system for managing both VMs and containerized workloads. In addition, this approach provides teams with improved convenience, observability and security, making it an ideal solution for DevOps teams looking to optimize their infrastructure operations.

Conclusion

One of the biggest challenges facing companies today is migrating their applications and services to the cloud. In this article, you’ve learned how you can manage Kubernetes and VM-based environments with the aid of Harvester and Rancher, thus facilitating your application modernization journey from monolithic apps to microservices.

Both Rancher and Harvester are part of the rich SUSE ecosystem that helps your business deploy multi-cloud and hybrid-cloud environments easily across any infrastructure. Harvester is an open source HCI solution. Try it for free today.

SUSE’s Rachel Cassidy Honored as a 2023 CRN Channel Chief

Monday, 6 February, 2023

CRN®, a brand of The Channel Company, has recognized Rachel Cassidy, SUSE’s SVP of Global Channel & Alliances, on its 2023 Channel Chiefs list.

 About CRN’s 2023 Channel Chiefs List 

Every year, through the Channel Chiefs list, CRN honors the IT channel executives who work tirelessly to advance the channel agenda and deliver successful channel partner programs and strategies.

 The 2023 Channel Chiefs have helped their solution provider partners and customers navigate an increasingly complex landscape of interconnected challenges and shifting industry dynamics. With the innovative strategies, programs, and partnerships of these Channel Chiefs in place, the solution provider community has continued to thrive. 

 The 2023 CRN Channel Chiefs were selected by the editorial staff based on their record of business innovation and dedication to the partner community. This year’s list represents the top IT executives responsible for building a robust channel ecosystem. 

This isn’t the first year that Rachel has been recognized; she also made the list in 2022, 2021, 2020 and 2015.

 Partners are Important to SUSE & Customer Innovation 

Rachel explains how our partners help us in our quest to allow our customers to “Innovate Everywhere”: “I believe in the power and impact of the ‘partner trifecta’, where multiple partners provide an integrated solution, fostering innovation across the ecosystem to build solutions that meet customer needs. Partners working together not only fosters innovation but creates powerful solutions that far outweigh what any one vendor can do on their own.”

 This CRN recognition also demonstrates SUSE’s continued focus and investment in our partner ecosystem.  

In the past few years, we’ve listened to our partners’ feedback and improved and overhauled the following:

  • Partner Program, launching the current iteration of the SUSE One Partner Program, which provides partners with the flexibility to participate in one of six specializations that cater to different partner business models 
  • Deal registration to make it simpler and easier for partners to claim and benefit 
  • Partner portal, the partner gateway to key SUSE assets and other web properties 

 We also recognize that SUSE partners are some of the most loyal and committed out there. That’s why we launched a partner loyalty platform, SUSE Champions, where partners can challenge themselves and earn points, while also serving as a spot for partners to make those all important connections with each other. 

 Watch this space, as we continue to evolve the SUSE One Partner Program and offerings with our partners! 

The 2023 CRN Channel Chiefs list will be featured in the February 2023 issue of CRN Magazine. In the meantime, you can read Rachel’s entry.

Migrating to SAP S/4HANA: Why a Linux Platform is Integral to Success

Saturday, 4 February, 2023

Savings, new business models, and competitive advantage are a few of the benefits offered by SAP S/4HANA. See why SUSE Linux Enterprise Server for SAP Applications is a leading platform for SAP solutions on Linux technology, which is required for harnessing SAP S/4HANA.

According to a recent survey, two-thirds of SAP customers are planning their company’s future technology path with SAP S/4HANA. It’s no secret why: Running on the in-memory power of SAP HANA, SAP S/4HANA offers a simplified business suite that takes advantage of a reworked underlying table architecture that can boost data use and access. And with a “Google-like” search experience, data is much more accessible, providing for a richer analytics experience.

It’s not just about ease of use. The difference is quantifiable: One estimate indicates that organizations can save 37% across hardware, software, and labor costs. These savings, paired with the new capabilities, can lead companies to new business models and competitive advantage.

But to harness SAP S/4HANA, the underlying operating system for SAP HANA must provide enterprise-level reliability and availability, be easy to manage and scale into the future, and have lock-down security.

Reliability, Availability, and Security with Linux

SAP S/4HANA requires Linux technology, and SUSE Linux Enterprise Server for SAP Applications offers a number of features that explain why it is a leading platform for SAP solutions on Linux:

High availability with automated failover: Since 2011, SUSE has worked with SAP to improve the scalability and high availability of SAP HANA so that companies can grow their deployments to include multiple nodes for system replication and application failover across multiple geographic locations.

OS security: Hackers often attack the OS and not the database directly, so security of the underlying OS is just as important as the security of the database. SUSE has focused for a long time on IT security and has an aggressive international security certification program. In addition, SUSE offers an integrated antivirus solution for SAP environments.

Simple management: SUSE Linux Enterprise Server for SAP Applications includes the agents needed to be managed by SUSE Manager, which enables the efficient management of Linux systems. SUSE Manager helps manage packages and update sources, which are organized as repositories.

Uptime and performance: SUSE Linux Enterprise Server for SAP Applications consistently provides outstanding uptime and performance, even under full CPU loads and high memory stress.

Options in the cloud: SUSE Linux Enterprise Server for SAP Applications is based on SUSE Linux Enterprise Server, a Linux platform that is proven in the cloud and selected for use with SAP cloud solutions such as SAP HANA Enterprise Cloud and SAP Cloud Platform. Furthermore, SUSE has strong relationships with other virtualization and cloud market companies, such as VMware, Microsoft, Amazon and Google, and works with a broad global ecosystem to ensure that companies have access to SUSE Linux Enterprise Server in the public cloud of their choice. Whatever path to the cloud organizations are taking, Linux supports any data center plan that companies could have for their SAP systems.

Learn More

Since SAP S/4HANA is the future for SAP customers, companies need to make sure their Linux platform is in place — and highly functioning. For any deployment scenario, SUSE Linux Enterprise Server for SAP Applications represents a choice that can support companies well into the future as they seek to gain benefits from SAP S/4HANA. For more information, visit here.

How To Simplify Your Kubernetes Adoption Using Rancher

Wednesday, 1 February, 2023

Kubernetes has firmly established itself as the leading choice for container orchestration thanks to its robust ecosystem and flexibility, allowing users to scale their workloads easily. However, the complexity of Kubernetes can make it challenging to set up and may pose a significant barrier for organizations looking to adopt cloud native technology and containers as part of their modernization efforts.
 

In this blog post, we’ll look at how Rancher can help infrastructure operators simplify the process of adopting Kubernetes into their ecosystem. We’ll explore how Rancher provides a range of features and tools that make it easier to deploy, manage, and secure containerized applications and Kubernetes clusters.
 

Let’s start by analyzing the main challenges of Kubernetes adoption and how Rancher tackles them.

Challenge #1: Kubernetes is Complex 

One of the main challenges of adopting Kubernetes is the learning curve required to understand the orchestration platform and its implementation. Kubernetes has a large and complex codebase with many moving parts and a rapidly growing ecosystem. This can make it difficult for organizations to get up and running confidently, as it is hard to judge which resources and skills are needed. Kubernetes talent also remains difficult to source, so organizations that prefer dedicated in-house support may struggle to fill roles and scale the business at the speed they wish.
 

Utilizing a Kubernetes Management Platform (KMP) like Rancher can help alleviate some of these resourcing roadblocks by simplifying Kubernetes management and operations. Rancher provides a user-friendly web interface for managing Kubernetes clusters and applications, which can be used by developers and operations teams alike and encourages domain specialists to upskill and transfer knowledge across teams.
 

Rancher also includes graphical cluster management, application templates and one-click deployments, making it easier to deploy and manage applications hosted on Kubernetes and encouraging teams to use templatized processes that avoid over-complicating deployments. Additionally, Rancher has several built-in tools and integrations, such as monitoring, logging and alerting, which can help teams get insights into their Kubernetes deployments faster.

Challenge #2: Lack of Integration with Existing Tools and Workflows   

Another challenge of adopting Kubernetes is integrating an organization’s existing tools and workflows. Many teams already have various tools and processes to manage their applications and infrastructure, and introducing a new platform like Kubernetes can often disrupt these established processes.  

However, choosing a KMP like Rancher, which integrates out of the box with multiple tools and platforms, from cloud providers to container registries and continuous integration/continuous deployment (CI/CD) tools, enables organizations to adopt and implement Kubernetes alongside their existing stack.

Challenge #3: Security is Now Top of Mind   

As more enterprises transition their stack to cloud native, security across Kubernetes environments has become top of mind for them. Kubernetes includes built-in basic security features, such as role-based access control (RBAC) and Pod Security Admission. However, learning to configure these features in addition to your stack’s existing security levels can be a maze at best and potentially expose weaknesses in your environment. Given Kubernetes’ dynamic nature, identifying, analyzing, and mitigating security incidents without the proper tools is a big challenge. 

 Rancher includes several protective features and integrations with security solutions to help organizations fortify their Kubernetes clusters and deployments. These include out-of-the-box support for RBAC, Authentication Proxy, CIS and vulnerability scanning, amongst others.  
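For example, standard Kubernetes RBAC manifests like the following (the user, namespace and role names are illustrative) can grant a team read-only access to VMs:

```yaml
# Illustrative RBAC: read-only access to KubeVirt VMs in one namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: vm-viewer
  namespace: development
rules:
  - apiGroups: ["kubevirt.io"]
    resources: ["virtualmachines"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: vm-viewer-binding
  namespace: development
subjects:
  - kind: User
    name: jane@example.com          # placeholder user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: vm-viewer
  apiGroup: rbac.authorization.k8s.io
```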

 Rancher also provides integration with security-focused solutions, including SUSE NeuVector and Kubewarden.  

SUSE NeuVector provides comprehensive container security throughout the entire lifecycle, from development to production. It scans container registries and images, and uses behavioral-based zero-trust security policies and advanced Deep Packet Inspection technology to prevent attacks from spreading or reaching applications at the network level. This enables teams to implement zero-trust practices across their container environments easily.

Kubewarden is a CNCF incubating project that delivers policy-as-code. Leveraging the power of WebAssembly (Wasm), Kubewarden lets you write security policies in your language of choice (Rego, Rust, Go, Swift, …) and enforces them not just at deployment time but also for mutations and runtime modifications.
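A Kubewarden policy is deployed to the cluster as a ClusterAdmissionPolicy resource. The sketch below, with an assumed policy module reference and tag, would reject privileged pods:

```yaml
# Illustrative Kubewarden policy; the module URL and tag are assumptions.
apiVersion: policies.kubewarden.io/v1
kind: ClusterAdmissionPolicy
metadata:
  name: no-privileged-pods
spec:
  module: registry://ghcr.io/kubewarden/policies/pod-privileged:latest
  rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["pods"]
      operations: ["CREATE", "UPDATE"]
  mutating: false   # reject non-compliant pods instead of mutating them
```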

Both solutions help users build a better-fortified Kubernetes environment whilst minimizing the operational overhead needed to maintain a productive environment.   

Rancher’s out-of-the-box monitoring and auditing capabilities for Kubernetes clusters and applications help organizations get real-time data to identify and address any potential security issues quickly, reducing operational downtime and preventing substantial impact on an organization’s bottom line.  

In addition to all the products and features, it is crucial to secure and harden our environments properly. Rancher has undergone the DISA certification process for its multi-cluster management solution and the RKE2 Kubernetes distributions, making them the only solutions currently certified in this space. As a result, you can use the DISA-approved STIG guides for Rancher and RKE2 to implement a customized hardening approach for your specific use case.  

Challenge #4: Management and Automation   

As the number of clusters and containerized applications grows, the complexity of automating, configuring, and securing the environments skyrockets. As more organizations choose to modernize with Kubernetes, the reliance on automation, compliance and security of deployments is becoming more critical. Teams need solutions that can help their organization scale safely.
 

Rancher includes Fleet, a continuous delivery tool that helps your organization implement GitOps practices. The benefits of using GitOps in Kubernetes include the following:  

  1. Version Control: Git provides a way to track and manage changes to the cluster’s desired state, making it easy to roll back or revert changes.  
  2. Encourages Collaboration: Git makes it easy for multiple team members to work on the same cluster configuration and review and approve changes before deployment.  
  3. Utilize Automation: By using Git as the source of truth, changes can be automatically propagated to the cluster, reducing the risk of human error.  
  4. Improve Visibility: Git provides an auditable history of changes to the cluster, making it easy to see who made changes, when, and why.   

Conclusion

Adopting Kubernetes doesn’t have to be hard. Finding reliable solutions like Rancher can help teams better manage their clusters and applications on Kubernetes. KMP platforms help reduce the entry barrier to adopting Kubernetes and help ease the transition from traditional IT to cloud native architectures. 
 

For Kubernetes users who need additional support and services, there is Rancher Prime – the complete product and support subscription package of Rancher. Enterprises adopting Kubernetes and utilizing Rancher Prime have seen substantial economic benefits, which you can learn more about in Forrester’s ‘Total Economic Impact’ Report on Rancher Prime. 

Container Security: Network Visibility 

Wednesday, 1 February, 2023

Network Inspection + Container Firewall for unmatched visibility 

You can’t secure what you can’t see. Deep network visibility is the most critical part of runtime container security. In traditional perimeter-based security, administrators deploy firewalls to quarantine or block attacks before they reach the workload. Inspecting container network traffic reveals how an application communicates with other applications and it’s the only place to stop attacks before they reach the application or workload. It’s also the last chance to prevent data breaches by exploited applications which send data out over the network. Proper network controls will limit the ‘blast radius’ of an attack. 

NeuVector enables you to see all the traffic on your network. 

NeuVector goes beyond static diagrams based on inspecting the deployment manifests of container services and the open ports or syscalls observed at runtime. Instead, it delivers real-time analysis of the actual network traffic being filtered and inspected, rather than guessing at network connections. NeuVector’s patented technology is the only solution to deliver production-grade container security that enables security teams to: 

  • Perform Deep Packet Inspection (DPI): NeuVector applies DPI to identify attacks, detect sensitive data or verify application access to further reduce the attack surface. Only network-layer analysis can detect and verify the allowed protocols, helping security teams enforce business policy. 
  • Deliver real-time protection with the industry’s only container firewall: NeuVector’s container firewall provides inspection, segmentation and protection of all traffic into and out of a container. This includes container-to-container traffic as well as ingress from external sources to containers and egress from containers to external applications and the internet. Its Layer 7 container firewall protects your applications from internal application-level attacks such as DDoS and DNS attacks. 
  • Monitor ‘east-west’ and ‘north-south’ container traffic: Microservices and containers dramatically increase internal east-west traffic in a data center. Without application-aware container network security, an attacker can exploit containers once inside a data center. NeuVector detects and displays real-time connection info for all container traffic: internal, ingress and egress. 
  • Capture packets for debugging and threat investigation: NeuVector makes it easy to view summary connection data and drill down into actual packet details for each container, even as containers scale up and down. When a threat is detected, NeuVector automatically captures and displays the packet info, making it easy to investigate. 

NeuVector: Full Lifecycle Cloud Container Security Platform

NeuVector is the only 100% open source, Zero Trust container security platform. Continuously scan throughout the container lifecycle, remove security roadblocks and bake in security policies at the start to maximize developer agility. Get started on Kubernetes security by getting NeuVector on GitHub.

Kubernetes Security: Container Segmentation

Wednesday, 1 February, 2023

Essential for PCI compliance and many financial organizations, NeuVector’s container segmentation capability creates a virtual wall to keep personal and private information securely isolated on your network.

Container segmentation, also called micro-segmentation or nano-segmentation, is often required because containers hold personal or private information about customers or employees, or other critical business data. Without segmentation, this information could be exposed to anyone with access to the network, because containers are often deployed as microservices that can be dynamically deployed and scaled across a Kubernetes cluster.
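For context, Kubernetes’ built-in NetworkPolicy resource provides baseline L3/L4 segmentation; the Layer 7 inspection described in this article builds on this kind of isolation. A minimal sketch that restricts ingress to a payments service (namespace, labels and port are illustrative):

```yaml
# Baseline Kubernetes segmentation: only pods labeled app=frontend
# may reach pods labeled app=payments, and only on TCP port 8443.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-payments
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: payments
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8443
```

Note that NetworkPolicy works on IPs and ports only; it cannot inspect the payload of an allowed connection.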

Typically, because different services can be deployed across a shared network and servers (or VMs and hosts), and each workload or pod has its own network-addressable IP address, container segmentation policies can be difficult to create and enforce. Only NeuVector enables you to segment container connections and enforce network restrictions to prevent attacks that span an entire cluster or an entire container deployment across clouds. NeuVector offers virtualized network segmentation that aligns tightly with cloud native container service deployments.


With NeuVector, organizations receive:

  • Multi-vector threat protection with the combination of network security, application security, endpoint security, and host security.
  • Superior threat detection: NeuVector’s container firewall detects threats such as SQL injection, DDoS, DNS attacks and other application layer attacks by inspecting the payload even for trusted connections.
  • Service mesh integration: threat detection and segmentation even if the connection between two pods is encrypted.
  • Automated network segmentation: NeuVector’s patented, cloud-native Layer 7 container firewall uses behavioral learning to discover the connections and application protocols used between services and automatically creates whitelist rules to isolate them.
  • Flexibility to segment hybrid workloads: architects and DevOps teams can maximize performance, resource utilization, and speed up the pipeline with the ability to mix workloads of different required trust levels on the same infrastructure.

NeuVector’s container segmentation capability improves scalability, manageability and flexibility for deployments without needing to change security rules. Layer 7 deep packet inspection allows the container firewall to inspect network traffic for hidden or embedded attacks, even within trusted connections between workloads.
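As an illustration, NeuVector segmentation rules can also be managed as Kubernetes custom resources. The sketch below whitelists a single connection between two services; the field values and selector names are illustrative, so check the NeuVector CRD reference for the exact schema:

```yaml
# Illustrative NeuVector custom resource: allow HTTP from the
# "frontend" group to the "backend" group; in Protect mode all
# other traffic to the target is denied.
apiVersion: neuvector.com/v1
kind: NvSecurityRule
metadata:
  name: backend-rules
  namespace: demo
spec:
  target:
    selector:
      name: nv.backend.demo      # NeuVector-learned group name (illustrative)
  ingress:
    - name: allow-frontend
      action: allow
      applications:
        - HTTP
      selector:
        name: nv.frontend.demo
```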

Download container segmentation guide

 

NeuVector: Full Lifecycle Cloud Container Security Platform

NeuVector is the only 100% open source, Zero Trust container security platform. Continuously scan throughout the container lifecycle, remove security roadblocks and bake in security policies at the start to maximize developer agility. Get started on Kubernetes security by getting NeuVector on GitHub.

7 Reasons the OS matters for Digital Transformation with SAP S/4HANA

Wednesday, 1 February, 2023

SUSE can drive your SAP transformation

There’s a digital transformation going on in the industry to meet consumer demands for instant access to data and services. Change is often disruptive and uncomfortable, but SAP is doing their part to help organizations simplify the business data and analytics landscape. And as usual we at SUSE are working right alongside SAP to drive this digital transformation and meet customer demands for less operational complexity. So in this brave new world of a single operating system for the SAP infrastructure, let me give you 7 reasons why the recently announced SUSE Linux Enterprise Server for SAP Applications is ideal for SAP HANA and S/4HANA.

  1. S/4HANA is coming. If you’re a fan of Game of Thrones you know that the words “winter is coming” strike fear in the hearts of the inhabitants of the seven kingdoms. With S/4HANA there is nothing to fear. Life in the datacenter actually gets easier, with one user interface (SAP Fiori), one database (SAP HANA) and one operating environment (Linux) for all SAP business operations applications. If you need to start your digital transformation and you’re still running your SAP environment on Windows Server or UNIX, now is the time to start planning your migration to Linux. SUSE Linux Enterprise Server continues to be a reference development platform for SAP applications on Linux, and the additional features we add with SUSE Linux Enterprise Server for SAP Applications make the transition easier and enterprise-ready.
  2. Reliability is built-in. SUSE Linux Enterprise Server for SAP Applications includes high-availability fail-over so that your business units can keep working following a system failure, and disaster recovery to minimize downtime after a catastrophic event. We give you 5 different options for automated fail-over of SAP HANA systems supporting scale-up or scale-out, performance- or cost-optimized recovery, plus options for remote standby and multi-tenancy. We also help to reduce downtime by protecting your SAP HANA data with enhanced encryption management for remote storage systems and a firewall for the SAP HANA system. You can also add a subscription for SUSE Live Patching to eliminate downtime when fixing security vulnerabilities in the Linux kernel.
  3. Simplicity saves money. There are many ways to reduce costs when transforming your operations to the SAP S/4HANA architecture. First of all, SUSE Linux Enterprise subscription options are often less costly than Windows and UNIX licensing and support. Second, your admins don’t have to be certified to install and maintain multiple operating environments across multiple hardware platforms. And third … well I think one of our customers, Dr. Vineet Bansal the CIO of Greenply Industries Ltd. said it best: “Our SUSE operating system is easy to manage and gives us the freedom to choose the best-value hardware, rather than being tied in to a particular vendor as we were previously. We also have the cost benefits of not needing to license a separate high-availability solution.”
  4. Deployment is faster. SUSE Linux Enterprise Server for SAP Applications has more wizards than a Harry Potter novel! There’s a wizard that uses an SAP application-specific configuration package called “sapconf” to not only configure the OS, but also save you hours of reading SAP manuals to optimize the SAP applications. There’s a wizard to configure the SAP HANA firewall, assuming of course that you don’t want to go with the automatic configuration. The built-in SUSE Linux Enterprise High Availability extension includes a wizard called “HAWK” that makes it faster to set up the HA environment of your choice. Trento automatically reviews your SAP environments and offers insights and best practices to avoid misconfigurations. All of this magic can cut your set-up time from days to hours.
  5. Easier Windows migration. If your SAP environment is currently running on UNIX, then learning Linux for S/4HANA landscapes will be pretty easy for your sys admins. But if you’re running SAP NetWeaver apps on Microsoft Windows Server, then Linux will take some getting used to. In addition to training courses, we’ve made it easier for admins who are used to Windows by adding a familiar working environment based on Microsoft Remote Desktop Protocol. We even include a “cheat sheet” showing how to perform common Windows Server tasks with SUSE Linux. We added Enhanced Active Directory integration so that login and authentication that already exists for the Windows environment doesn’t have to be re-created for the Linux environment.
  6. Cloud deployment options. The need for faster time to market, limited skills and cost savings are just a few reasons why cloud deployment might be a better option for your operations. SUSE supports both public and private cloud options for deploying SAP environments. SUSE Linux Enterprise Server for SAP Applications was selected for SAP Cloud solutions like SAP HANA Enterprise Cloud, SAP HANA One, SAP HANA Cloud Platform and SAP-certified Amazon EC2 instances. You can also build your own private cloud for SAP deployments using SUSE OpenStack Cloud.
  7. 24/7 Priority Support. Last, but certainly not least important is the peace of mind you get knowing that you can quickly resolve your SAP system problems with 24 x 7 Priority Support from SAP and SUSE. Just submit your ticket to SAP as you always do, and if there’s a problem with the SUSE Linux operating system we will work directly with SAP to help you get it fixed.

Whenever you’re ready to start your transformation to a digital business, SUSE Linux Enterprise Server for SAP Applications provides an ideal foundation for today’s SAP HANA and NetWeaver landscapes, while easing the transition to S/4HANA.

Challenges and Solutions with Cloud Native Persistent Storage

Wednesday, 18 January, 2023

Persistent storage is essential for any account-driven website. However, in Kubernetes, most resources are ephemeral and unsuitable for keeping data long-term. Regular storage is tied to the container and has a finite life span. Persistent storage has to be separately provisioned and managed.

Making permanent storage work with temporary resources brings challenges that you need to solve if you want to get the most out of your Kubernetes deployments.

In this article, you’ll learn about what’s involved in setting up persistent storage in a cloud native environment. You’ll also see how tools like Longhorn and Rancher can enhance your capabilities, letting you take full control of your resources.

Persistent storage in Kubernetes: challenges and solutions

Kubernetes has become the go-to solution for containers, allowing you to easily deploy scalable sites with a high degree of fault tolerance. In addition, there are many tools to help enhance Kubernetes, including Longhorn and Rancher.

Longhorn is a lightweight block storage system that you can use to provide persistent storage to Kubernetes clusters. Rancher is a container management tool that helps you with the challenges that come with running multiple containers.

You can use Rancher and Longhorn together with Kubernetes to take advantage of both of their feature sets. This gives you reliable persistent storage and better container management tools.

How Kubernetes handles persistent storage

In Kubernetes, files only last as long as the container, and they’re lost if the container crashes. That’s a problem when you need to store data long-term. You can’t afford to lose everything when the container disappears.

Persistent Volumes are the solution to these issues. You can provision them separately from the containers they use and then attach them to containers using a PersistentVolumeClaim, which allows applications to access the storage:
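A minimal example of that pattern is sketched below, assuming a storage class named `longhorn` exists in the cluster; the names and sizes are illustrative:

```yaml
# PersistentVolumeClaim: requests 2Gi of storage from the "longhorn" class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi
---
# Pod mounting the claim: the data outlives the container.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx:1.25
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```

If the pod crashes or is rescheduled, the claim (and the volume behind it) remains and can be mounted by the replacement pod.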

Diagram showing the relationship between a container application, its own storage and persistent storage (courtesy of James Konik)

However, managing how these volumes interact with containers and setting them up to provide the combination of security, performance and scalability you need bring further issues.

Next, you’ll take a look at those issues and how you can solve them.

Security

With storage, security is always a key concern. It’s especially important with persistent storage, which is used for user data and other critical information. You need to make sure the data is only available to those that need to see it and that there’s no other way to access it.

There are a few things you can do to improve security:

Use RBAC to limit access to storage resources

Role-based access control (RBAC) lets you manage permissions easily, granting users permissions according to their role. With it, you can specify exactly who can access storage resources.

Kubernetes provides RBAC management and allows you to assign both Roles, which apply to a specific namespace, and ClusterRoles, which are not namespaced and can be used to give permissions on a cluster-wide basis.
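A Role limiting who can view PersistentVolumeClaims in a single namespace might look like the following sketch; the namespace and user names are illustrative:

```yaml
# Role: grants read-only access to PVCs in the "storage-team" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pvc-reader
  namespace: storage-team
rules:
  - apiGroups: [""]                          # core API group
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch"]
---
# RoleBinding: assigns the Role to a specific user.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pvc-reader-binding
  namespace: storage-team
subjects:
  - kind: User
    name: dev-user                           # illustrative user name
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pvc-reader
  apiGroup: rbac.authorization.k8s.io
```

Swapping `Role` for `ClusterRole` (and `RoleBinding` for `ClusterRoleBinding`) grants the same permissions cluster-wide instead of per namespace.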

Tools like Rancher also include RBAC support. Rancher’s system is built on top of Kubernetes RBAC, which it uses for enforcement.

With RBAC in place, not only can you control who accesses what, but you can change it easily, too. That’s particularly useful for enterprise software managers who need to manage hundreds of accounts at once. RBAC allows them to control access to your storage layer, defining what is allowed and changing those rules quickly on a role-by-role level.

Use namespaces

Namespaces in Kubernetes allow you to create groups of resources. You can then set up different access control rules and apply them independently to each namespace, giving you extra security.
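For example, each team can get its own namespace, and limits can then be applied per namespace; the quota values below are illustrative:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
# ResourceQuota: caps how much persistent storage this namespace may claim.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
  namespace: team-a
spec:
  hard:
    requests.storage: 50Gi          # total storage requested across all PVCs
    persistentvolumeclaims: "10"    # maximum number of PVCs in the namespace
```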

If you have multiple teams, it’s a good way to stop them from getting in each other’s way. It also keeps each team’s resources private to its own namespace.

Namespaces do provide a layer of basic security, compartmentalizing teams and preventing users from accessing what you don’t want them to.

However, from a security perspective, namespaces do have limitations. For example, they don’t actually isolate all the shared resources that the namespaced resources use. That means if an attacker gets escalated privileges, they can access resources on other namespaces served by the same node.

Scalability and performance

Delivering your content quickly provides a better user experience, and maintaining that quality as your traffic increases and decreases adds an additional challenge. There are several techniques to help your apps cope:

Use storage classes for added control

Kubernetes storage classes let you define how your storage is used, and there are various settings you can change. For example, you can choose to make classes expandable. That way, you can get more space if you run out without having to provision a new volume.

Longhorn has its own storage classes to help you control when Persistent Volumes and their containers are created and matched.

Storage classes let you define the relationship between your storage and other resources, and they are an essential way to control your architecture.
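An expandable class using the Longhorn provisioner might look like the sketch below; the parameter values are illustrative:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-expandable
provisioner: driver.longhorn.io     # Longhorn CSI driver
allowVolumeExpansion: true          # PVCs of this class can be resized in place
reclaimPolicy: Delete
parameters:
  numberOfReplicas: "3"             # replicas Longhorn keeps per volume
  staleReplicaTimeout: "2880"       # minutes before a failed replica is cleaned up
```

With `allowVolumeExpansion` set, editing a bound PVC’s `spec.resources.requests.storage` to a larger value grows the volume without reprovisioning.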

Dynamically provision new persistent storage for workloads

It isn’t always clear how much storage a resource will need. Provisioning dynamically, based on that need, allows you to limit what you create to what is required.

You can have your storage wait until a container that uses it is created before it’s provisioned, which avoids the wasted overhead of creating storage that is never used.

Using Rancher with Longhorn’s storage classes lets you provision storage dynamically without having to rely on cloud services.
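The wait-until-used behavior is controlled by the class’s volume binding mode; a minimal sketch:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-on-demand
provisioner: driver.longhorn.io
# WaitForFirstConsumer delays provisioning until a pod that uses the
# claim is actually scheduled, avoiding volumes that are never used.
volumeBindingMode: WaitForFirstConsumer
```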

Optimize storage based on use

Persistent storage volumes have various properties. Their size is an obvious one, but latency and CPU resources also matter.

When creating persistent storage, make sure that the parameters used reflect what you need to use it for. A service that needs to respond quickly, such as a login service, can be optimized for speed.

Using different storage classes for different purposes is easier with a provider like Longhorn. Longhorn storage classes can specify different disk technologies, such as NVMe, SSD or spinning disk, and these can be linked to specific nodes, allowing you to closely match storage to your requirements.
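In Longhorn, this matching is done through class parameters. The sketch below assumes you have already tagged the relevant disks and nodes in the Longhorn UI or API; the tag names are illustrative:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-fast
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "2"
  diskSelector: "ssd"       # place replicas only on disks tagged "ssd"
  nodeSelector: "storage"   # place replicas only on nodes tagged "storage"
```

A latency-sensitive service (such as the login service mentioned above) would claim from this class, while bulk data could use a class backed by slower, cheaper disks.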

Stability

Building a stable product means getting the infrastructure right and aggressively looking for errors. That way, your product quality will be as high as possible.

Maximize availability

Outages cost time and money, so avoiding them is an obvious goal.

When they do occur, planning for them is essential. With cloud storage, you can automate reprovisioning of failed volumes to minimize user disruption.

To prevent data loss, you must ensure dynamically provisioned volumes aren’t automatically deleted when a resource is done with them. Kubernetes offers storage object in-use protection, so volumes that are still in use aren’t immediately lost.

You can control the behavior of storage volumes by setting the reclaim policy. Picking the retain option lets you manually choose what to do with the data and prevents it from being deleted automatically.
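On a PersistentVolume, that looks like the sketch below; the volume name and CSI details are illustrative:

```yaml
# PersistentVolume with Retain: when its claim is deleted, the volume
# moves to the "Released" phase instead of being deleted, so the data
# can be inspected or reclaimed manually.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: retained-data
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: driver.longhorn.io
    volumeHandle: retained-data   # illustrative Longhorn volume name
```

For dynamically provisioned volumes, the same effect comes from setting `reclaimPolicy: Retain` on the StorageClass.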

Monitor metrics

As well as challenges, working with cloud volumes also offers advantages. Cloud providers typically include many strong options for monitoring volumes, facilitating a high level of observability.

Rancher makes it easier to monitor Kubernetes clusters. Its built-in Grafana dashboards let you view data for all your resources.

Rancher collects memory and CPU data by default, and you can break this data down by workload using PromQL queries.

For example, if you wanted to know how much data was being read to a disk by a workload, you’d use the following PromQL from Rancher’s documentation:


sum(rate(container_fs_reads_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)

Longhorn also offers a detailed selection of metrics for monitoring nodes, volumes, and instances. You can also check on the resource usage of your manager, along with the size and status of backups.

The observability these metrics provide has several uses. You should log any detected errors in as much detail as possible, enabling you to identify and solve problems. You should also monitor performance, perhaps setting alerts if it drops below any particular threshold. The same goes for error logging, which can help you spot issues and resolve them before they become too serious.

Get the infrastructure right for large products

For enterprise-grade products that require fast, reliable distributed block storage, Longhorn is ideal. It provides a highly resilient storage infrastructure. It has features like application-aware snapshots and backups as well as remote replication, meaning you can protect your data at scale.

Longhorn provides enterprise-grade distributed block storage and facilitates deploying a highly resilient storage infrastructure. It lets you provision storage on the major cloud providers, with built-in support for Azure, Google Cloud Platform (GCP) and Amazon Web Services (AWS).

Longhorn also lets you spread your storage over multiple availability zones (AZs). However, keep in mind that there can be latency issues if volume replicas reside in different regions.

Conclusion

Managing persistent storage is a key challenge when setting up Kubernetes applications. Because Persistent Volumes work differently from regular containers, you need to think carefully about how they interact; how you set things up impacts your application performance, security and scalability.

With the right software, these issues become much easier to handle. With help from tools like Longhorn and Rancher, you can solve many of the problems discussed here. That way, your applications benefit from Kubernetes while letting you keep a permanent data store your other containers can interact with.

SUSE is an open source software company responsible for leading cloud solutions like Rancher and Longhorn. Longhorn is an easy, fast and reliable Cloud native distributed storage platform. Rancher lets you manage your Kubernetes clusters to ensure consistency and security. Together, these and other products are perfect for delivering business-critical solutions.

SUSE Receives 15 Badges in the Winter G2 Report Across its Product Portfolio

Thursday, 12 January, 2023

I’m pleased to share that G2, the world’s largest and most trusted tech marketplace, has recognized our solutions in its 2023 Winter Report. We received a total of 15 badges across our business units for Rancher, SUSE Linux Enterprise Server (SLES), SLE Desktop and SLE Real Time – including the Users Love Us badge for all products – as well as three badges for the openSUSE community with Leap and Tumbleweed.

We recently celebrated 30 years of service to our customers, partners and the open source communities and it’s wonderful to keep the celebrations going with this recognition by our peers. Receiving 15 badges this quarter reinforces the depth and breadth of our strong product portfolio as well as the dedication that our team provides for our customers.

Adding to the badges it earned in October, SLES was once again named Momentum Leader and Leader in the Server Virtualization category; Momentum Leader and High Performer in the Infrastructure as a Service category; and received two badges in the Mid-Market Server Virtualization category, for Best Support and High Performer.

In addition, SLE Desktop was again awarded two High Performer badges in the Mid-Market Operating System and Operating System categories. SLE Real Time also received a High Performer badge in the Operating System category. The openSUSE community distribution Leap was recognized as the Fastest Implementation in the Operating System category. It’s clear that our Business Critical Linux solutions continue to be the cornerstone of success for many of our customers and that we continue to provide excellent service for the open source community.

Similarly, as the use of hybrid, multi-cloud and cloud native infrastructures grows, many of our customers are looking to containers. For their business success, they look to Rancher, which has been the leading multi-cluster management platform for nearly a decade and has one of the strongest adoption rates in the industry.

G2 awarded Rancher four badges, including High Performer badges in the Container Management and the Small Business Container Management categories and Most Implementable and Easiest Admin in the Small Business Container Management category.

Here’s what some of our customers said in their reviews on G2:

“SLES the best [for] SAP environments. The support is fast and terrific.”

“[Rancher is a] complete package for Kubernetes.”

“RBAC simple management is one of the best upsides in Rancher, attaching Rancher post creation process to manage RBAC, ingress and [getting] a simple UI overview of what is going on.”

“[Rancher is the] best tool for managing multiple production clusters of Kubernetes orchestration. Easy to deploy services, scale and monitor services on multiple clusters.” 

Providing our customers with solutions that they know they can rely on and trust is critical to the work we do every day. These badges are a direct response to customer feedback and product reviews and underscore our ability to serve the needs of our customers across all of our solutions. I’m looking forward to seeing what new badges our team will be awarded in the future as a result of their excellent work.