How To Simplify Your Kubernetes Adoption Using Rancher

Wednesday, February 1, 2023

Kubernetes has firmly established itself as the leading choice for container orchestration thanks to its robust ecosystem and flexibility, allowing users to scale their workloads easily. However, the complexity of Kubernetes can make it challenging to set up and may pose a significant barrier for organizations looking to adopt cloud native technology and containers as part of their modernization efforts.
 

In this blog post, we’ll look at how Rancher can help infrastructure operators simplify the process of adopting Kubernetes into their ecosystem. We’ll explore how Rancher provides a range of features and tools that make it easier to deploy, manage, and secure containerized applications and Kubernetes clusters.
 

Let’s start by analyzing the main challenges of Kubernetes adoption and how Rancher tackles them.

Challenge #1: Kubernetes is Complex 

One of the main challenges of adopting Kubernetes is the learning curve required to understand the orchestration platform and its implementation. Kubernetes has a large and complex codebase with many moving parts and a rapidly growing ecosystem. This can make it difficult for organizations to get up and running confidently, as the complexity can obscure the decisions required to determine which resources are needed. Kubernetes talent also remains difficult to source. Organizations with a preference for in-house, dedicated support may struggle to fill roles and scale at the speed the business requires.
 

Utilizing a Kubernetes Management Platform (KMP) like Rancher can help alleviate some of these resourcing roadblocks by simplifying Kubernetes management and operations. Rancher provides a user-friendly web interface for managing Kubernetes clusters and applications, which can be used by developers and operations teams alike and encourages domain specialists to upskill and transfer knowledge across teams.
 

Rancher also includes graphical cluster management, application templates, and one-click deployments, making it easier to deploy and manage applications hosted on Kubernetes and encouraging teams to utilize templatized processes to avoid over-complicating deployments. Rancher also has several built-in tools and integrations, such as monitoring, logging, and alerting, which can help teams get insights into their Kubernetes deployments faster.   

Challenge #2: Lack of Integration with Existing Tools and Workflows   

Another challenge of adopting Kubernetes is integrating an organization’s existing tools and workflows. Many teams already have various tools and processes to manage their applications and infrastructure, and introducing a new platform like Kubernetes can often disrupt these established processes.  

However, choosing a KMP like Rancher, which integrates out of the box with multiple tools and platforms, from cloud providers to container registries and continuous integration/continuous deployment (CI/CD) tools, enables organizations to adopt and implement Kubernetes alongside their existing stack.

Challenge #3: Security is Now Top of Mind   

As more enterprises transition their stack to cloud native, security across Kubernetes environments has become top of mind for them. Kubernetes includes built-in basic security features, such as role-based access control (RBAC) and Pod Security Admission. However, learning to configure these features in addition to your stack’s existing security levels can be a maze at best and potentially expose weaknesses in your environment. Given Kubernetes’ dynamic nature, identifying, analyzing, and mitigating security incidents without the proper tools is a big challenge. 

Rancher includes several protective features and integrations with security solutions to help organizations fortify their Kubernetes clusters and deployments. These include out-of-the-box support for RBAC, an authentication proxy, and CIS benchmark and vulnerability scanning, among others.

 Rancher also provides integration with security-focused solutions, including SUSE NeuVector and Kubewarden.  

 

SUSE NeuVector provides comprehensive container security throughout the entire lifecycle, from development to production. It scans container registries and images and uses behavioral-based zero-trust security policies and advanced Deep Packet Inspection technology to prevent attacks from spreading or reaching applications at the network level. This enables teams to implement zero-trust practices across their container environments easily.

 

Kubewarden is a CNCF sandbox project that delivers policy-as-code. Leveraging the power of WebAssembly (WASM), Kubewarden lets you write security policies in your language of choice (Rego, Rust, Go, Swift and others) and enforces them not just at deployment time but also for mutations and runtime modifications.
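
To make this concrete, here is a minimal sketch of what a Kubewarden policy binding can look like. It attaches a WebAssembly policy module to pod creation and updates; the policy name, module URL and version tag below are illustrative, so check the Kubewarden policy hub for real modules:

apiVersion: policies.kubewarden.io/v1
kind: ClusterAdmissionPolicy
metadata:
  name: no-privileged-pods
spec:
  # WASM module implementing the policy (illustrative tag)
  module: registry://ghcr.io/kubewarden/policies/pod-privileged:v0.2.5
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
    operations: ["CREATE", "UPDATE"]
  # validating only; mutating policies set this to true
  mutating: false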

 

Both solutions help users build a better-fortified Kubernetes environment whilst minimizing the operational overhead needed to maintain a productive environment.   

Rancher’s out-of-the-box monitoring and auditing capabilities for Kubernetes clusters and applications help organizations get real-time data to identify and address any potential security issues quickly, reducing operational downtime and preventing substantial impact on an organization’s bottom line.  

In addition to all the products and features, it is crucial to properly secure and harden our environments. Rancher has undergone the DISA certification process for its multi-cluster management solution and its RKE2 Kubernetes distribution, making them the only solutions currently certified in this space. As a result, you can use the DISA-approved STIG guides for Rancher and RKE2 to implement a customized hardening approach for your specific use case.

Challenge #4: Management and Automation   

As the number of clusters and containerized applications grows, the complexity of automating, configuring, and securing the environments skyrockets. As more organizations choose to modernize with Kubernetes, the reliance on automation, compliance and security of deployments is becoming more critical. Teams need solutions that can help their organization scale safely.
 

Rancher includes Fleet, a continuous delivery tool that helps your organization implement GitOps practices. The benefits of using GitOps in Kubernetes include the following:  

  1. Version control: Git provides a way to track and manage changes to the cluster’s desired state, making it easy to roll back or revert changes.
  2. Collaboration: Git makes it easy for multiple team members to work on the same cluster configuration and to review and approve changes before deployment.
  3. Automation: By using Git as the source of truth, changes can be automatically propagated to the cluster, reducing the risk of human error.
  4. Visibility: Git provides an auditable history of changes to the cluster, making it easy to see who made changes, when, and why.
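
As a minimal sketch of how this looks with Fleet, a GitRepo resource points Fleet at a Git repository holding your manifests and selects the clusters to deploy to. The repository URL, paths and labels below are illustrative:

apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: my-app
  namespace: fleet-default
spec:
  # Git repository that serves as the source of truth (illustrative URL)
  repo: https://github.com/example/my-app-config
  branch: main
  # directories within the repo to deploy
  paths:
  - manifests
  # deploy to every downstream cluster labeled env=production
  targets:
  - clusterSelector:
      matchLabels:
        env: production

Once the GitRepo is applied, Fleet continuously reconciles the selected clusters against the repository, so a merged pull request becomes a deployment.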

Conclusion: 

Adopting Kubernetes doesn’t have to be hard. Reliable solutions like Rancher can help teams better manage their clusters and applications on Kubernetes. KMPs reduce the barrier to entry for adopting Kubernetes and help ease the transition from traditional IT to cloud native architectures.
 

For Kubernetes users who need additional support and services, there is Rancher Prime – the complete product and support subscription package of Rancher. Enterprises adopting Kubernetes and utilizing Rancher Prime have seen substantial economic benefits, which you can learn more about in Forrester’s ‘Total Economic Impact’ report on Rancher Prime.

Challenges and Solutions with Cloud Native Persistent Storage

Wednesday, January 18, 2023

Persistent storage is essential for any account-driven website. However, in Kubernetes, most resources are ephemeral and unsuitable for keeping data long-term. Regular storage is tied to the container and has a finite life span. Persistent storage has to be separately provisioned and managed.

Making permanent storage work with temporary resources brings challenges that you need to solve if you want to get the most out of your Kubernetes deployments.

In this article, you’ll learn about what’s involved in setting up persistent storage in a cloud native environment. You’ll also see how tools like Longhorn and Rancher can enhance your capabilities, letting you take full control of your resources.

Persistent storage in Kubernetes: challenges and solutions

Kubernetes has become the go-to solution for containers, allowing you to easily deploy scalable sites with a high degree of fault tolerance. In addition, there are many tools to help enhance Kubernetes, including Longhorn and Rancher.

Longhorn is a lightweight block storage system that you can use to provide persistent storage to Kubernetes clusters. Rancher is a container management tool that helps you with the challenges that come with running multiple containers.

You can use Rancher and Longhorn together with Kubernetes to take advantage of both of their feature sets. This gives you reliable persistent storage and better container management tools.

How Kubernetes handles persistent storage

In Kubernetes, files only last as long as the container, and they’re lost if the container crashes. That’s a problem when you need to store data long-term. You can’t afford to lose everything when the container disappears.

Persistent Volumes are the solution to these issues. You can provision them separately from the containers they use and then attach them to containers using a PersistentVolumeClaim, which allows applications to access the storage:

[Diagram: the relationship between a container application, its own ephemeral storage and persistent storage. Courtesy of James Konik]
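
As a minimal sketch, the claim below requests storage and a pod mounts it; the names, size and image are illustrative:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    # mount the claimed volume into the container
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: app-data

The data in the volume outlives the pod: if the pod crashes or is rescheduled, a replacement pod can bind the same claim and pick up where it left off.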

However, managing how these volumes interact with containers and setting them up to provide the combination of security, performance and scalability you need bring further issues.

Next, you’ll take a look at those issues and how you can solve them.

Security

With storage, security is always a key concern. It’s especially important with persistent storage, which is used for user data and other critical information. You need to make sure the data is only available to those that need to see it and that there’s no other way to access it.

There are a few things you can do to improve security:

Use RBAC to limit access to storage resources

Role-based access control (RBAC) lets you manage permissions easily, granting users permissions according to their role. With it, you can specify exactly who can access storage resources.

Kubernetes provides RBAC management and allows you to assign both Roles, which apply to a specific namespace, and ClusterRoles, which are not namespaced and can be used to give permissions on a cluster-wide basis.
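
As a brief sketch, the Role below grants read-only access to PersistentVolumeClaims in one namespace, and the RoleBinding assigns it to a user; the namespace, role and user names are illustrative:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: storage-viewer
  namespace: team-a
rules:
# read-only access to claims in the team-a namespace
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: storage-viewer-binding
  namespace: team-a
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: storage-viewer
  apiGroup: rbac.authorization.k8s.io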

Tools like Rancher also include RBAC support. Rancher’s system is built on top of Kubernetes RBAC, which it uses for enforcement.

With RBAC in place, not only can you control who accesses what, but you can also change those permissions easily. That’s particularly useful for enterprise software managers who need to manage hundreds of accounts at once. RBAC lets them control access to the storage layer, defining what is allowed and changing those rules quickly on a role-by-role basis.

Use namespaces

Namespaces in Kubernetes allow you to create groups of resources. You can then set up different access control rules and apply them independently to each namespace, giving you extra security.

If you have multiple teams, it’s a good way to stop them from getting in each other’s way. It also keeps each team’s resources private to its own namespace.

Namespaces do provide a layer of basic security, compartmentalizing teams and preventing users from accessing what you don’t want them to.

However, from a security perspective, namespaces do have limitations. For example, they don’t actually isolate all the shared resources that the namespaced resources use. That means if an attacker gains escalated privileges, they can access resources in other namespaces served by the same node.

Scalability and performance

Delivering your content quickly provides a better user experience, and maintaining that quality as your traffic increases and decreases adds an additional challenge. There are several techniques to help your apps cope:

Use storage classes for added control

Kubernetes storage classes let you define how your storage is used, and there are various settings you can change. For example, you can choose to make classes expandable. That way, you can get more space if you run out without having to provision a new volume.

Longhorn has its own storage classes to help you control when Persistent Volumes and their containers are created and matched.

Storage classes let you define the relationship between your storage and other resources, and they are an essential way to control your architecture.
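
As a sketch of both ideas, the storage class below uses Longhorn’s CSI provisioner, allows volumes to be expanded after creation and delays provisioning until a consuming pod exists; the class name and parameter values are illustrative:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-expandable
provisioner: driver.longhorn.io
# let bound PVCs grow without reprovisioning
allowVolumeExpansion: true
# don't provision until a pod actually uses the claim
volumeBindingMode: WaitForFirstConsumer
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "30"

The volumeBindingMode setting also ties into the next section: it defers provisioning until a workload actually needs the volume.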

Dynamically provision new persistent storage for workloads

It isn’t always clear how much storage a resource will need. Provisioning dynamically, based on that need, allows you to limit what you create to what is required.

You can have your storage wait until a container that uses it is created before it’s provisioned, which avoids the wasted overhead of creating storage that is never used.

Using Rancher with Longhorn’s storage classes lets you provision storage dynamically without having to rely on cloud services.

Optimize storage based on use

Persistent storage volumes have various properties. Their size is an obvious one, but latency and CPU resources also matter.

When creating persistent storage, make sure that the parameters used reflect what you need to use it for. A service that needs to respond quickly, such as a login service, can be optimized for speed.

Using different storage classes for different purposes is easier with a provider like Longhorn. Longhorn storage classes can specify different disk technologies, such as NVMe, SSD or rotational disks, and these can be linked to specific nodes, allowing you to match storage to your requirements closely.
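
As a sketch, assuming the relevant disks and nodes have been tagged “ssd” and “fast” in Longhorn, a class for a latency-sensitive service might look like this:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-fast
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "2"
  # only place replicas on disks tagged "ssd"
  diskSelector: "ssd"
  # only place replicas on nodes tagged "fast"
  nodeSelector: "fast"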

Stability

Building a stable product means getting the infrastructure right and aggressively looking for errors. That way, your product quality will be as high as possible.

Maximize availability

Outages cost time and money, so avoiding them is an obvious goal.

When they do occur, planning for them is essential. With cloud storage, you can automate reprovisioning of failed volumes to minimize user disruption.

To prevent data loss, you must ensure dynamically provisioned volumes aren’t automatically deleted when a resource is done with them. Kubernetes provides storage object in-use protection, so volumes that are still bound aren’t immediately lost.

You can control the behavior of storage volumes by setting the reclaim policy. Picking the retain option lets you manually choose what to do with the data and prevents it from being deleted automatically.
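
A short sketch: setting the reclaim policy on the storage class means every volume dynamically provisioned from it is retained rather than deleted when its claim goes away (the class name is illustrative):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-retain
provisioner: driver.longhorn.io
# released volumes are kept for manual cleanup or reuse
reclaimPolicy: Retain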

Monitor metrics

As well as challenges, working with cloud volumes also offers advantages. Cloud providers typically include many strong options for monitoring volumes, facilitating a high level of observability.

Rancher makes it easier to monitor Kubernetes clusters. Its built-in Grafana dashboards let you view data for all your resources.

Rancher collects memory and CPU data by default, and you can break this data down by workload using PromQL queries.

For example, if you wanted to know how much data was being read to a disk by a workload, you’d use the following PromQL from Rancher’s documentation:


sum(rate(container_fs_reads_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)

Longhorn also offers a detailed selection of metrics for monitoring nodes, volumes, and instances. You can also check on the resource usage of your manager, along with the size and status of backups.

The observability these metrics provide has several uses. Log any detected errors in as much detail as possible, so you can identify and solve problems. Monitor performance as well, setting alerts that trigger if it drops below a given threshold, so you can spot issues and resolve them before they become serious.

Get the infrastructure right for large products

For enterprise-grade products that require fast, reliable distributed block storage, Longhorn is ideal. It facilitates deploying a highly resilient storage infrastructure, with features like application-aware snapshots, backups and remote replication, meaning you can protect your data at scale.

It lets you provision storage on the major cloud providers, with built-in support for Azure, Google Cloud Platform (GCP) and Amazon Web Services (AWS).

Longhorn also lets you spread your storage over multiple availability zones (AZs). However, keep in mind that there can be latency issues if volume replicas reside in different regions.

Conclusion

Managing persistent storage is a key challenge when setting up Kubernetes applications. Because Persistent Volumes work differently from regular containers, you need to think carefully about how they interact; how you set things up impacts your application performance, security and scalability.

With the right software, these issues become much easier to handle. With help from tools like Longhorn and Rancher, you can solve many of the problems discussed here. That way, your applications benefit from Kubernetes while letting you keep a permanent data store your other containers can interact with.

SUSE is an open source software company responsible for leading cloud solutions like Rancher and Longhorn. Longhorn is an easy, fast and reliable cloud native distributed storage platform. Rancher lets you manage your Kubernetes clusters to ensure consistency and security. Together, these and other products are perfect for delivering business-critical solutions.

SUSE Receives 15 Badges in the Winter G2 Report Across its Product Portfolio

Thursday, January 12, 2023

I’m pleased to share that G2, the world’s largest and most trusted tech marketplace, has recognized our solutions in its 2023 Winter Report. We received a total of 15 badges across our business units for Rancher, SUSE Linux Enterprise Server (SLES), SLE Desktop and SLE Real Time – including the Users Love Us badge for all products – as well as three badges for the openSUSE community with Leap and Tumbleweed.

We recently celebrated 30 years of service to our customers, partners and the open source communities and it’s wonderful to keep the celebrations going with this recognition by our peers. Receiving 15 badges this quarter reinforces the depth and breadth of our strong product portfolio as well as the dedication that our team provides for our customers.

As the use of hybrid, multi-cloud and cloud native infrastructures grows, many of our customers are looking to containers. For their business success, they look to Rancher, which has been the leading multi-cluster Kubernetes management platform for nearly a decade and has one of the strongest adoption rates in the industry.

G2 awarded Rancher four badges, including High Performer badges in the Container Management and the Small Business Container Management categories and Most Implementable and Easiest Admin in the Small Business Container Management category.

Adding to the badges that SLES received in October, SLES was once again named Momentum Leader and Leader in the Server Virtualization category; Momentum Leader and High Performer in the Infrastructure as a Service category; and received two badges in the Mid-Market Server Virtualization category for Best Support and High Performer.

In addition, SLE Desktop was again awarded two High Performer badges in the Mid-Market Operating System and Operating System categories. SLE Real Time also received a High Performer badge in the Operating System category. The openSUSE community distribution Leap was recognized as the Fastest Implementation in the Operating System category. It’s clear that our Business Critical Linux solutions continue to be the cornerstone of success for many of our customers and that we continue to provide excellent service for the open source community.

Here’s what some of our customers said in their reviews on G2:

“[Rancher is a] complete package for Kubernetes.”

“RBAC simple management is one of the best upsides in Rancher, attaching Rancher post creation process to manage RBAC, ingress and [getting] a simple UI overview of what is going on.”

“[Rancher is the] best tool for managing multiple production clusters of Kubernetes orchestration. Easy to deploy services, scale and monitor services on multiple clusters.”

“SLES the best [for] SAP environments. The support is fast and terrific.”

Providing our customers with solutions that they know they can rely on and trust is critical to the work we do every day. These badges are a direct response to customer feedback and product reviews and underscore our ability to serve the needs of our customers across all of our solutions. I’m looking forward to seeing what new badges our team will be awarded in the future as a result of their excellent work.

 

Rancher Wrap: Another Year of Innovation and Growth

Monday, December 12, 2022

2022 was another year of innovation and growth for SUSE’s Enterprise Container Management business. We introduced significant upgrades to our Rancher and NeuVector products, launched new open source projects and matured others. Exiting 2022, Rancher remains the industry’s most widely adopted container management platform and SUSE remains the preferred vendor for enabling enterprise cloud native transformation. Here’s a quick look at a few key themes from 2022.  

Security Takes Center Stage 

As the container management market matured in 2022, container security took center stage. Customers and the open source community alike voiced concerns about the risks posed by their increasing reliance on hybrid-cloud, multi-cloud and edge infrastructure. Beginning with the open sourcing of NeuVector, which we acquired in Q4 2021, we continued throughout 2022 to meet our customers’ most stringent security and assurance requirements, making strategic investments across our portfolio, including:

  • Kubewarden – In June, we donated Kubewarden to the CNCF. Now a CNCF sandbox project, Kubewarden is an open source policy engine for Kubernetes that automates the management and governance of policies across Kubernetes clusters thereby reducing risk.  It also simplifies the management of policies by enabling users to integrate policy management into their CI/CD engines and existing infrastructure.  
  • SUSE NeuVector 5.1 – In November, we released SUSE NeuVector 5.1, further strengthening our already industry-leading container security platform.
  • Rancher Prime – Most recently, we introduced Rancher Prime, our new commercial offering, replacing SUSE Rancher. Supporting our focus on security assurances, Rancher Prime offers customers the option of accessing their Rancher Prime software directly from a trusted private registry. Additionally, Rancher Prime FIPS-140-3 and SLSA Level 2 and 3 certifications will be finalized in 2023.

Open Source Continues to Fuel Innovation 

 Our innovation did not stop at security. In 2022, we also introduced new projects and matured others, including:  

  • Elemental – Fit for edge deployments, Elemental is an open source project that enables centralized management and operation of RKE2 and K3s clusters when deployed with Rancher.
  • Harvester – SUSE’s open source, cloud native hyperconverged infrastructure (HCI) alternative to proprietary HCI is now utilized across more than 710 active clusters.
  • Longhorn – Now a CNCF incubating project, Longhorn is deployed across more than 72,000 nodes.
  • K3s – SUSE’s lightweight Kubernetes distribution designed for the edge, which we donated to the CNCF, has surpassed 4 million downloads.
  • Rancher Desktop – SUSE’s desktop-based container development environment for Windows, macOS and Linux has surpassed 520,000 downloads and 4,000 GitHub stars since its January release.
  • Epinio – SUSE’s Kubernetes-powered application development platform-as-a-service (PaaS) solution, in which users can deploy apps without setting up infrastructure themselves, has surpassed 4,000 downloads and 300 stars on GitHub since its introduction in September.
  • Opni – SUSE’s multi-cluster observability tool (including logging, monitoring and alerting) with AIOps has seen steady growth, with over 75 active deployments this year.

As we head into 2023, Gartner research indicates the container management market will grow at roughly 25% CAGR to $1.4B in 2025. In that same time period, 85% of large enterprises will have adopted container management solutions, up from 30% in 2022. SUSE’s 30-year heritage in delivering enterprise infrastructure solutions, combined with our market-leading container management solutions, uniquely positions SUSE as the vendor of choice for helping organizations on their cloud native transformation journeys. I can’t wait to see what 2023 holds in store!

Q&A: How to Find Value at the Edge Featuring Michele Pelino

Tuesday, December 6, 2022

We recently held a webinar, “Find Value at the Edge: Innovation Opportunities and Use Cases,” where Forrester Principal Analyst Michele Pelino was our guest speaker. After the event, we held a Q&A with Pelino highlighting edge infrastructure solutions and benefits. Here’s a look into the interview: 

SUSE: What technologies (containers, Kubernetes, cloud native, etc.) enable workload affinity in the context of edge? 

Michele: The concept of workload affinity enables firms to deploy software where it runs best. Workload affinity is increasingly important as firms deploy AI code across a variety of specialized chips and networks. As firms explore these new possibilities, running the right workloads in the right locations — cloud, data center, and edge — is critical. Increasingly, firms are embracing cloud native technologies to achieve these deployment synergies. 

Many technologies enable workload affinity for firms — for example, cloud native integration tools and container platforms’ application architecture solutions that enable the benefits of cloud everywhere. Kubernetes, a key open source system, enables enterprises to automate deployment, as well as to scale and manage containerized applications in a cloud native environment. Kubernetes solutions also provide developers with software design, deployment, and portability strategies to extend applications in a seamless, scalable manner. 

SUSE: What are the benefits of using cloud native technology in implementing edge computing solutions? 

Michele: Proactive enterprises are extending applications to the edge by deploying compute, connectivity, storage, and intelligence close to where it’s needed. Cloud native technologies deliver massive scalability, as well as enable performance, resilience, and ease of management for critical applications and business scenarios. In addition, cloud functions can analyze large data sets, identify trends, generate predictive analytics models, and remotely manage data and applications globally. 

Cloud native apps can leverage development principles such as containers and microservices to make edge solutions more dynamic. Applications running at the edge can be developed, iterated, and deployed at an accelerated rate, which reduces the time it takes to launch new features and services. This approach improves end user experience because updates can be made swiftly. In addition, when connections are lost between the edge and the cloud, those applications at the edge remain up to date and functional. 

SUSE: How do you mitigate/address some of the operational challenges in implementing edge computing at scale? 

Michele: Edge solutions make real-time decisions across key operational processes in distributed sites and local geographies. Firms must address key impacts on network operations and infrastructure. It is essential to ensure interoperability of edge computing deployments, which often have different device, infrastructure, and connectivity requirements. Third-party partners can help stakeholders deploy seamless solutions across edge environments, as well as connect to the cloud when appropriate. Data centers in geographically diverse locations make maintenance more difficult and highlight the need for automated and orchestrated management systems spanning various edge environments. 

Other operational issues include assessing data response requirements for edge use cases and the distance between edge environments and computing resources, which impacts response times. Network connectivity issues include evaluating bandwidth limitations and determining processing characteristics at the edge. It is also important to ensure that deployment initiatives enable seamless orchestration and maintenance of edge solutions. Finally, it is important to identify employee expertise to determine skill-set gaps in areas such as mesh networking, software-defined networking (SDN), analytics, and development expertise. 

SUSE: What are some of the must-haves for securing the edge? 

Michele: Thousands of connected edge devices across multiple locations create a fragmented attack surface for hackers, as well as business-wide networking fabrics that interweave business assets, customers, partners, and digital assets connecting the business ecosystem. This complex environment elevates the importance of addressing edge security and implementing strong end-to-end security from sensors to data centers in order to mitigate security threats. 

Implementing a Zero Trust edge (ZTE) policy for networks and devices powering edge solutions, using a least-privileged approach to access control, addresses these security issues. ZTE solutions securely connect and transport traffic using Zero Trust access principles in and out of remote sites, leveraging mostly cloud-based security and networking services. These ZTE solutions protect businesses from customers, employees, contractors, and devices at remote sites connecting through WAN fabrics to more open, dangerous, and turbulent environments. When designing a system architecture that incorporates edge computing resources, technology stakeholders need to ensure that the architecture adheres to cybersecurity best practices and regulations that govern data wherever it is located.

SUSE: Once cloud radio access network (RAN) becomes a reality, will operators be able to monetize the underlying edge infrastructure to run customer applications side by side? 

Michele: Cloud RAN can enhance network versatility and agility, accelerate introduction of new radio features, and enable shared infrastructure with other edge services, such as multiaccess edge computing or fixed-wireless access. In the future, new opportunities will extend use cases to transform business operations and industry-focused applications. Infrastructure sharing will help firms reduce costs, enhance service scalability, and facilitate portable applications. RAN and cloud native application development will extend private 5G in enterprise and industrial environments by reducing latency from the telco edge to the device edge. Enabling compute functions closer to the data will power AI and machine-learning insights to build smarter infrastructure, smarter industry, and smarter city environments. Sharing insights and innovations through open source communities will facilitate evolving innovation in cloud RAN deployments and emerging applications that leverage new hardware features and cloud native design principles.
 

What’s next? 

Register and watch the “Find Value at the Edge: Innovation Opportunities and Use Cases” Webinar today! Also, get a complimentary copy of the Forrester report: The Future of Edge Computing.  

 

Harvester 1.1.0: The Latest Hyperconverged Infrastructure Solution

Wednesday, October 26, 2022

The Harvester team is pleased to announce the next release of our open source hyperconverged infrastructure product. For those unfamiliar with how Harvester works, I invite you to check out this blog from our 1.0 launch that explains it further. This next version of Harvester adds several new and important features to help our users get more value out of Harvester. It reflects the efforts of many people, both at SUSE and in the open source community, who have contributed to the product thus far. Let’s dive into some of the key features.  

GPU and PCI device pass-through 

The GPU and PCI device pass-through experimental features are some of the most requested features this year and are officially live. These features enable Harvester users to run applications in VMs that need to take advantage of PCI devices on the physical host. Most notably, GPUs are an ever-increasing use case to support the growing demand for Machine Learning, Artificial Intelligence and analytics workloads. Our users have learned that both container and VM workloads need to access GPUs to power their businesses. This feature also can support a variety of other use cases that need PCI; for instance, SR-IOV-enabled Network Interface Cards can expose virtual functions as PCI devices, which Harvester can then attach to VMs. In the future, we plan to extend this function to support advanced forms of device passthrough, such as vGPU technologies.  

VM Import Operator  

Many Harvester users maintain other HCI solutions running a varied array of VM workloads, and in some cases they want to migrate those VMs to Harvester. To make this process easier, we created the VM Import Operator, which automates the migration of VMs from existing HCI platforms to Harvester. It currently supports two popular flavors: OpenStack and VMware vSphere. The operator connects to either of those systems and copies the virtual disk data for each VM to Harvester’s datastore. It then translates the metadata that configures the VM to the comparable settings in Harvester.

Storage network 

Harvester runs on various hardware profiles, with some clusters being more compute-optimized and others optimized for storage performance. For workloads needing high-performance storage, one way to increase efficiency is to dedicate a network to storage replication. For this reason, we created the Storage Network feature. A dedicated storage network removes I/O contention between workload traffic (pod-to-pod communication, VM-to-VM, etc.) and the latency-sensitive storage traffic. Additionally, higher-capacity network interfaces can be procured for storage, such as 40 or 100 Gb Ethernet.

Storage tiering  

When supporting workloads requiring different types of storage, it is important to be able to define classes or tiers of storage that a user can choose from when provisioning a VM. Tiers can be labeled with convenient terms such as “fast” or “archival” to make them user-friendly. In turn, the administrator can then map those storage tiers to specific disks on the bare metal system. Both node and disk label selectors define the mapping, so a user can specify a unique combination of nodes and disks on those nodes that should be used to back a storage tier. Some of our Harvester users want to use this feature to utilize slower magnetic storage technologies for parts of the application where IOPS is not a concern and low-cost storage is preferred.
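
Since Harvester’s storage layer builds on Longhorn, a tier can be expressed as a storage class whose parameters select tagged disks and nodes. This is a sketch, assuming the disks and nodes have been tagged “hdd” and “archival-node” accordingly:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: archival
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "3"
  # back this tier with magnetic disks tagged "hdd"
  diskSelector: "hdd"
  # restrict replicas to nodes tagged for archival storage
  nodeSelector: "archival-node"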

In summary, the past year has been an important chapter in the evolution of Harvester. As we look to the future, we expect to see more features and enhancements in store. Harvester plans to have two feature releases next year, allowing for more rapid iteration on the ideas in our roadmap. You can download the latest version of Harvester on GitHub. Please continue to share your feedback with us through our community Slack or your SUSE account representative.

Learn more

Download our FREE eBook, 6 Reasons Why Harvester Accelerates IT Modernization Initiatives. This eBook identifies the top drivers of IT modernization, outlines an IT modernization framework and introduces Harvester, an open, interoperable hyperconverged infrastructure (HCI) solution.

IndustryFusion, the Digitalization Standard for Industry – Secure and Sovereign

Wednesday, October 26, 2022

Secure digitalization for mid-sized companies

Smart factories and intelligent products and services have long since ceased to be a thing of the future. According to IDC, around 75 billion intelligent edge devices will be online within two years – many of them in Industry 4.0 environments. Digitalization is advancing in all industrial sectors, as we can see in the rapid development of edge computing scenarios. A Linux Foundation study predicts that edge computing will be four times larger than the cloud and that 75 percent of the world’s data will be generated at the edge by 2025.

This development raises many questions, and not just for mid-sized manufacturing companies in Germany. With which strategy and which solutions can companies digitalize securely and easily? Which solutions make it possible to fully exploit the innovation potential within the company? Which solutions offer vendor independence with open standards, into which all production machines can be integrated – with maximum security for all data and with digital sovereignty?

 

IndustryFusion: an open standard

The IndustryFusion connectivity solution provides an answer to these questions. This open source solution is being developed within the Industry Business Network 4.0 association in close collaboration between machine and plant manufacturers, component manufacturers, software developers and representatives from science and politics. With IndustryFusion, companies can thus fully exploit their innovation potential with the possibilities of Industry 4.0. As industry partners in the IndustryFusion Foundation, SUSE and Intel are closely involved in the development of these solutions.

Yesterday, together with our partners, we were able to present our first connectivity solution for machine and plant manufacturers at EuroBLECH in Hanover. Together with our partners in the IndustryFusion Foundation*, we are showing there what the future looks like with Industry 4.0 and how small and medium-sized enterprises can benefit from it.

The central trade fair message is: “IndustryFusion: The new standard of digitalization”

IndustryFusion: why openness always matters

SUSE enables customers to digitalize easily, securely and at the pace of their choosing with innovative open source enterprise solutions – in the data center, in the cloud and at the edge. With technologies based entirely on open standards, we are therefore ideally positioned to develop, together with our partners in the IndustryFusion Foundation, cross-environment solutions for secure digitalization for the German Mittelstand.

Our joint smart factory and Industry 4.0 solutions put customer convenience first: simple deployment and maintenance with maximum data security. A fast return on investment and measurable improvements in production ensure that companies can fully exploit their innovation potential.

SUSE has stood for German engineering and innovation for 30 years. We are a European company, founded in Germany. Our open source enterprise solutions stand for independence and make an important contribution to strengthening digital sovereignty in Europe.

SUSE places the highest value on security and works with, among others, the BSI (the German Federal Office for Information Security). SUSE is currently the only vendor of a current general-purpose Linux operating system that has a secure software supply chain and is Common Criteria EAL 4+ certified. In addition, in the cloud native space we are developing comprehensive security solutions for cloud and edge, based among other things on zero trust. SUSE is the only vendor that can provide a use-case-based edge solution tailored precisely to the customer’s needs, since for the different edge applications – general edge applications, telecommunications

We have implemented extensive projects in the Industry 4.0 and edge environment – for example with Bosch, Claas, Continental and Knorr-Bremse. Even in space, we ensure that satellites are regularly supplied with secure software updates, giving them a longer service life (Kongsberg Spacetec and Hypergiant).

As part of the IndustryFusion Foundation, we contribute our expertise so that manufacturing companies can benefit from Industry 4.0 easily and securely.

 

You can find the joint IndustryFusion partner booth at EuroBLECH in Hanover from October 24-28, Hall 13, Stand E112.

 

* The IndustryFusion Foundation, founded in 2020, builds on this long-standing, close collaboration. At its EuroBLECH booth, the Foundation is now presenting use cases for the first time that are based on IndustryFusion, the vendor-independent open source solution for the intelligent networking of machines in industrial manufacturing. In the future, this will also enable small and medium-sized enterprises to benefit from increasing digitalization.

Holger Pfister is Country Manager Germany at SUSE

 

Kubernetes Security Guide: Ten Questions You Should Ask Your Security Team

Friday, October 21, 2022

Containerized applications and Kubernetes infrastructures are increasingly coming into the focus of attackers. However, the new threats often cannot be detected with conventional security tools. The security guide from SUSE shows where the biggest risks lie – and how you can comprehensively protect your environment.

With container technology and tools like Kubernetes, companies can automate many aspects of application delivery. This helps them adapt their business quickly to new requirements. However, modern application architectures are just as vulnerable to attacks and exploits by hackers and insiders as conventional environments. Ransomware, cryptojacking, data theft and other cyber risks also threaten new, container-based environments in the cloud.

 

In addition, new tools and technologies such as Kubernetes and managed container services in the public cloud often become the gateway for attacks on a company’s most valuable data and assets. Since the first man-in-the-middle attacks in the Kubernetes environment and the Tesla exploit, the threat landscape for companies has worsened considerably.

The hyperdynamic nature of containers creates four major security challenges in particular:

  • Vulnerabilities in the CI/CD pipeline: Critical vulnerabilities are regularly discovered in the open source components of a CI/CD pipeline. These can affect container images from the build phase all the way to production. Through compromised container images, cybercriminals repeatedly manage to inject malware or gain unauthorized access to the network – without being detected by traditional security solutions.
  • Explosion of east-west traffic: While monolithic applications can be protected by conventional firewalls and security tools on the host, containers create new risks through sharply increasing internal traffic. This so-called east-west traffic must be continuously monitored for possible attacks.
  • Larger attack surface: Every single container can have a different vulnerability that enables an exploit. The additional attack surface created by container orchestration tools such as Kubernetes must also be taken into account. New attack methods often target the Kubernetes infrastructure directly, attempting to attack components such as API servers or kubelets.
  • High pace of change: Traditional IT security methods and tools often cannot keep up with the dynamics of a constantly changing container environment. It usually takes only minutes or seconds for new containers or pods to become available. This also continually affects the behavior of applications and network connections. To protect their environment comprehensively, companies need automated next-generation tools that apply security policies early in the pipeline and manage them as code.

Is your company in a position to protect your container environment from new risks and successfully fend off attacks? The team responsible for the security of your Kubernetes infrastructure should, in particular, be able to answer the following ten questions:

  1. Do you have a process to eliminate critical vulnerabilities as quickly as possible with available fixes – already in the build phase of your pipeline?
  2. Do you have visibility into all deployed Kubernetes pods? For example, do you know how the pods behave in normal operation and how pods and clusters communicate with each other?
  3. Can you detect potentially malicious behavior in the internal traffic between individual containers?
  4. Are you alerted when internal service pods or containers start scanning ports internally or randomly attempt to connect to the external network?
  5. How do you detect whether an attacker may have penetrated your containers, pods or hosts?
  6. Are you able to inspect the network connections of your container environment to the same extent as the rest of your network? For example, at layer 7?
  7. Do you have an overview of all processes within a pod or container to determine whether there may be an exploit?
  8. Do you regularly review the access rights to your Kubernetes clusters to understand possible attack paths for insiders?
  9. Do you have a checklist for locking down Kubernetes services, role-based access permissions and container hosts?
  10. When troubleshooting or capturing forensic data, how do you locate the problematic pod and collect its log data? Can you also capture and analyze the raw traffic before it disappears?

Taking the time to ask the right questions at the right time is an important step on the path to comprehensive container security. You can learn more about the topic in our guide “The Ultimate Guide to Kubernetes Security.”

The guide helps you understand typical attack methods and exploits in the Kubernetes environment and offers insights into real-world security incidents. It also contains a complete checklist of measures for hardening your infrastructure.

Download the guide “The Ultimate Guide to Kubernetes Security” now

How to Deliver a Successful Technical Presentation: From Zero to Hero

Wednesday, October 12, 2022

Introduction

I had the chance to talk about Predictive Autoscaling Patterns with Kubernetes at the Container Days 22 conference in September 2022. I delivered the talk with a former colleague in Hamburg, Germany, and it was an outstanding experience! The process began when the call for papers opened back in March 2022. My colleague and I worked together, playing with the technology, getting a better understanding of the components and preparing the labs.

In this article, I will discuss my experiences, lessons learned and suggestions for providing a successful technical presentation. 

My Experiences

As a cloud consultant in a previous role, I attended events such as CNCF KubeCon and the Open Source Infra Summit. I also helped in workshops, serving as booth staff, performing demos and introducing the product to attendees. Public speaking was something that always piqued my interest, but I didn’t know where to start.

One of my previous duties was to provide technical expertise to customers and help sales organizations identify potential solutions and create workshops to work with the customers. Doing this gave me a unique opportunity to introduce myself to the process of speaking; I found it interesting and a great source of self-reflection.

Developing communication skills is not something you can learn just by taking a training course or listening to others doing it. I consider rehearsal mandatory, as I always learn something new every time. However, the best way to develop communication skills is to deliver content. 

How to Select the Right Topic 

Selecting the right topic for a speech is one of the first things you should consider. The topic should be a mix of something you are comfortable with and something you have enough technical background knowledge of; it does not need to be work-related, just something you find interesting and want to discuss. 

I delivered a talk with a former colleague, Roberto Carratalá, who works for a competitor. Right now, many of the most-used technologies (Kubernetes, its SIGs, programming languages, Kubevirt and many others) are open source projects with no single company behind them. Focusing on the technology itself can open new doors to selecting a vendor-agnostic topic that you and your co-speaker can both discuss. Don’t let company differences get in the way of delivering a great talk.

In our case, we decided to move forward with the Vertical Pod Autoscaler (VPA) and our architecture around it. We used examples and created use cases to showcase it. It is important to narrow the concept down to real use cases so the audience can connect them to their own, and so the material can serve as a baseline they can adapt for their customers.

VPA is a vendor-agnostic technology that can be used within a vendor’s distribution with minimal changes. Consider talking about a technology like this, which can then be applied to a vendor-specific product.

Whether you are an Engineer, Project Manager, Architect, Consultant or hold a non-technical role, we are all involved in IT. Within your area of specialization, you can talk about your experiences, what you learned, how you performed or even the challenges you faced explaining the process.

From “How to contribute to an open source project” to “How to write eBPF programs with Golang,” each topic will draw a different audience.

Here are some ideas: 

  • Have you recently had a good experience with a tool or project and want to share your experiences? 
  • Did you overcome a downtime situation with your customer? What a good experience to share! 
  • Business challenges and how you faced them. 
  • Are you a maintainer or contributor to a project? Take your chance and generate some hype among developers about your project. 

The bottom line is to not underestimate yourself and share your experiences; we all grow when we share! 

Practice Makes Perfect

In my experience, taking the time to practice and record yourself is important. Every time I reviewed my own recording, I found opportunities for improvement. Rehearse your delivery!

I had to understand that there is no “perfect word” to use; you explain yourself best when you feel comfortable speaking about the topic. Use language you are comfortable with, and the audience will appreciate your understanding.

Repeat your talk, stand up and try to feel comfortable while you’re speaking. Become familiar with the sound of your voice and the flow of the content. Once you feel comfortable enough, deliver the talk to your partner, your family or even close friends. This was a wonderful opportunity to get initial feedback in a friendly environment, and it helped me greatly.

The Audience 

Talking to hundreds or even thousands of attendees is a great challenge but can be frightening. Try to remember that all these people are there because they’re interested in the content you created. They are not expecting to become experts after the talk, nor do they want or expect you to fail. Don’t be afraid to find ‘your’ space on the stage so that you feel more comfortable. Always tell the audience that you’re excited to be at the event and looking forward to sharing your knowledge and experience with them. Speak freely, and remember to have fun while you do! 

Own the content; a speech is not a script. Don’t expect to remember every word that you wrote because it will feel very wooden. Try to riff on your content – evolve it every time it’s delivered, sharpening the emphasis of certain sections or dropping in a bit of humor along the way.  Make sure each time you give the speech it’s a unique experience. 

The Conference 

The time has come: I overcame the lack of self-confidence and all the doubts. It was time to polish up the final details before giving the speech. 

First, I found it useful to familiarize myself with the speaking room. If you are not told to stay in the same place (like a lectern or a marked spot on the stage), spend some time walking around the room, looking at the empty chairs, imagining yourself delivering the speech, and breathe slowly and deeply to reduce any anxiety that you feel. 

While delivering a talk is not 100% a conversation, attempt to talk to the audience; don’t focus on the first few rows and forget about the rest of the auditorium. Look at different parts of the audience when you are talking, make eye contact with them and ask questions. If possible, try to make it interactive. 

The last part of the speech usually consists of a question-and-answer section. One of the most common fears is “what if they ask something I don’t know?” Remember that no one expects you to know everything, so don’t be afraid to admit you don’t know something. Some questions can be tricky or too long to answer; just stay calm and point to the right resources where the audience can find the answers directly from the source.

We got many questions, which pleasantly surprised me because it proved that the audience was interested. It was fun to answer so many questions and interact with the audience.

Don’t be in a rush, talk about the content and take your time to breathe while you are speaking. Remind yourself you wrote the content, you own the content and nobody was forced to attend your talk; they attended freely because your content is worth it!  

Conclusion 

Overall, my speaking experiences were outstanding! I delivered mine with my former colleague and friend Roberto Carratalá, and we both really enjoyed the experience. We received good feedback, including some improvements to consider for our future speeches. 

I will submit to the next call for papers, whether it is standalone or co-speaking. So get out there and get speaking!