A Path to Legacy Application Modernization Through Kubernetes

Wednesday, 6 July, 2022

Legacy applications often have multiple services bundled into the same deployment unit without any logical grouping. They’re challenging to maintain because changes to one part of the application require changing other tightly coupled parts, making it harder to add or modify features. Scaling such applications is also tricky: it requires adding more hardware instances behind load balancers, which takes a lot of manual effort and is prone to errors.

Modernizing a legacy application requires you to visualize the architecture from a brand-new perspective, redesigning it to support horizontal scaling, high availability and code maintainability. This article explains how to modernize legacy applications using Kubernetes as the foundation and suggests three tools to make the process easier.

Using Kubernetes to modernize legacy applications

A legacy application can only meet a modern-day application’s scalability and availability requirements if it’s redesigned as a collection of lightweight, independent services.

Another critical part of modern application architecture is the infrastructure. Adding more server resources to scale individual services can lead to a large overhead that you can’t automate, which is where containers can help. Containers are self-contained, lightweight packages that include everything needed for a service to run. Combine this with a cluster of hardware instances, and you have an infrastructure platform where you can deploy and scale the application runtime environment independently.

Kubernetes can create a scalable and highly available infrastructure platform using container clusters. Moving legacy applications from physical or virtual machines to Kubernetes-hosted containers offers many advantages, including the flexibility to use on-premises and multi-cloud environments, automated container scheduling and load balancing, self-healing capability, and easy scalability.
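
To make that concrete, here is a minimal sketch using the official Kubernetes Python client: it deploys a containerized service and then scales it out with a single declarative change instead of provisioning new hardware. The image name, namespace and replica counts are placeholders rather than values from this article.

    from kubernetes import client, config

    # Load credentials from ~/.kube/config (use load_incluster_config() inside a pod)
    config.load_kube_config()
    apps = client.AppsV1Api()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="legacy-web", labels={"app": "legacy-web"}),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "legacy-web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "legacy-web"}),
                spec=client.V1PodSpec(
                    containers=[
                        client.V1Container(
                            name="web",
                            image="registry.example.com/legacy-web:1.0",  # placeholder image
                            ports=[client.V1ContainerPort(container_port=8080)],
                        )
                    ]
                ),
            ),
        ),
    )
    apps.create_namespaced_deployment(namespace="default", body=deployment)

    # Scaling out is a one-line change; Kubernetes schedules the extra replicas
    # across the cluster and restarts them if they fail.
    apps.patch_namespaced_deployment_scale(
        name="legacy-web", namespace="default", body={"spec": {"replicas": 5}}
    )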

Organizations generally adopt one of two approaches to deploy legacy applications on Kubernetes: running them in virtual machines or redesigning the application.

Using virtual machines

A monolithic application’s code and dependencies are embedded in a virtual machine (VM) so that images of the VM can run on Kubernetes. Frameworks like Rancher provide a one-click solution to run applications this way. The disadvantage is that the monolith remains unchanged, which doesn’t achieve the fundamental goal of using lightweight container images. It is also possible to keep the more complex parts of the application in VMs and containerize the simpler ones. This hybrid approach breaks down the monolith to some extent without a huge refactoring effort, and tools like Harvester can help manage the integration between VMs and containers.

Redesigning the application

Redesigning a monolithic application to support container-based deployment is a challenging task that involves separating the application’s modules and recreating them as stateless and stateful services. Containers, by nature, are stateless and require additional mechanisms to handle the storage of state information. It’s common to use the distributed storage of the container orchestration cluster or third-party services for such persistence.
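
To illustrate the stateful side, the sketch below (again using the Kubernetes Python client, with placeholder names and sizes) defines a StatefulSet whose volume claim template requests persistent storage from the cluster, so the container itself stays stateless and disposable.

    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    statefulset = client.V1StatefulSet(
        metadata=client.V1ObjectMeta(name="orders-db"),
        spec=client.V1StatefulSetSpec(
            service_name="orders-db",
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": "orders-db"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "orders-db"}),
                spec=client.V1PodSpec(
                    containers=[
                        client.V1Container(
                            name="db",
                            image="registry.example.com/orders-db:1.0",  # placeholder image
                            volume_mounts=[
                                client.V1VolumeMount(name="data", mount_path="/var/lib/data")
                            ],
                        )
                    ]
                ),
            ),
            # Each replica gets its own claim; the state lives in the cluster's
            # storage layer, not inside the container.
            volume_claim_templates=[
                client.V1PersistentVolumeClaim(
                    metadata=client.V1ObjectMeta(name="data"),
                    spec=client.V1PersistentVolumeClaimSpec(
                        access_modes=["ReadWriteOnce"],
                        resources=client.V1ResourceRequirements(requests={"storage": "1Gi"}),
                    ),
                )
            ],
        ),
    )
    apps.create_namespaced_stateful_set(namespace="default", body=statefulset)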

Organizations are more likely to adopt the first approach when the legacy application needs to move to a Kubernetes-based solution as soon as possible. This way, they can have a Kubernetes-based solution running quickly with less business impact and then slowly move to a completely redesigned application. Although Kubernetes migration has its challenges, some tools can simplify this process. The following are three such solutions.

Rancher

Rancher provides a complete container management platform for Kubernetes, giving you the tools to successfully run Kubernetes anywhere. It’s designed to simplify the operational challenges of running multiple Kubernetes clusters across different infrastructure environments. Rancher provides developers with a complete Kubernetes environment, irrespective of the backend, including centralized authentication, access control and observability features:

  • Unified UI: Most organizations have multiple Kubernetes clusters. DevOps engineers can sometimes face challenges when manually provisioning, managing, monitoring and securing thousands of cluster nodes while establishing compliance. Rancher lets engineers manage all these clusters from a single dashboard, and the same cluster inventory is available through its API (see the sketch after this list).
  • Multi-environment deployment: Rancher helps you create Kubernetes clusters across multiple infrastructure environments like on-premises data centers, public clouds and edge locations without needing to know the nuances of each environment.
  • App catalog: The Rancher app catalog offers different application templates. You can easily roll out complex application stacks on top of Kubernetes with the click of a button. One example is Longhorn, a distributed storage mechanism to help store state information.
  • Security policies and role-based access control: Rancher provides a centralized authentication mechanism and role-based access control (RBAC) for all managed clusters. You can also create pod-level security policies.
  • Monitoring and alerts: Rancher offers cluster monitoring facilities and the ability to generate alerts based on specific conditions. It can help transport Kubernetes logs to external aggregators.
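
As a rough illustration of that centralized view, the sketch below lists the clusters a Rancher server manages over its HTTP API. It is an assumption-laden example: the server URL and API token are placeholders you would create yourself, and exact field names can vary between Rancher versions.

    import requests

    RANCHER_URL = "https://rancher.example.com"  # placeholder server address
    API_TOKEN = "token-xxxxx:xxxxxxxx"           # placeholder Rancher API token

    resp = requests.get(
        f"{RANCHER_URL}/v3/clusters",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()

    # Print a one-line summary per managed cluster
    for cluster in resp.json().get("data", []):
        print(cluster.get("name"), "-", cluster.get("state"))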

Harvester

Harvester is an open source, hyperconverged infrastructure solution. It combines KubeVirt, a virtual machine add-on, and Longhorn, a cloud native, distributed block storage add-on, along with many other cloud native open source frameworks. Additionally, Harvester is built on Kubernetes itself.

Harvester offers the following benefits to your Kubernetes cluster:

  • Support for VM workloads: Harvester enables you to run VM workloads on Kubernetes. Running monolithic applications this way helps you quickly migrate your legacy applications without the need for complex cluster configurations.
  • Cost-effective storage: Harvester uses directly connected storage drives instead of external SANs or cloud-based block storage. This helps significantly reduce costs.
  • Monitoring features: Harvester comes with Prometheus, an open source monitoring solution that supports time series data. Grafana, an interactive visualization platform, is also built in, which means users can see VM and Kubernetes cluster metrics from the Harvester UI (a sketch of querying Prometheus directly follows this list).
  • Rancher integration: Harvester comes integrated with Rancher by default, so you can manage multiple Harvester clusters from the Rancher management UI. It also integrates with Rancher’s centralized authentication and RBAC.
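
As a taste of what that monitoring integration makes possible, here is a small sketch that queries a Prometheus endpoint over its standard HTTP API. The URL is a placeholder, and the query assumes the usual node_exporter CPU metric is being collected.

    import requests

    PROM_URL = "http://prometheus.example.com:9090"  # placeholder Prometheus endpoint

    # Approximate CPU utilisation per node over the last five minutes
    query = '100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)'

    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": query}, timeout=10)
    resp.raise_for_status()

    for series in resp.json()["data"]["result"]:
        instance = series["metric"].get("instance", "unknown")
        _, value = series["value"]
        print(f"{instance}: {float(value):.1f}% CPU in use")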

Longhorn

Longhorn is a distributed cloud storage solution for Kubernetes. It’s an open source, cloud native project originally developed by Rancher Labs, and it integrates with the Kubernetes persistent volume API. It helps organizations use a low-cost persistent storage mechanism for saving container state information without relying on cloud-based object storage or expensive storage arrays. Since it’s deployed on Kubernetes, Longhorn can be used with any storage infrastructure.

Longhorn offers the following advantages:

  • High availability: Longhorn’s microservice-based architecture and lightweight nature make it a highly available service. Its storage engine only needs to manage a single volume, dramatically simplifying the design of storage controllers. If there’s a crash, only the volume served by that engine is affected. The Longhorn engine is lightweight enough to support as many as 10,000 instances.
  • Incremental snapshots and backups: Longhorn’s UI allows engineers to create scheduled jobs for automatic snapshots and backups. It’s possible to execute these jobs even when a volume is detached. There’s also an adequate provision to prevent existing data from being overwritten by new data.
  • Ease of use: Longhorn comes with an intuitive dashboard that provides information about volume status, available storage and node status. The UI also helps configure nodes, set up backups and change operational settings.
  • Ease of deployment: Setting up and deploying Longhorn requires just a single click from the Rancher marketplace, and installing it from the command-line interface involves only a few commands. Longhorn is implemented as a container storage interface (CSI) plug-in, so workloads consume it through standard persistent volume claims (see the sketch after this list).
  • Disaster recovery: Longhorn supports creating disaster recovery (DR) volumes in separate Kubernetes clusters. When the primary cluster fails, it can fail over to the DR volume. Engineers can configure recovery time and point objectives when setting up that volume.
  • Security: Longhorn supports data encryption at rest and in motion. It uses Kubernetes secret storage for storing the encryption keys. By default, backups of encrypted volumes are also encrypted.
  • Cost-effectiveness: Being open source and easily maintainable, Longhorn provides a cost-effective alternative to the cloud or other proprietary services.
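
To show what consuming Longhorn looks like in practice, the sketch below creates a persistent volume claim against the storage class that a default Longhorn installation registers (conventionally named “longhorn”); the claim name, namespace and size are placeholders.

    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="app-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="longhorn",  # StorageClass created by a default Longhorn install
            resources=client.V1ResourceRequirements(requests={"storage": "5Gi"}),
        ),
    )
    core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)

    # Longhorn provisions a replicated block volume behind this claim; pods simply
    # reference the claim by name in their volume definitions.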

Conclusion

Modernizing legacy applications often involves converting them to containerized microservice-based architecture. Kubernetes provides an excellent solution for such scenarios, with its highly scalable and available container clusters.

The journey to Kubernetes-hosted, microservice-based architecture has its challenges. As you saw in this article, solutions are available to make this journey simpler.

SUSE is a pioneer in value-added tools for the Kubernetes ecosystem. SUSE Rancher is a powerful Kubernetes cluster management solution. Longhorn provides a storage add-on for Kubernetes and Harvester is the next generation of open source hyperconverged infrastructure solutions designed for modern cloud native environments.

SUSE Rancher removes roadblocks to transformation

Tuesday, 5 July, 2022

C-suite leaders across all industries are under increasing pressure to transform and innovate. Yet the journey toward unlocking new agility and efficiency, while meeting new customer demands, is often hampered by technology and process roadblocks.


We know the short-term implications of transformation can be initially overwhelming for many technology teams. Stakeholders expect any transformation to quickly deliver value and minimise disruption.


However, DevOps teams are now tasked with unifying their IT operations around Kubernetes, widely regarded as the new operating system for cloud-native development. With Gartner predicting that more than 75% of global organisations will run containerised applications in production this year, Kubernetes offers higher reliability on any infrastructure and enables DevOps teams to streamline processes.


While there are many benefits of leveraging Kubernetes and containers, users face a unique set of challenges:


  • Lack of real-time visibility over multiple Kubernetes clusters.
  • Inconsistent security policies that create enforcement and compliance risks.
  • Increased overheads for independently managing this growing ecosystem.


To combat these challenges and meet the ongoing demands set by internal stakeholders (and even shareholders), our customers and the community turn to SUSE’s Kubernetes management solution, SUSE Rancher.


Supercharging Ubisoft’s innovation and operational efficiency


A highly recognisable brand with countless moving parts, Ubisoft needed a way to power up its innovation and operational efficiency to continue to meet global growth ambitions. To boost innovation and drive management efficiencies, Ubisoft put SUSE Rancher at the heart of its central Kubernetes-as-a-Service (KaaS) hub, Ubisoft Kubernetes Service (UKS).


Ubisoft utilises SUSE Rancher to give its internal teams the ability to create new services and applications for internal stakeholders and customers in real time. This enables them to:

  • Unlock a 20% reduction in support ticket resolution time.
  • Reduce cluster deployment from days to minutes – an 80% reduction.
  • Tackle the challenge of growth head-on to gain a competitive advantage in the market.


Driving agility and scale at DIMOCO


Driving development agility at scale is DIMOCO’s top priority. As an industry leader, DIMOCO is at the cutting edge of mobile technology. One of its strategic transformation objectives is finding better ways to galvanise and support its legion of developers, while simplifying systems management processes.


Facilitating an incredible number of transactions – around 2 million every day – DIMOCO had a clear objective from the outset for how it would drive development agility at scale. This was a top priority for its engineering team, aiming to better support the growth ambitions of the business at large. With SUSE Rancher, the team was able to deliver:


  • 80% reduction in systems management time.
  • 75% reduction in systems maintenance and update time.
  • Clusters created in minutes and deployed with minimal human intervention.


Innovating for our customers is central to everything we do here at SUSE. SUSE Rancher is a pioneer in the container management space, and as more companies discover the need to integrate Kubernetes into their strategy, our customers know they can count on us to drive innovation across their business. If you’d like to learn how SUSE Rancher can better support your development teams to deliver faster and more efficiently, feel free to find out more here.


About the author

As the chief operating officer for SUSE APAC, Aidan Brecknell is focused on enabling SUSE’s APAC team to deliver on the strategic vision of delivering innovative, reliable and secure open source solutions that allow its customers to Innovate Everywhere. With more than a decade of experience as a senior executive and strategic consultant across the enterprise technology sector, Aidan understands the complex challenges SUSE’s customers are facing and how SUSE can best assist them in their ongoing transformation.


Manage without Disruption: Introducing SUSE Manager 4.3

Tuesday, 21 June, 2022

In a world of disruptions, the new SUSE Linux Enterprise (SLE) brings innovation without disruption.  Our new Linux helps our customers and partners stay ahead of cyberattacks with advanced supply chain security and confidential computing.

But what is a secure system worth if you can’t keep it that way?  Today, we’re excited to launch SUSE Manager 4.3 to keep your systems secure — no matter which Linux distro you are running or where it is located.

What’s SUSE Manager?

SUSE Manager is the only infrastructure management solution that manages a mixed Linux estate and ensures that it’s secure, scalable, and reaches everywhere you must manage Linux. With SUSE Manager, you can monitor, manage, and secure your Linux infrastructure without disruption, difficulty, or manual operations that are both expensive and risky. Ensure compliance with centralized control and do it at scale—anywhere from ten to one million clients: Nothing falls through the cracks.

Keep your entire mixed Linux environment—from RHEL to SLE and more—healthy and current with automation and scheduled IT tasks, making it possible for IT teams to spend less time going server to server and more time innovating for the business.

What’s New in SUSE Manager 4.3?

According to recent polls, the chief culprits behind unplanned downtime and increased cyberattacks are human error, migration issues, and unpatched infrastructure.  We address these issues in three different ways.

Security through Automation

SUSE Manager secures the entire infrastructure with automated patch and configuration management and provides auto remediation through sensors and beacons.  The new SUSE Manager validates against SCAP (Security Content Automation Protocol) and CVE (Common Vulnerabilities and Exposures) profiles.  It then enables you to automate or schedule updates – based on your individual company regulations. Finally, you can integrate SUSE Manager 4.3 with SLE Live Patching, which enables you to patch without experiencing downtime.
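
As a rough illustration of what that automation can look like, the sketch below talks to the XML-RPC API that SUSE Manager inherits from the Spacewalk/Uyuni code base and schedules outstanding security errata. Treat it as an assumption-laden example: the server address and credentials are placeholders, and method and field names should be checked against the API documentation for your SUSE Manager version.

    import ssl
    from xmlrpc.client import ServerProxy

    MANAGER_URL = "https://suma.example.com/rpc/api"  # placeholder SUSE Manager server
    USER, PASSWORD = "admin", "changeme"              # placeholder credentials

    proxy = ServerProxy(MANAGER_URL, context=ssl.create_default_context())
    key = proxy.auth.login(USER, PASSWORD)

    try:
        for system in proxy.system.listSystems(key):
            errata = proxy.system.getRelevantErrata(key, system["id"])
            # Field names follow Spacewalk/Uyuni API conventions; verify for your version.
            security_ids = [e["id"] for e in errata if "Security" in e.get("advisory_type", "")]
            if security_ids:
                # Schedule the outstanding security errata for this system
                proxy.system.scheduleApplyErrata(key, system["id"], security_ids)
    finally:
        proxy.auth.logout(key)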

Centralized Scalability

With the release of 4.3, you get centralized control and monitoring at any scale.  Manage your environment as a cohesive whole. That is, you manage your on-premises, cloud, container, edge and IoT infrastructure as a single estate. New features for the Hub architecture allow management of up to 1M clients.  And the rearchitected proxy and branch server enables you to deploy SUSE Manager 4.3 in even the most constrained environments, like on the edge or in retail environments. Regardless of the size of your environment, SUSE Manager 4.3 lets you run searchable, portfolio-wide reports so you can always know that your systems are secure, healthy, and compliant.

With improvements in monitoring, SUSE Manager integrates with Prometheus for real-time monitoring, and Grafana displays the results through graphical dashboards. And with the new HTTP APIs, you can easily extend SUSE Manager 4.3 and integrate it with your other solutions.

Freedom of Choice

This has always been SUSE Manager’s sweet spot.  And with the release of SUSE Manager 4.3, we continue to add Linux distros to our support list.  This includes new versions of RHEL, Oracle, Ubuntu, and the new CentOS clones – Rocky Linux and Alma Linux.

SUSE Manager manages all these environments through a single console – no matter where they are located – so you continue to get centralized monitoring and automation.

Learn more!

We invite you to learn more about the new SUSE Manager!  Find out why our customer Pole Emploi says:

“SUSE Manager gives us a much clearer view across the estate when it comes to responding to our security teams.  We use the built-in OpenSCAP auditing tool to check status and cross-reference with the data held by the security teams.”

Read the data sheet

Visit the website

Download SUSE Manager

Why Data Management Strategies Are So Important For You

Tuesday, 21 June, 2022

Guest blog by Shripad Hegde

Data has been the driver of revenue and success in many enterprises. Data volume has increased dramatically and is expected to rise further as 5G technologies are rolled out. It is more crucial now than ever for organizations to have a clear roadmap for the data management strategies that will influence their success in this competitive market. There are endless use cases and possibilities for capitalizing on data, be it automation to maximize impact or elevating the user experience of your customers.

SAP customers have to be prepared for a major migration to SAP HANA. Since most companies have legacy data, this migration will not be easy. Read the benchmark report to learn about the challenges faced and the best methodology for dealing with legacy data.

Download the SAPinsider Report

From March to April 2022, SAPinsider surveyed 138 IT professionals around the world to gain insights into their:

  • Data Strategy
  • Aims
  • Difficulties

The survey highlights that SAP customers are adopting more and more data solutions such as databases, data warehouses and data lakes, as well as a variety of relevant technologies to enable automation and analytics.

Many of these organizations are using critical measures to ensure data integrity, such as data orchestration and master data governance.

How to use Data Management Strategies for Business?

Without context and expertise, data alone will not solve all of the problems. Every organization must start by identifying its needs and having a clear action plan for its data management.  Some key growth strategies include:

  • Craft a robust migration plan to SAP HANA
  • Create and implement a cloud data plan to enhance the cloud’s value while addressing privacy and security concerns.
  • Define key constraints to reduce risks and develop important metrics to measure the success of your data strategy.

This benchmark report on ‘Data Management and SAP HANA’ helps you gain a clear understanding of:

  • What are the drivers of the migration, and what challenges are faced during it?
  • What are the required actions and requirements?
  • Which technologies are most critical for the business?

Visibility and consistency are essential for container security

Tuesday, 21 June, 2022

As I discussed in my previous article, business and technology leaders are under more pressure than ever to transform. That pressure flows directly to development teams tasked with unlocking organisational agility and meeting the changing needs of customers.


Of course, containers are essential for cloud-native transformation and the operating system for cloud-native development is Kubernetes. Kubernetes platforms enable development teams to deploy at the speed and scale required for today’s organisations and transformations.


At the same time, this new landscape of containers and Kubernetes has introduced a relative level of complexity for DevOps teams who are also attempting to instil Zero Trust and DevSecOps practices. The “shift left” movement has seen DevOps teams working to ensure security is integrated at the earliest possible stage of the development cycle.


And for good reason.


A new landscape of threats


With no shortage of high-profile cyberattacks in the press each week, cybersecurity is on the mind of executives who wonder when they’ll be next. Unfortunately, a cyberattack or data breach has become a near inevitability for organisations of every size, and in every sector.


As digital environments evolve through ongoing transformation, cyber attackers evolve their tactics to exploit vulnerabilities in new platforms or applications. Unsurprisingly, Kubernetes and containers offer their own unique vectors to exploit, with non-profit security organisation The Shadowserver Foundation recently discovering that 84% of systems hosting Kubernetes are accessible via the internet.


Beyond the potential data loss and reputational damage that comes from a cyberattack, there is also a raft of security and privacy regulations for large organisations to contend with, including PCI-DSS, SOC-2, and GDPR – all of which have strict requirements for automated compliance scanning and reporting capabilities in production environments.


However, NeuVector (recently acquired by SUSE) found in their annual container security survey that only 20% of DevOps practitioners report using a compliance tool for their container and Kubernetes environments. Almost 75% of respondents also had concerns over their Kubernetes runtime security – including their risk of network attacks, man-in-the-middle attacks, and crypto mining. Solving these issues will require automated tools that offer new levels of visibility and security for Kubernetes environments.


Zero trust security through consistency and visibility


These challenges highlight why we’re so excited to have integrated SUSE NeuVector 5.0 with SUSE Rancher. Rancher users can now easily access and authenticate themselves to manage SUSE NeuVector directly through the Rancher console. This provides development teams with a complete zero-trust stack through a consistent user experience that simplifies security management for large, globally distributed Kubernetes environments.


Security will be a growing priority for business and technology leaders, yet we know they’re also reticent to have security be the handbrake on agility. As we’re seeing with many of our own customers, by providing DevOps teams with intuitive and automated security tools within their Kubernetes platform, they’re then free to focus on the rapid innovation that will drive competitive advantage.


About the author

As the Chief Operating Officer for SUSE APJ, I’m focused on enabling our team to deliver on the strategic vision for delivering cutting-edge Open Source solutions that allow our customers to Innovate Everywhere. With more than a decade of experience as a senior executive and strategic consultant across the enterprise technology sector, I bring my expertise from sectors such as supply chain, construction, and engineering to understand the complex challenges our customers are facing, and how we can be best positioned to assist in their ongoing transformation.

Innovation Without Disruption: Introducing SUSE Linux Enterprise 15 SP4 and Security

Monday, 20 June, 2022

[This blog post is contributed by Blaine Stone, Certification Compliance Program Manager at SUSE.]

At SUSE, we take security seriously. The major areas of our focus are:

  1. Secure the Foundations
  2. Secure the Product
  3. Secure the Supply Chain
  4. Confidential Computing

SUSE has been working on security for a long time now because we believe in resilience, reliability, and the ability to secure the foundations, our product, and the supply chain that your product relies on. To accomplish this, SUSE is working in four key areas:

  1. Secure the Foundations
    • DISA STIG – obtaining certification for system hardening
    • NIST FIPS 140-3 – acquiring validation of all cryptographic modules
    • Automated SCAP Profiles
    • PCI-DSS and HIPAA Hardening Profiles
    • Pre-hardened Images for the Cloud

Broad governmental certifications around the world provide assurances to our customers and partners that compliance and a secure software supply chain ultimately position SUSE Linux as a leader in this space.

  2. Secure the Product
    • Common Criteria EAL4+ certification – ensuring our product is functionally tested; structurally tested; methodically tested and checked; and methodically designed, tested and reviewed. SUSE is proud to be the only Linux producer with this certification.
    • US Federal Government NIAP Protection Profile
    • Spain OC-CCN Certification
    • Korea GS Certification
  3. Secure the Supply Chain: SUSE has added SLSA Level 4 compliance to existing security certifications. SUSE Linux Enterprise (SLE) 15 SP4 is the first Linux distribution to deliver packages under the demanding Google SLSA standard, adding a SLSA Level 4-compliant supply chain that helps protect against the increasing software security and supply chain threats customers face today. Our SLSA: Securing the Software Supply Chain document (https://documentation.suse.com/sbp/server-linux/html/SBP-SLSA4/index.html) details how SUSE, as a long-time champion and expert of software supply chain security, prepared for SLSA Level 4 compliance. You may also access the SUSECON Digital 22 presentation (https://susecon.com/) by Markus Noga, General Manager Linux Business Unit, where he talks with Google about this achievement.
  4. Confidential Computing: There are a few layers to confidential computing:
    • Data at rest
    • Data in transit
    • Data in actual use (a very new area)

In our products, SUSE has supported data-at-rest encryption for your SSDs, your volumes and your partitions for a long time. And we’ve also secured data in transit between machines or between data centers and networks with zero-trust network encryption for quite a while. What is new with SLE 15 is that you can also protect data that is actually in use in main memory or in CPU registers that get dumped to main memory when a context switch occurs.

    • Confidential Virtual Machines are a game changer for data protection in the cloud. It involves data in use, and securing data that is actively being accessed by an application or a user and stored in memory. Our SUSE Linux Enterprise Server supports Confidential Virtual Machines on Google Cloud Platform, accelerating migrations to the cloud for on-premises and regulated workloads that require the utmost security & compliance. This helps protect against remote attacks, privilege escalations, and malicious insiders.
    • With shielded VMs that protect Compute Engine instances, you can securely use data that gets migrated to the cloud and safely process sensitive data while maintaining encryption in memory. This has NO exposure to the rest of the system and no change to workload or code.
    • SUSE and AMD have a long history of upstream collaboration across key AMD initiatives, including confidential computing, starting with Secure Encrypted Virtualization (SEV) in 2016 and continuing through SEV Encrypted State (SEV-ES) to SEV Secure Nested Paging (SEV-SNP). As a result of this upstream collaboration, SUSE has an early mover advantage when it comes to SEV technologies making their way into our enterprise Linux distribution (see the sketch after this list).
    • AMD and SUSE are working together to bring Confidential Computing into the Linux ecosystem. SUSE helped to add support for AMD SEV and SEV-ES to a wide range of products like the Linux kernel, LibVirt, and Kubevirt.
    • SUSE is blazing a new trail with confidential computing with AMD as a key partner of ours and their ultra secure AMD-SEV chipsets.
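
A quick way to see whether a host is ready for these protections is to look at the interfaces the Linux kernel itself exposes. The sketch below is a best-effort check, assuming an AMD host with the kvm_amd module loaded; the exact parameter values (for example "1" versus "Y") and the availability of the sev_snp parameter vary by kernel version.

    from pathlib import Path

    def sev_status():
        """Best-effort report of AMD SEV capabilities exposed by the running kernel."""
        report = {}
        for feature in ("sev", "sev_es", "sev_snp"):
            param = Path(f"/sys/module/kvm_amd/parameters/{feature}")
            report[feature] = param.read_text().strip() if param.exists() else "parameter not present"
        # /dev/sev appears when the SEV firmware interface is usable on the host
        report["/dev/sev"] = "present" if Path("/dev/sev").exists() else "absent"
        return report

    if __name__ == "__main__":
        for name, value in sev_status().items():
            print(f"{name}: {value}")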

That is what customers of SUSE can experience today, and that is what the “innovation without disruption” translates to at SUSE.

To learn more, go to Business Critical Linux, SUSE Security, and/or SUSE Linux Enterprise Server.

Thanks for reading!

Innovation Without Disruption: Introducing SUSE Linux Enterprise 15 SP4 and Resilience

Monday, 20 June, 2022

[This blog post is contributed by Michal Svec, Product Manager at SUSE, and Jose Betancourt at SUSE.]

The SUSE Linux Enterprise 15 SP4 family of products brings along many improvements, feature enhancements, hardware enablement, performance improvements, security additions and bug fixes. 


Many improvements come from SUSE collaboration with hardware vendors, ranging from chipset manufacturers to OEMs. SLE 15 SP4 brings support for all the latest chipset releases and features, for instance Intel’s 12th Gen processors and AMD EPYC Gen 4 CPUs (including SEV-ES support), and many Arm architectural features are also supported.  SUSE is also working on full support for the recently announced NVIDIA Open GPU kernel drivers and vGPU support.  We would not be able to do that without close collaboration with our hardware partners like Dell Technologies, Fujitsu, HPE/Cray and IBM – making sure that their systems work well with SUSE Linux Enterprise products.

A lot of that enablement applies to cloud environments.  Just to outline a few of the biggest ones, Amazon AWS Graviton 3 platforms are fully supported now, and there is also support for Nitro Enclaves.  Microsoft Azure Arm64 server instances are now supported, and many improvements have been made in the SAP area.  We also now offer pre-hardened images, available both as BYOS (Bring Your Own Subscription) and PAYG (Pay as you Go) for supported cloud environments.  Overall, with SUSE Linux Enterprise 15 SP4 we deliver on the promise to keep the product family fully enabled on recent hardware platforms to take advantage of all the improvements and new features.  We make sure the operating system and its components are future-proof so that workloads can run in a seamless and effective way.

While we collaborate with our silicon design partners across a variety of products, technologies, and solutions, we have an open approach that our customers can then use when building their own technology stacks.


  • Intel:  Intel and SUSE have been working together since SUSE’s inception in 1992 when SUSE was the first company to market Linux for the enterprise. As Intel’s and SUSE’s product and technology portfolios have grown throughout the years, so have the breadth and depth of the relationship. Like the roots of a tree, Intel’s breadth of portfolio is often unseen. Yet its enablement and usage via the SUSE product portfolio is fundamental to the development of larger products and solutions built by our joint partners and available through the many routes to market (IHVs, ISVs, CSPs, and Embedded Solutions). Intel is one of the leading contributors to the Linux kernel as well as a significant contributor in the Kubernetes space.
  • AMD: AMD and SUSE have been collaborating in the upstream Linux community and around AMD-specific optimizations for more than 20 years.  AMD gave the world the first CPU to introduce the x86_64 ISA, and SUSE was an early provider of an enterprise Linux distribution for the then new architecture. Our most recent collaboration efforts are around two key areas: GCC Compiler and Toolchain optimizations for AMD and Secure Encrypted Virtualization (SEV and SEV-ES). AMD’s Secure Encrypted Virtualization (SEV) is a technology that protects KVM-based Linux virtual machines by transparently encrypting the memory of each VM with a unique key.  SEV is especially relevant to cloud computing environments, where VMs are hosted on remote servers which are not under the control of the VM owners. SUSE has been playing an important role with AMD since 2016 to bring Confidential Computing ‘upstream’ with collaborations in the areas of the Linux kernel, libvirt and KubeVirt to name a few.  SUSE customers will be the first ones to benefit from AMD SEV-ES host and guest modes, enabling customers to select additional security-strengthening VM isolation.
  • NVIDIA:  NVIDIA is a long-standing SUSE partner around accelerated computing.  As NVIDIA expanded its business into data center-scale networking, artificial intelligence and machine learning, and edge computing, and SUSE expanded its reach into the cloud-native space, the breadth and depth of our collaboration has also grown. NVIDIA is optimizing accelerated compute across GPUs, CPUs, DPUs, complete systems and specialized software, and SUSE aims to enable its availability and usage for our joint customers through our operating system offering (SUSE Linux Enterprise Server, SUSE Linux Enterprise Server for Arm, SUSE Linux Enterprise Micro, and SUSE Linux Enterprise Base Container Images) as well as our cloud-native product stack (SUSE Rancher, and RKE2/K3s Kubernetes engines). When it comes to NVIDIA, everyone agrees that their biggest open announcement this year is the release of the NVIDIA Open-Source GPU Kernel modules. The availability of these modules is a big deal for SUSE and its customers. The ability for Linux distribution providers like SUSE to add the driver directly to their kernel is significant because this could not be accomplished before due to license incompatibility. It also enables SUSE to perform security reviews of the drivers and sign the drivers.  Last, but certainly not least: it allows SUSE engineers to debug, integrate, and contribute back.
  • Arm:  Arm is a leading semiconductor intellectual property (IP) supplier.  It develops technology it licenses to other companies who design and manufacture their own products that implement the Arm architecture.  This includes system on a chip (SoC) as well as system on module (SOM) designs.  It also designs IP cores that implement the Arm instruction set architecture and licenses these designs to many companies that incorporate the designs into their own products. Because of its approach to the market, the collaboration is better defined as SUSE and the Arm ecosystem. SUSE’s Business-Critical Linux unit provides SUSE Linux Enterprise Server for Arm, SUSE Linux Enterprise Micro, as well as SUSE Linux Enterprise Base Container Images for Arm’s 64-bit Armv8-A architecture, enabling the Arm ecosystem and partners to build products and solutions with a world-class, enterprise-supported Linux distribution.  SUSE Linux Enterprise can be deployed today on silicon from Broadcom (Raspberry Pi) and Ampere Computing (Gigabyte Mount Snow), as well as on cloud-based instances from AWS (Graviton), with Azure Virtual Machines availability coming soon. The partnership between Arm and SUSE is about providing partners and customers with open source-based infrastructure products and solutions for the Arm architecture.

The ongoing collaboration between SUSE and these silicon designers enables joint downstream partners to build enterprise-class solutions, leveraging silicon-based features and capabilities, through an open source OS and a cloud-native set of tools.  These foundational building blocks are available through partners (IHVs, ISVs, and CSPs, to name a few) who in turn deliver the solutions our joint customers need to run their business.


To learn more, go to Business Critical Linux and/or SUSE Linux Enterprise Server.

Thanks for reading!

Jeff Reser

Innovation without Disruption: Introducing SUSE Linux Enterprise 15 SP4 and Agility

Monday, 20 June, 2022

In a production environment, where applications must be flexible at deployment, running and rolling out times, it is important to consider agility as one of the main points to consider when building or evolving your platform.

SUSE Linux Enterprise Server is a modern, modular operating system for both multimodal and traditional IT. In this article, I’ll provide a high-level overview of features, capabilities and limitations of SUSE Linux Enterprise Server 15 SP4 and highlight important product updates. SUSE Linux Enterprise Server works with your workloads to bring security, agility and resiliency to your ecosystem. In this article, I am going to cover agility. SUSE Linux Enterprise Server also now supports KubeVirt.

Regarding agility, some relevant offerings from SUSE include:

  • Base Container Images (BCI): BCI brings all the SLES (SUSE Linux Enterprise Server) experience into container workloads, letting you build your applications in a secure, performant environment with multi-stage builds.
  • Harvester HCI (HyperConverged Infrastructure) (KubeVirt): Harvester is a modern HCI solution that bridges the gap between the HCI software and the cloud-native ecosystem using technologies like Longhorn and KubeVirt to provide storage and virtualization capabilities.  It connects multiple interfaces to the Virtual Machines and provides isolation capabilities to the architecture. With Harvester and Kubernetes, you no longer need to manage traditional HCI infrastructure and cloud-native separately.
  • SUSE Manager HUB: Scale your infrastructure and manage thousands of servers through a hub implementation of SUSE Manager.

Why SLE BCI?

While Alpine is the most used base image, when it comes to an enterprise use case, you should consider more variables before making a choice. Here are some of the reasons why SLE BCI (which I will shorten to simply BCI for now) is potentially a great fit.

  • Maximum security: When it comes to developing applications, the world is moving and working in a cloud native ecosystem because of its emphasis on flexibility, agility and cost effectiveness. However, application security is often an afterthought in the initial stages of developing a new app. If developers do not choose their base image wisely, their application could be affected by security vulnerabilities, or it simply will not pass the required security certifications. When developing the SLE family of products, SUSE worked to ensure they meet the highest levels of security and compliance, including FIPS (Federal Information Processing Standard), EAL4+, FSTEC, USG, CIS (Center for Internet Security) and DISA/STIG. All this work flows downstream to SLE BCI, making it one of the industry’s most secure base images for enterprise developers or independent software vendors to leverage.
  • Available images: SUSE provides two sets of images through its registry, the base ones (bci-base, bci-minimal, bci-micro, bci-init) and the language-specific ones (Golang, rust, openJDK, python, ruby, and more).  Check out the registry! (A small pull-and-run sketch follows this list.)
  • Supportability: One of the key factors that made me give BCI a try is the supportability matrix. So far, if I must test my application locally or for a proof of concept, I could use an Alpine or a specific language/runtime image. But when it comes to creating an enterprise-grade application, sooner or later I will need to migrate to a supported one. SUSE fully supports bci-base. Customers with an active subscription agreement can open support cases or request new features through the official channels. Something else that captured my attention: the supportability of BCI is not tied to the underlying host where the application is running, which allows more flexibility and mixed ecosystems while keeping your application covered by the SUSE support umbrella.
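
To get a feel for a BCI image, here is a small sketch that uses the Docker SDK for Python to pull one of the base images from the SUSE registry and run a trivial command in it. The tag is an assumption on my part; check the registry for the tags currently published.

    import docker

    # Talks to the local Docker daemon; requires Docker (or a compatible engine) to be running.
    client = docker.from_env()

    # Tag is an assumption; see registry.suse.com for currently published BCI tags.
    image = "registry.suse.com/bci/bci-base:15.4"

    output = client.containers.run(image, "cat /etc/os-release", remove=True)
    print(output.decode())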

SUSE Manager hub

Ecosystems need to scale as required. Managing servers in a lab is not comparable to managing different production environments, where you not only have to manage the servers but also comply with security standards, maintain system health and ensure compliance.  When it comes to managing an environment, whether it is pure SUSE or a mixed environment, there are some aspects we need to take into consideration:

  • Compliance: through templates and the automation of new deployments, every new element or operating system follows the compliance definition for the ecosystem and the different environments defined.
  • Security: An agile environment requires new features to be tested and newly discovered vulnerabilities to be patched. Your ecosystem is as vulnerable as the weakest element you have deployed. With centralized patch, configuration and package management, you will be aware of the vulnerabilities affecting your entire ecosystem and can design the update or deployment strategy accordingly.
  • Health: as part of day 2 operations, SUSE Manager centralizes the management of the risk of business disruptions and monitors downtime.
  • Scalability: with new elements coming to the environment, it is also important to manage the infrastructure in a supported, feasible and performant manner. SUSE provides scalability up to 1 million clients in a hub-based architecture. Multiple SUSE Managers can be managed from a single hub node, aggregating clients and attaching them to a specific proxy server that is also managed by its own manager.  This allows you to have a centralized reporting database that is helpful since you do not have to look on each server to get the monitoring of a specific environment or subset of clients. In other words, everything is managed from a centralized hub. This architecture adds some features for complex environments or specific management requirements for compliance.  For example, for multi-tenancy you can use different managers to isolate server configurations. Check out the SUSE Manager product page for more information.
  • Monitoring: Whether SUSE Manager is installed as a hub or standalone, each environment needs to be reported on so you can see the relevant information you are looking for at a single glance. Ecosystems need to be agile and adaptable, deploying new servers, decommissioning the ones you no longer need and being aware of new elements added even from various sources. SUSE Manager can deploy multiple probes that you can configure to look after the most critical elements or the most relevant events for you. SUSE Manager uses Prometheus to monitor the elements and Grafana for the dashboards. You are not restricted to what comes with the product; instead, you can create customized dashboards to organize and show that information in a way that is more relevant. In a scenario where the monitoring comes from third-party software, SUSE Manager Monitoring can pull data from a single or multiple external sources and use it. No matter how you evolve your ecosystem, whether you do it through the deployment templates or use external deployers, SUSE Manager, through its Service Discovery features, can look for potential monitoring targets and add dynamic definitions to a living environment.

Trento

SAP environments are complex systems designed to accomplish complex challenges. They consist of several pieces including databases, high availability systems, applications servers and workloads. No matter where you deploy, on premise or in the cloud, all those pieces need to integrate with each other with their own setup processes and configurations. This implies that SAP environments are hard to deploy, configure and manage. Usually, the initial deployment and configuration of SAP requires enterprise admins and third-party integrators to reference SAP notes. It is a time- and resource-consuming task.

The SAP setup process consists of several manual steps and configurations to deploy and maintain the software successfully. With so many elements to configure and handle, there are situations where misconfigurations and human errors lead to unexpected downtime. SUSE and SAP have been working together for the last 20 years to build up a stable integration between SAP and SUSE Linux Enterprise Server for SAP Applications, creating an in-depth operating system designed and certified for running SAP systems, databases and workloads.

Deploying and maintaining SAP environments is not a “fire and forget.” It requires maintenance and monitoring of the status of the hosts, systems, databases and high availability pieces. To do that, you have to look for someone who can handle this, as it is an extremely specific system. This is where Trento comes to the table. Trento is a containerized solution that provides a single console view to discover and manage all SAP system components (hosts, databases, HA components and HANA databases). Trento is the way to safeguard SAP ecosystems. The user is notified when a bad configuration or a missing setup step is detected on any system, and receives recommendations that reduce time-consuming tasks such as performing daily, manual reviews of the systems or digging through the SAP documentation looking for a specific asset. Trento is the centralized piece of SAP infrastructure where the user can see the status of the ecosystem in a single dashboard, get recommendations on the best configuration for a specific environment and ensure the SAP ecosystem is deployed and running following best practices. Leverage SUSE’s expertise with SAP: within SUSE Linux Enterprise Server for SAP Applications, Trento is a first-class citizen that can leverage how well the operating system and the SAP ecosystem work together.

Conclusion

SUSE provides a stack to manage your infrastructure components, with a focus on agility without renouncing stability or security. This stack includes SUSE Manager, BCI images, Trento, and Harvester.  SUSE can manage multi-vendor ecosystems where SUSE systems and other operating systems are managed, patched and analyzed.  SUSE solutions keep your entire environment in compliance with the highest security standards. To learn more, go to Business Critical Linux, SUSE Security, SUSE Linux Enterprise Base Container Images, SUSE Manager, and/or SUSE Linux Enterprise Server.

Thanks for reading!


Allan Gray accelerates DevOps strategy and cuts time to market with SUSE Rancher

Friday, 17 June, 2022

“We now have the ability to click a button and double our scale. It now takes us a few minutes to load new applications whereas before it took at least a day. That’s 99.8% faster!” IT Delivery Team Lead, Allan Gray.


Allan Gray, Africa’s largest privately-owned and independent investment management company, has long understood that in order to stay relevant in the digital age, it would need to deliver new and better digital services faster.  

The IT department previously ran a traditional server-based architecture. Feature teams would work on the development side, then send their output to the operations team for production. This worked well for many years until the monolithic nature of legacy processes slowed the company’s ability to innovate.  

Container technology promised to provide a consistent, common environment for project teams to collaborate, along with the kind of granularity needed to accelerate new service innovation.  

Allan Gray favored open source because it offers better features, reliability and flexibility than proprietary solutions. An initial experiment with Docker was curtailed when the team suffered periods of costly downtime, which led them towards Kubernetes. The team’s primary consideration when selecting a container management solution was the ability to automate, whilst ensuring regulatory compliance. Ensuring security, reliability and scalability were also high priorities for Allan Gray.

Based on these criteria, SUSE Rancher became the obvious choice.

Maximizing DevOps efficiencies with Kubernetes and SUSE Rancher 

SUSE Rancher immediately delivered granular security controls and provided a single pane of glass, through which an entire Kubernetes ecosystem could easily be viewed and managed.  

SUSE Rancher has also meant zero downtime while applying updates, faster time to market, and automated regulatory controls. System stability has improved, along with employee morale — no more all-nighters to make sure everything works. Teams are now able to iterate faster — up to 20 new deployments are launched each day, versus the previous cadence of once per month. 

Facilitating this new, microservices-centric architecture, SUSE Rancher has also driven the ability to scale at speed. It now takes just a few minutes to load new applications, whereas before it took at least a day. A 99.8% improvement. 

Teams developing business functionality are now less concerned with infrastructure which doesn’t move the business forward. They can now deploy client-servicing functionality with greater confidence and reliability. 

Because of SUSE Rancher’s ability to support high availability and role-based access control (RBAC) for thousands of clusters and nodes, teams can also deliver services faster, while remaining compliant.

SUSE Support has also been instrumental in helping Allan Gray navigate its digital transformation journey. The IT team has called it out as the best support offered by any of its third-party vendors, due to fantastic turnaround times and staff with excellent technical knowledge.

Testimony to the success of the implementation is the growing adoption of SUSE Rancher within Allan Gray, including the retail and latterly the institutional side of the business. 

The new infrastructure has also helped the company attract talent at a faster pace, no longer having to hire individuals with knowledge of legacy systems, which had become increasingly difficult. 

Allan Gray is planning further innovation, including moving its containerized Kubernetes environments to the cloud, beginning in 2022. SUSE’s commitment to open source philosophy will enable Allan Gray to select its hyperscaler of choice. 

Click here to find out more about how Allan Gray accelerates DevOps strategy and cuts time to market with SUSE Rancher.