How to Protect Against Kubernetes Attacks with the SUSE Container Security Platform

Tuesday, 21 May, 2024

What are common challenges with Kubernetes and container security?

The rise of containers and Kubernetes has transformed application development, allowing businesses to deploy modern apps with unprecedented speed and agility. However, this shift to containerized infrastructure presents a new security landscape with new challenges. Unlike traditional applications, containers present an expanded attack surface – more potential entry points for malicious actors who are keenly aware of this paradigm shift. To effectively address these evolving threats, security teams require a comprehensive solution that delivers deep visibility into runtime environments and applies supply chain security best practices to containerized workloads. Additionally, it’s crucial to equip development teams with the tools to secure the software supply chain from the very beginning.

Many enterprises find it difficult to build cloud native security expertise in-house. The sheer volume of threats can overwhelm internal security teams, and traditional security professionals may lack the specialized knowledge required to secure these dynamic environments. This shortage of qualified personnel can leave organizations vulnerable to sophisticated attacks targeting containerized applications while they struggle to find or train staff with the necessary expertise. Most organizations need not only a solution but also an expert team that can ensure proper configuration and alignment with the organization’s risk profile to minimize the risk of a breach.

What is NeuVector Prime?

SUSE’s container security platform, NeuVector Prime, is the industry’s only 100% open source, zero trust platform designed for full lifecycle container security. NeuVector Prime leverages its Kubernetes-native architecture to provide unparalleled visibility and control at the cluster layer so attacks can be detected and blocked. NeuVector Prime offers capabilities that other container security solutions don’t, such as deep packet inspection (DPI), Layer 7 firewall protection, zero trust security, automated security policies and data loss prevention (DLP).

Image 1: NeuVector Prime Dashboard

 

What are the benefits of NeuVector Prime by SUSE? 

Kubernetes-Native:

  • NeuVector Prime secures any Kubernetes environment – whether you use Rancher, Red Hat OpenShift, Tanzu Kubernetes, Amazon EKS, Microsoft AKS, IKS or GKE – and regardless of whether the environment is air-gapped or on-premises.
  • Seamless integration with Kubernetes platforms lets organizations deploy a secure cloud native stack simply and consistently (a minimal Helm-based installation sketch follows this list).
  • Because NeuVector Prime sits inside of the cluster, it is isolated from the Cloud Service Provider (CSP). This allows the environment to instantly improve the Kubernetes security posture without changing the compliance status of the environment. (i.e. if the environment was FedRAMP compliant before NeuVector Prime, it will remain FedRAMP compliant.)
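
To make this concrete, the sketch below shows one way to install the open source NeuVector core into an existing cluster with Helm. The chart repository, chart name and the k3s flag follow the public NeuVector Helm charts and are assumptions here; the packaging and registry used for NeuVector Prime subscriptions may differ.

# Minimal sketch: deploy the open source NeuVector core with Helm.
helm repo add neuvector https://neuvector.github.io/neuvector-helm/
helm repo update

kubectl create namespace neuvector
helm install neuvector neuvector/core \
  --namespace neuvector \
  --set k3s.enabled=true   # only for K3s/RKE2-based runtimes; adjust per platform

# Confirm the controller, enforcer and manager pods come up
kubectl -n neuvector get pods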

Infrastructure-as-Code and Security-as-Code capabilities:

  • With many organizations adopting hybrid-cloud and multi-cloud environments, NeuVector Prime allows them to move easily from one cloud provider to another without compromising security. These workloads are easier to secure without being dependent on their cloud provider. 
  • Security policies, access controls, and vulnerability scans are defined within code files, ensuring consistent and automated security implementation across the infrastructure.  This aligns security practices with the DevOps workflow, fostering a more collaborative “DevSecOps” approach.
  • Automated deployment and enforcement of security policies based on the defined code allow for consistent and repeatable security implementation across container environments; a minimal policy-as-code sketch follows this list.
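
As a concrete illustration of policy-as-code, the manifest below sketches a NeuVector security rule kept in Git and applied like any other Kubernetes resource. The NvSecurityRule kind and neuvector.com/v1 apiVersion come from NeuVector’s CRD support, but the service names are hypothetical and the spec fields are simplified assumptions – check the NeuVector CRD reference for the exact schema.

# Policy-as-code sketch: version this manifest in Git and apply it through
# your CI/CD pipeline. Field names below are simplified assumptions.
cat <<'EOF' | kubectl apply -f -
apiVersion: neuvector.com/v1
kind: NvSecurityRule
metadata:
  name: frontend-policy        # hypothetical workload
  namespace: demo
spec:
  target:
    policymode: Protect        # block violations rather than only alerting
    selector:
      name: nv.frontend.demo
      criteria:
        - key: service
          op: "="
          value: frontend.demo
  ingress:
    - name: allow-http-from-backend
      action: allow
      applications:
        - HTTP
      selector:
        name: nv.backend.demo
EOF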

Extensive visibility for production environments while still securing in-build:

  • The NeuVector Prime network protection (firewall) operates at Layer 7, whereas most other container security offerings operate at Layers 3 and 4. Layer 7 visibility enables organizations to protect containers against attacks from internal and external networks, including real-time identification and blocking of network, packet, zero-day and application-layer attacks such as DDoS and DNS attacks. Additionally, this visibility allows organizations to create specific network policies that are not possible with other container security solutions.
  • Ensure supply-chain security through vulnerability management and image scanning without impacting innovation. DevOps teams can deploy new apps with integrated security policies to ensure they are secured throughout the CI/CD pipeline and into production. 

 

What is the difference between NeuVector and NeuVector Prime?

NeuVector is the name of the open source project. NeuVector Prime is the commercial offering from SUSE used by organizations to secure their Kubernetes environments. It allows customers to get enterprise-level support and add consulting and services from SUSE.

With sensitive data or business-critical workloads, your organization can’t afford to use an unsupported security solution, especially if an exploit occurs. NeuVector Prime offers more than break/fix support services. SUSE’s support team can help properly deploy NeuVector and configure its extensive security layers so attackers can be detected and blocked. NeuVector Prime subscribers get access to Advanced Sizing and Planning guides, a Common Vulnerabilities and Exposures (CVE) database lookup service, and access to security templates and rules before they are published to the community. In addition, NeuVector Prime seamlessly integrates with Rancher Prime (and other Kubernetes management platforms) through an integrated UI extension, role-based access control (RBAC) mapping, and other features that enable simple deployment of a ‘secure cloud native stack.’

Specifically, SUSE’s NeuVector Prime customers get:

  1. SLA-backed product support services, root cause analysis (RCA) and troubleshooting to make sure NeuVector is configured properly for maximum protection
  2. Vulnerability (CVE) investigation triage assistance to assist in vulnerability management and remediation efforts
  3. Best practices, hardening assistance (e.g., segmentation, network and process profiling, admission controls) to assist with all layers of security
  4. Run-time threat rules configuration optimization. Access to assets and services (e.g., performance tuning, CVE lookups) for deployment planning and scalability
  5. Built-in, supported native integration with Rancher Manager and Rancher Distributions (e.g., UI extension) to easily deploy and manage a secure cloud native stack

 

Learn More

To learn more about SUSE’s container security platform, register for the upcoming webinar on all things NeuVector Prime.

Celebrating success: SUSE features in CRN’s Women of the Channel List

Wednesday, 15 May, 2024

The CRN Women of the Channel list recognizes women who demonstrate exceptional channel leadership, strategic vision, and advocacy that impacts business growth and innovation in the IT channel. This annual list serves as a guide showcasing top leaders who are dedicated to the continuous advancement of the channel. This year the list recognizes four outstanding women from SUSE, who have shown unique strengths, vision, and achievements in the IT channel, and who are trailblazers for future generations.

Get to know the women who work within our Ecosystem team here at SUSE:

Rachel Cassidy, SVP Ecosystem, SUSE

How have you personally helped advance your company’s channel business over the past year?

I drove the value of the ecosystem across stakeholders now reflected with a Partner First strategy. Streamlining go to market alignment, removing friction from channel conflict through improved comp plans, greater alignment, enablement of partners, and focus on co-sell and collaboration.

I created an evolved global ecosystem organization removing silos serving the ecosystem holistically. Next evolution of SUSE One Partner Program with refreshed specializations, incentives, tooling, and enablement. This included simplifying partner buying programs enabling partners to transact more seamlessly and enhanced partner incentives to reward partners aligned with impact.

What is your superpower and how has it helped you build your career?

My intuition and ability to understand and read people has always served me well. I look to understand how an individual communicates, reacts, and learns, to improve effective communication and overall alignment. Helping others to understand each other through their communication, approach, body language to facilitate collaboration is key to building and driving change in any organization with advocacy and support. I also listen to my gut, my instinct. If something feels wrong, it usually is and this drives me to understand and learn more so I can make the best and well informed decision balancing head and heart.

Christine Puccio, Head of Cloud, SUSE

What are your goals for your company’s channel business over the next year?

  • Support the next level of exponential growth leveraging the power of marketplace and to create program offerings that support SUSE’s current partner model with cloud specific offerings.
  • Support our customer’s choices by providing more buying options through marketplace, providing less friction for our customers to utilize SUSE software through marketplace.
  • Enable sales teams to develop and deepen cloud co-sell motions.

What’s one thing most people don’t know about you?

I am passionate about learning and pushing boundaries, I thrive on overcoming challenges through taking risks. A prime example is my spontaneous entry into the challenging Escape from Alcatraz Triathlon, despite no prior triathlon training. Selected through a draw, I committed to rigorous preparation, fueled by a determination to finish. This bold approach reflects my confidence in tackling the unknown, embodying my determination for growth and embracing opportunities beyond my comfort zone.

Jill Schiller, Partner Executive

Provide a brief synopsis of your key channel-related accomplishments over the last year.  Answer this with a view toward your personal channel accomplishments as opposed to your company’s.

I was able to leverage my network to assist our internal Sales Force in building their network and relationships with Partners and Distribution. This has been key to help grow SUSE’s brand within the Channel Ecosystem.

What is your superpower and how has it helped you build your career?

My superpower is building relationships and networking. A teammate calls me JillGPT because even if I do not know the answer to a question I know the person who does. I believe that building your network is key to advancing your career.

Allison Greis, Senior Channel Marketing Manager, NA

What are your goals for your company’s channel business over the next year?

  • Achieve tighter alignment with NA Field Marketing peers on my team supporting a new regional sales model that has a partner first approach.
  • Expand partner demand generation activities with customers in regions.
  • Drive deeper penetration into focused partners that are national in scope while pairing up with SUSE sales representatives for engagement.

What is your superpower and how has it helped you build your career?

I’ve been referred to as Eagle Eyes. With an extreme and thoughtful focus on details, I pay attention to the little things and make sure plans are thorough to ensure success. That has helped avoid fire drills which are stressful for everyone involved! This has contributed to being a trusted team member who can be relied on to get things done, take on new tasks and deliver quality results.

 

As I reflect on the incredible journey of these four remarkable women from SUSE, I am inspired by their dedication to our partner ecosystem, and the tireless way they work to make our partnerships successful.  I hope you feel like you have gotten to know these women a bit more – but full coverage can be found in CRN’s 2024 Women of the Channel list.

Navigating Linux Patch Management: Best Practices for Keeping Your System Updated

Thursday, 9 May, 2024


 

In the rapidly evolving landscape of cybersecurity and system management, staying ahead of threats and ensuring optimal performance are paramount. This is especially true for Linux systems, which are widely regarded for their robustness and flexibility in business environments. Patch management, a critical component of system maintenance, plays a pivotal role in this dynamic. It involves the systematic notification, identification, deployment, and verification of updates for software and systems. Effective Linux patch management tools and practices are essential for mitigating vulnerabilities, enhancing functionality, and securing against potential breaches.

SUSE, a leading force in the open source community, recognizes the importance of this ongoing process. Our dedication to fostering secure, efficient, and highly available Linux systems is evident in our comprehensive suite of Linux patch management software and solutions. By prioritizing the development and implementation of cutting-edge technologies, SUSE ensures that businesses can leverage the full power of Linux, safeguarding their operations against the latest security threats while maintaining peak performance. This commitment underscores the significance of adept patch management in Linux environments, showcasing SUSE’s role in shaping a more secure and efficient digital landscape.

Understanding Linux Patch Management

Definition and Importance

Linux patch management is the process of managing updates for the software components and operating system kernels within Linux environments. This critical maintenance task involves the identification, installation, and verification of patches—small pieces of software designed to fix bugs, close security vulnerabilities, and enhance performance. The essence of patch management in Linux is not just about keeping software up to date but ensuring that systems remain secure against emerging threats and continue to operate efficiently. In the context of ever-evolving cyber threats, the importance of an effective patch management strategy cannot be overstated. It stands as the first line of defense in safeguarding sensitive data and maintaining system integrity.

Overview of the Patch Management Process

The Linux patch management process can be broken down into several key steps, each vital for the overall health and security of the system; a command-level sketch using zypper follows the list:

  • Identification: This initial step involves monitoring for new patches and determining which are relevant to your system. This is crucial for keeping abreast of the latest security updates and functional improvements.
  • Assessment: Once a patch is identified, its implications on system performance and security must be evaluated. This involves analyzing the patch’s contents and determining the urgency of its application.
  • Deployment: This phase entails the actual installation of the patch onto the system. Deployment strategies may vary depending on the patch’s criticality and the system’s architecture.
  • Verification and Testing: After deployment, it’s essential to verify that the patch has been correctly applied and to test the system for any unforeseen impacts. This ensures that the patch does not adversely affect system functionality.
  • Documentation and Reporting: Keeping a record of installed patches, along with any issues encountered during the process, aids in future patch management efforts and compliance audits.
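
As a minimal, command-level illustration, the steps above map onto zypper on a SUSE or openSUSE system roughly as follows (standard zypper options; verify the flags against your distribution’s documentation):

# Identification: refresh repositories and list applicable patches
zypper refresh
zypper list-patches

# Assessment: inspect a specific patch before deciding how urgently to apply it
zypper info -t patch <patch-name>

# Deployment: apply all needed security patches non-interactively
zypper --non-interactive patch --category security

# Verification: check whether a reboot is needed and review what changed
zypper needs-rebooting
rpm -qa --last | head

# Documentation: record the remaining applicable patches for your audit trail
zypper list-patches > /var/log/patch-review-$(date +%F).txt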

The Role of Patch Management in Linux Security

Patch management plays a critical role in fortifying Linux systems against a spectrum of security threats. By ensuring that systems are promptly updated with the latest patches, organizations can protect themselves from vulnerabilities that cybercriminals exploit. This ongoing process is fundamental to maintaining the integrity, confidentiality, and availability of data and services.

Regular patching addresses several common vulnerabilities, including those related to software bugs, system misconfigurations, and outdated protocols. For instance, patches often rectify issues that could lead to unauthorized access, data leakage, denial of service (DoS) attacks, and malware infections. Moreover, they can enhance the system’s resilience by introducing improved security features and algorithms.

The proactive application of patches is, therefore, not just about fixing known problems; it is a strategic approach to preemptively securing systems against potential exploits.

Setting Up a Patch Management Strategy

Effective patch management is foundational to maintaining secure and reliable IT infrastructures. This section outlines the steps to establish a robust patch management strategy for Linux environments, ensuring your systems are resilient against vulnerabilities while supporting your business objectives.

Assessing Your Linux Environment

Begin by conducting a thorough inventory of your Linux environment. Identify critical systems, applications, and data that are essential for your business operations. Understanding the architecture and dependencies within your infrastructure is crucial for determining the impact of potential vulnerabilities.

Prioritizing Patches Based on System Vulnerability and Business Impact: Not all patches are created equal. Assess the criticality of each patch based on the severity of the vulnerabilities it addresses and the potential business impact. Prioritize patches for critical systems and vulnerabilities that pose a high risk to your operations, ensuring they are applied swiftly to minimize exposure to threats.
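
For example, on a zypper-based system the severity and CVE filters below offer a minimal sketch of this kind of prioritization (option availability can vary by zypper version, so treat the flags as assumptions to verify):

# List only security patches, optionally narrowed by severity
zypper list-patches --category security
zypper list-patches --category security --severity important

# Check whether a specific CVE is covered by an available patch
zypper list-patches --cve CVE-2024-XXXX     # placeholder CVE identifier

# Apply a single approved, high-priority patch by name
zypper install --type patch <patch-name>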

Choosing the Right Tools for Patch Management

A range of tools and solutions are available to streamline the patch management process. SUSE Manager is a comprehensive solution designed specifically for Linux environments. It automates the process of patching, configuration, and subscription management across your infrastructure, ensuring systems are always up-to-date and compliant.

Developing a Patch Management Policy

Establishing a formal policy is critical for defining the procedures and responsibilities related to patch management. It sets the foundation for a systematic approach to maintaining system security and performance.

Key Components of an Effective Patch Management Policy: An effective policy should outline:

  • Roles and Responsibilities: Define who is responsible for monitoring vulnerabilities, applying patches, and verifying their success.
  • Patch Management Procedures: Detail the process for assessing, prioritizing, deploying, and documenting patches.
  • Compliance and Reporting: Establish criteria for compliance with internal and external regulations, along with reporting mechanisms for oversight and audit purposes.
  • Review and Update Cycle: Include provisions for regularly reviewing and updating the patch management policy to adapt to new threats and business needs.

By thoughtfully assessing your Linux environment, selecting the appropriate patch management tools, and developing a comprehensive policy, you can ensure your systems are secure, compliant, and aligned with your business objectives. 

Best Practices for Linux Patch Management

Effective patch management is a cornerstone of IT security and operational integrity. Adopting best practices in patch management ensures that Linux systems remain secure, functional, and compliant. Below are key strategies to enhance your Linux patch management process.

Regularly Schedule Patch Reviews and Updates

Consistency in patch management is vital. Regular patch cycles ensure that systems are updated promptly, reducing the window of vulnerability. Establishing a routine schedule for reviewing and applying patches helps in maintaining security and performance without lag.

Schedule patch updates during off-peak hours to minimize impact on business operations. Use automated tools to apply patches to a small number of servers initially, scaling up based on success and system dependencies. Communication with stakeholders is crucial to ensure they are prepared for potential downtime or changes in system performance.
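
One minimal way to implement such an off-peak window on a canary group is a systemd timer that runs zypper on a schedule; the unit names and the Sunday 02:00 window below are purely illustrative:

cat <<'EOF' > /etc/systemd/system/canary-patch.service
[Unit]
Description=Apply security patches on canary hosts

[Service]
Type=oneshot
ExecStart=/usr/bin/zypper --non-interactive patch --category security
EOF

cat <<'EOF' > /etc/systemd/system/canary-patch.timer
[Unit]
Description=Run canary patching during the off-peak window

[Timer]
OnCalendar=Sun *-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
EOF

# Enable on the canary group first; widen the rollout once results look good
systemctl daemon-reload
systemctl enable --now canary-patch.timer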

Automate the Patch Management Process

Automation reduces the risk of human error, ensures consistency, and significantly decreases the time and effort required to apply patches across numerous systems. Automated tools can also prioritize patches based on severity and dependencies, streamlining the process.

SUSE Manager is designed to automate the patch management process, from identifying and prioritizing updates to deployment and verification. It offers a centralized platform for managing patches across a diverse Linux environment, simplifying compliance and reducing the complexity of managing multi-vendor systems.
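
For centrally managed fleets, patch rollouts can also be scripted against SUSE Manager, for example with its spacecmd command-line client. The commands below are a sketch based on spacecmd’s built-in help; names and arguments may differ between SUSE Manager and Uyuni releases, so verify them against the output of spacecmd help before automating anything.

# Sketch only -- verify command names with "spacecmd help" for your release
spacecmd -u admin -- system_list                          # inventory managed systems
spacecmd -u admin -- group_listsystems canary-group       # systems in a (hypothetical) canary group
spacecmd -u admin -- system_listerrata web01.example.com  # outstanding patches for one system
spacecmd -u admin -- system_applyerrata web01.example.com # schedule the relevant errata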

Testing Patches Before Deployment

Before deployment, it is crucial to test patches in an environment that mirrors production. This step identifies potential issues that could affect system stability or compatibility, allowing for adjustments before widespread rollout.

Create a standardized testing procedure that includes performance benchmarks, compatibility checks, and rollback plans. Engage stakeholders in testing to ensure that functional requirements are met.
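
On SUSE systems with the default btrfs root filesystem, snapper snapshots offer a simple way to bracket a test deployment so that a problematic patch can be rolled back; the sketch below assumes snapper is present and configured for the root filesystem:

# Take a pre-patch snapshot, apply patches, then snapshot again for comparison
PRE=$(snapper create --type pre --print-number --description "pre-patch baseline")
zypper --non-interactive patch
POST=$(snapper create --type post --pre-number "$PRE" --print-number --description "post-patch")

# Review what changed between the two snapshots while running your test suite
snapper status "$PRE".."$POST"

# If a regression is found, roll back to the pre-patch snapshot and reboot
snapper rollback "$PRE"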

Monitoring and Reporting

Utilize tools that offer real-time monitoring and alerts for new vulnerabilities and patch releases. This proactive approach helps in quickly addressing critical vulnerabilities.

Comprehensive reporting is essential for tracking patch management activities and demonstrating compliance with regulatory requirements. Reports should detail the patches applied, systems affected, testing outcomes, and any issues encountered during deployment.

Overcoming Common Linux Patch Management Challenges

Linux patch management can present several challenges, from ensuring compatibility to managing downtime and dealing with resource constraints. However, with strategic approaches, these obstacles can be effectively navigated.

Patch Compatibility: Compatibility issues can arise, potentially affecting system stability. To mitigate this, thoroughly test patches in a staging environment that mirrors your production setup. This allows for identifying and resolving conflicts before deployment.

Downtime Management: Minimizing downtime is crucial, especially for critical systems. Schedule patch deployments during off-peak hours and consider using live patching technologies where possible, which allow for updating systems without needing to reboot.

Resource Constraints: Limited resources can hinder the patch management process. Automating routine patching tasks with tools like SUSE Manager can significantly reduce the manual effort required, allowing IT staff to focus on more strategic tasks. Additionally, prioritize patches based on severity and impact to ensure that critical updates are deployed promptly, optimizing the use of available resources.

By addressing these challenges through careful planning, testing, and the use of automation, organizations can enhance their Linux patch management processes, ensuring systems remain secure, stable, and efficient.

Final Thoughts on Linux Patch Management

In conclusion, maintaining a robust, strategic approach to Linux patch management is essential for ensuring system security, performance, and reliability. From regular scheduling and automation to testing and overcoming common challenges, the strategies discussed offer a roadmap to fortify Linux environments against vulnerabilities. SUSE’s commitment to providing comprehensive Linux patch management tools, such as SUSE Manager, supports businesses in adopting these best practices effectively. By embracing a proactive stance on patch management, organizations can safeguard their systems against emerging threats and maintain operational excellence.

Frequently Asked Questions (FAQs)

What is Linux Patch Management?

Linux patch management refers to the process of managing the installation of updates (patches) for Linux operating system components and software applications. These updates address security vulnerabilities, fix bugs, and provide performance enhancements.

Why is Patch Management Important?

Patch management is crucial for security, stability, and compliance. Regularly applying patches closes vulnerabilities, preventing potential exploits by cyber attackers, and ensures that systems run efficiently and reliably.

How Often Should I Apply Patches?

The frequency of patch application can vary depending on the criticality of the patch and the environment. Security patches should be applied as soon as possible, while others might be scheduled during regular maintenance windows. It’s recommended to establish a patch cycle that fits your organization’s needs and risk profile.

Can Patch Management be Automated?

Yes, patch management can and should be automated to ensure timely updates, reduce manual errors, and free up IT resources for other critical tasks. Tools like SUSE Manager provide automation capabilities for managing patches across a Linux environment.

What Should I Do If a Patch Causes Issues?

Before deploying patches, it’s essential to test them in a non-production environment. If an issue arises after deployment, having a rollback plan is crucial. Most patch management tools provide mechanisms to revert changes if necessary.

How Can I Prioritize Which Patches to Apply First?

Patches should be prioritized based on the severity of the vulnerabilities they address and the criticality of the systems affected. Security patches that fix high-risk vulnerabilities on critical systems should be at the top of the list.

SUSE & Youmoni – a complete IoT stack for public and private IoT use cases

Wednesday, 8 May, 2024

Technical whitepaper authored by:

Rhys Oxenham, Senior Director of Field PM & Engineering, SUSE Edge

Johan Edgren, Founder and CEO at Youmoni

Access the PDF here.

 

 

Introduction

Youmoni’s cutting-edge, end-to-end IoT platform is now integrated with the SUSE Edge stack, including SUSE Linux Enterprise Micro (SLE Micro) for edge computing and sensor integration. Youmoni and SUSE are executing on a common roadmap, which includes migrating the existing containerized infrastructure to Kubernetes for next-generation edge devices (based on K3s, or Podman for the smallest devices), IoT management in the cloud (based on RKE2), and Rancher Prime for end-to-end management of the entire fleet. Youmoni has developed a comprehensive and flexible enterprise IoT platform alongside SUSE’s secure and scalable edge infrastructure. The end-to-end offering is complete for any enterprise or partner that wants to digitize its assets using IoT and to transfer public cloud IoT solutions to its own private cloud and/or on-premises infrastructure.

 

Youmoni IoT Platform

The Youmoni IoT Platform includes all the components necessary to jump-start an IoT project or solution. A project can typically be up and running in days rather than months, eliminating technical debt related to infrastructure and user interface coding. It includes edge computing, a gateway application layer, and a ready-to-use backend with microservices delivering the IoT platform. For end users, it also includes ready-to-use dashboards and apps with modular user interfaces developed using React and React Native technology. The Youmoni suite provides service apps for maintenance, logistics for drivers, and consumer apps for connected products.

 

SUSE and Youmoni IoT Stack

The SUSE and Youmoni Stack for IoT is a unique combination for any organization or enterprise that wants to deploy, run, scale, and maintain a single, or multi-tenant, IoT platform for public or private IoT use cases. SUSE’s comprehensive, highly scalable, and secure infrastructure enables any IoT deployment and supports the end-to-end lifecycle of an edge-computing implementation, right from initial onboarding through to patching, monitoring, and lifecycle management.

 

From public cloud to on-premises infrastructure

The unique combination of SUSE infrastructure and the Youmoni IoT Stack offers an end-to-end IoT solution that runs on public cloud infrastructure, on-premises infrastructure, or a combination of both, making the solution incredibly dynamic and flexible for a wide variety of use cases and requirements.

 

SUSE product support and future roadmap

SUSE and Youmoni have developed a common road map for the integration of more products and concepts. The primary focus for the next phase is SUSE’s Rancher Prime integration for deployment and monitoring of the complete stack including edge computing. We are also further tightening the integration of the containerized components of the Youmoni solution towards the utilization of SUSE’s Kubernetes offerings, predominantly K3S on the edge devices, providing support for edge container monitoring, security, and updates.

 

Security and scalability

Security and scalability are top priorities for both SUSE and Youmoni. The SUSE Linux Enterprise infrastructure, enabling a hardened, immutable, and highly secure operating system, and the containerized Youmoni IoT back-end services together ensure that the platform not only delivers confidence to customers but also scales according to each customer’s demand. The architecture is based upon microservices that scale and multiply as needed in each container (pod). The whole platform can be managed by the SUSE Rancher platform (integration currently in progress). Edge computing, sensor data and API requests are filtered through layered, software-defined networks; proxies map ports to services; and all requests need verified tokens (JWTs) to access the REST APIs. Security is taken a step further by introducing mandatory intra-service tokens in the Youmoni platform, and persistent metadata describing IoT assets can be encrypted using the built-in Youmoni Crypto Service. Sensor data stored in separate time series databases is anonymized and uses only device IDs and key/asset IDs as metadata.

 

IoT End to End

Connect: Connect your assets by adding sensors or by using existing data interfaces. An IoT gateway sends the data to the Youmoni IoT cloud where it is stored, visualized, processed, and analyzed. 

Visualize: To maximize your insights, we visualize your data to make it easy to understand. The Console, which can easily be branded with a company design, presents the connected assets, their location, sensor status, notifications and more.

Asset focus: The Youmoni IoT Platform focuses on business objects, things, and assets rather than technical IoT devices. The Console presents everything in a business context, not a device context. The asset model, with dynamic and separate storage of customers’ product and/or business-object metadata, makes data exchange and business integration easy.

 

Youmoni Services and Business Solutions

Several business solutions for different industrial verticals are available on the platform. Ready-to-use applications such as tracking of assets, vehicles, remote monitoring of machines and products, smart property solutions, and automated retail including mobile payment support are all available. Standard business solutions are defined as specific edge computing sensors and business logic, combined with data transformation rules, a suitable console (dashboard), application UI controls, sensor transformation, notification rules, and user roles. Specific and adapted business logic can be added using custom containers, and the Youmoni Machine Learning service utilizes deep learning for intelligent predictive maintenance and anomaly detection.

 

The Youmoni Platform Tech Stack

The Youmoni IoT Platform consists of four main layers. The foundation is formed by the Platform Services and Business Application layers, which can implement different applications for various business verticals. The application layer is easily customized with customer configuration, adapted user interfaces and branding. Each tenant and application can be configured with its own user, role, access model, IoT device, and sensor configuration, as well as business logic events for data exchange and notifications.

The stack is a multi-tenant platform developed for maximum flexibility and scalability. Backend services use event sourcing and are written in Scala. Adaptations and configurations are mainly implemented as JSON configurations, JavaScript, and/or in the system admin dashboard provided in the customer’s user interface (the Console). The presentation frameworks use modern React/React Native technology for native apps. Authentication is implemented using OAuth, and JWTs are used in REST APIs and user interfaces to ensure user/service session integrity across, and between, all services in the platform.

 

Devices, interoperability, and integration 

The platform already supports Youmoni-selected and integrated IoT hardware, which includes a wide variety of out-of-the-box sensors and peripherals, but it can also easily integrate third-party devices and services that use public IoT standards such as MQTT. All services expose REST APIs, and integration with business systems such as ERP, SCADA, MDM, and PIM systems is easily achieved. SUSE and Youmoni are also investigating tighter integration with SUSE’s upcoming inclusion of Project Akri, which provides a standardized mechanism for the discovery and utilization of IoT devices within Kubernetes.

 

Machine Learning

The Youmoni stack also includes a framework for implementing machine learning using TensorFlow for deep learning. Youmoni is working on training models for automated and predictive monitoring using autoencoders. The models can receive data from Youmoni-integrated sensors and learn patterns in industrial machines (e.g., engines and compressors) and/or behaviors in a smart property. The framework can predict normal behaviors and patterns and warn/notify if anomalies are detected, making proactive service and/or corrections a reality.

 

SUSE IoT Console and Field Service App

 

Verified edge computing hardware

At the edge, the Youmoni IoT Application stack (based on Kura/Java/Scala), today running in Podman containers, is verified with SLE Micro 5.4+ on standard Raspberry Pi and industrial RPi hardware, e.g. Kontron or Techbase. However, it’s important to note that any hardware platform that’s capable of running SLE Micro (an incredibly extensive list given the comprehensive ecosystem that SUSE shares with its hardware partners) can easily be adapted to the Youmoni IoT platform, on both x86_64 and aarch64 hardware, if I/O drivers are available and mapped into the container infrastructure.
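
As a minimal sketch of what that adaptation looks like in practice, the commands below install Podman on SLE Micro (via its transactional update mechanism, if it is not already present) and map a host I/O device into a gateway container; the image reference and device path are hypothetical placeholders.

# SLE Micro is transactional: package changes go through transactional-update
transactional-update pkg install podman
reboot

# Run a containerized gateway and map a host serial device into it.
# The image reference and /dev/ttyUSB0 are placeholders -- substitute your own.
podman run -d --name iot-gateway \
  --device /dev/ttyUSB0 \
  --restart=always \
  registry.example.com/youmoni/gateway:latest

podman ps   # confirm the gateway container is running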

 

Youmoni Hardware

Youmoni also designs and manufactures IoT hardware with a focus on multipurpose sensors and gateway boards. Youmoni Sense features sensors such as radar, temperature, humidity, acceleration, a microphone, and external I/O for analogue sensors. RS232 and/or CAN bus can be used in numerous control and monitoring use cases. Youmoni Data Plug is a data and sensor integration card for multiple protocols and communication standards. Both devices support Wi-Fi, Bluetooth and 5G using an add-on card or external modem. The hardware devices can be customized for the customer’s use cases and optimized for cost, size, standards, mounting or other specific needs. The Youmoni embedded platform is a middleware platform written in C/C++ running on FreeRTOS.

 

Kontron Edge with SLE Micro and Youmoni Sense

Contacts and more information at:

Youmoni
https://youmoni.com

https://youmoni.com/contact

 

SUSE

https://www.suse.com/solutions/edge-computing/

https://www.suse.com/contact/

SUSE Revolutionizes Enterprise Kubernetes Management with Rancher Prime: See it in Action

Tuesday, 30 April, 2024

For the past 30 years, SUSE has been a beacon of innovation and reliability for the largest organizations around the globe. SUSE announced another leap in cloud native technology with Rancher Prime at Kubecon EU in Paris. The new Rancher Prime further simplifies container management for all skill levels and empowers enterprises to excel in their cloud native journey while avoiding common traps.

 

All skill levels tap into the power of Kubernetes through self-service

Kubernetes is the de facto container orchestrator, and its greatest strength is possibly also its biggest weakness: intricate configurations and a steep learning curve. Enterprises crave a solution that demystifies this complexity and makes Kubernetes accessible to all skill levels without compromising on functionality.

Rancher Prime was born from this need. It is a platform designed to make Kubernetes management intuitive and inclusive. Rancher Prime caters to a diverse audience, from the less technical staff who appreciate its user-friendly interface to the seasoned developers who love the terminal and platform engineers who seek granular control over their environments.

Empowering through simplification

One of the most lauded features of Rancher Prime is its authentication system, which streamlines the provisioning and management of Kubernetes clusters. This simplification extends to the developers’ realm, where they can independently deploy applications and manage resources while having the transparency of a terminal view for real-time adjustments at every stage.

Check out the new Rancher Prime!

Experience the exciting new Rancher Prime in less than 5 minutes with Erin Quill, SUSE’s Principal Technical Marketing Manager.

 

 

The benefits of Rancher Prime

Rancher Prime is more than just a Kubernetes management platform; it’s a catalyst for business transformation. Here’s how Rancher Prime can help your team:

  • Accessibility for All: From novices to experts, Rancher Prime offers something for everyone. 
  • No Vendor Lock-in: Rancher Prime is 100% open source and serves as an abstraction layer to deploy wherever you need to.
  • Prime Support: Access expert assistance, documentation and training.
  • Streamlined Operations: Simplify the creation and management of Kubernetes clusters with an intuitive interface.
  • Developer Empowerment: Self-service capabilities and terminal transparency accelerate the development cycle with the Application Collection.
  • Enhanced Security: Robust authentication and role-based access control (RBAC) ensure a secure environment.
  • Enterprise-Grade Lifecycle: Enjoy production-grade quality, security updates, and critical patches for up to 18 months throughout the lifecycle.
  • Zero-Trust Security and Secure Supply Chain: Ensure compliance and security for mission-critical workloads and platforms.
  • Prime Add-ons: Customize your platform even more with additional enterprise content and features and vetted open source software.
  • Prime Priority: Collaborate to shape the future of Rancher Prime with early access to new features.

Want to learn more about Rancher Prime?

This blog discussed how Rancher Prime focuses on simplification and inclusivity in the world of cloud native management, providing the best developer experience. By bridging the gap between complexity and usability, Rancher Prime empowers enterprises to unlock their full potential and navigate the fast-paced realm of cloud native applications with confidence and ease. Talk to an enterprise solution expert to accelerate your innovation journey and expand your knowledge of Rancher Prime.

Exploring SUSE Edge 3.0: What’s new?

Friday, 26 April, 2024

SUSE Edge 3.0

 

The management of multiple Kubernetes clusters operating on diverse hardware in various locations presents significant obstacles. Edge computing is further complicated by factors such as limited connectivity, limited compute resources, energy consumption, and limited on-site expertise. SUSE Edge 3.0 is designed to enhance your Edge Computing strategy by providing better automation, high availability solutions, improved observability, and robust security. This article will delve into what’s new in SUSE Edge 3.0 and provide a comprehensive technical overview.

Open Source Foundations: Innovating with SUSE Edge 3.0

Open source is at the heart of everything we do at SUSE. It has been the foundation of major IT innovations over the past two decades and is essential for avoiding vendor lock-in, offering flexibility to adapt to business needs. With SUSE Edge 3.0, we extend Open Source Software and Edge Computing to be enterprise-ready, consumable, easy to implement and manage at scale.

Detailed Examination of SUSE Edge 3.0 stack


SUSE Edge 3.0 Stack

SUSE Edge 3.0 builds on a trusted stack of technologies like SLE Micro, Rancher Prime, and Kubernetes. For a better understanding of how SUSE Edge 3.0 works and its capabilities, let’s delve into the different components of the solution.

Rancher Prime offers a single pane of glass for the management of Kubernetes, with all the tooling necessary to manage multiple clusters at scale: observability, logging, service mesh, an application catalog and Fleet as a GitOps engine, enabling management as code not only for applications but also for clusters.

SUSE Manager and Elemental. This combination provides full lifecycle OS management to simplify administration tasks at the Edge and in the cloud. It also provides node onboarding for Rancher: when a new node starts, it registers against the endpoint running on Rancher and becomes available for Kubernetes deployment, facilitating remote onboarding and management of custom nodes.

Longhorn is a Cloud Native storage solution offering great performance in resource-constrained environments and flexible storage solutions for Edge environments.

NeuVector is a world-class container security solution that offers advanced capabilities such as supply chain security, container segmentation, and Layer 7 observability and scanning. NeuVector discovers normal connections and application container behavior and automatically builds a security policy to protect container-based services. NeuVector correlates application, network, process, and file access layers to assure you have the multi-vector accuracy needed for zero trust.

RKE2 and K3s are lightweight Kubernetes distributions based on the containerd runtime and deployed using a single binary. K3s is designed for IoT environments and smaller devices like Raspberry Pi boards, offering a simplified entry point to Kubernetes. In contrast, RKE2 prioritizes security and aligns more closely with upstream Kubernetes. RKE2 ships with default configurations that meet the CIS Kubernetes Benchmark requirements and supports FIPS 140-2 compliance, offering a robust platform for organizations with stringent security needs. For these reasons, SUSE recommends RKE2 for enterprise customers and highly regulated industries.
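
As a hedged sketch of what enabling that hardened profile looks like, the commands below install an RKE2 server with the CIS profile set in its configuration file; the install script URL and config path follow the RKE2 documentation, while the exact profile value and the prerequisite host settings (etcd user, kernel parameters) depend on the RKE2 release and its hardening guide.

# Write the RKE2 server configuration before installing
mkdir -p /etc/rancher/rke2
cat <<'EOF' > /etc/rancher/rke2/config.yaml
profile: cis                     # profile value varies by RKE2 release -- see the hardening guide
write-kubeconfig-mode: "0600"
EOF

# Note: the CIS profile also expects the etcd system user and the kernel
# parameters from the RKE2 hardening guide to be in place before startup.
curl -sfL https://get.rke2.io | sh -
systemctl enable --now rke2-server.service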

SLE Micro is the foundation of our Edge solution. Based on SUSE Linux Enterprise technology, it is immutable and transactional, making it perfect for deployments at the Edge. It is secure, offers a smaller attack surface and is compliant with multiple security standards, such as FIPS 140-2, DISA SRG/STIG and CIS.

The combination of these technologies offers a solid foundation to manage Kubernetes, applications and Linux running on different hardware or IaaS platforms at scale. SUSE Edge 3.0 is a horizontal platform that provides all the different tools needed to cover the multiple use cases and situations that different verticals or businesses will find at the Edge, from IoT devices to small regional data centers. Furthermore, SUSE Edge 3.0 has a low footprint, making it energy-efficient and a good match for both business and the environment.

Features and enhancements in SUSE Edge 3.0

All new features and components in SUSE Edge 3.0 aim to improve management and security, either by optimizing the stack or providing validated designs for Edge deployments. They also aim to improve HA capabilities or extend Kubernetes’ value at the Edge.

  • Edge Image Builder: EIB automates the creation of complete OS images for edge deployments, streamlining the setup from the operating system to Kubernetes using just a configuration file. With automated, code-based deployment strategies, including GitOps, you can improve your edge security and reduce misconfigurations.
  • Rancher CAPI Integration and Metal3 Infrastructure Provider: Provides automated provisioning on bare metal, enhancing your edge infrastructure with SLE Micro & RKE2. This provisioning system needs servers with a BMC to work. This is one of the provisioning solutions that SUSE Edge offers. 
  • MetalLB Support: Solves the lack of a Kubernetes-native network load balancer for bare-metal clusters, ensuring functionality across non-IaaS platforms (a minimal configuration sketch follows this list).
  • SUSE Edge Stack Validation: Assures customers that deploying SUSE Edge 3.0 according to our recommendations will cover their use cases, helping them avoid problems and misconfigurations and be successful at the Edge.
  • Fully air-gapped and connected support, to help our customers adapt to the different situations at the Edge.
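
As referenced above, the following is a minimal sketch of a MetalLB Layer 2 configuration using the upstream CRDs; the address pool is a placeholder range for your own bare-metal network.

# Minimal MetalLB L2 configuration (upstream metallb.io CRDs); the address
# range below is a placeholder for your bare-metal network.
cat <<'EOF' | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: edge-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.10.240-192.168.10.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: edge-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - edge-pool
EOF

# Services of type LoadBalancer in the cluster will now receive an address
# from the pool, even without a cloud provider integration.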

Additional Features

  • Akri for Industrial IoT Management: Akri detects heterogeneous leaf devices, such as IP cameras and USB devices, and adds them as resources within a Kubernetes cluster, alongside embedded hardware resources such as GPUs and FPGAs, enabling a common way to manage IoT devices from Kubernetes. [Tech Preview]
  • Two-node HA with Synadia NATS [Tech Preview]
  • Mesh Expansion with Buoyant Linkerd
  • Multi-Rancher Edge Visibility UI: New implementation using an extension to have visual information of all your managed clusters, their location, and state from a single UI. [Tech Preview]

SUSE Edge 3.0 Benefits at the Edge

The enhancements in SUSE Edge 3.0 are designed to simplify the management of large-scale, distributed Edge environments. They provide reliable, scalable solutions that reduce the complexity of deployments and ensure high availability across your network. As Edge computing continues to evolve, SUSE remains committed to innovation, ensuring our solutions meet your most demanding requirements.

Read more about SUSE Edge:

Come visit us at:

What’s New in SUSE ATIP 3.0?

Friday, 26 April, 2024

SUSE ATIP 3.0

Introducing ATIP

SUSE Adaptive Telco Infrastructure Platform (ATIP) 3.0 represents a significant advancement in telco-optimized edge computing solutions. Designed to empower telecom companies in modernizing their networks, ATIP provides a robust platform for innovation and accelerated deployment of containerized network functions (CNFs) in a cloud-native environment.

 

Addressing Telco Challenges

Telecommunications is a fiercely competitive and complex industry, especially amidst the shift from virtualized to containerized network functions. Telecom operators require advanced tools and partnerships to navigate this landscape successfully. In Europe, major CSP players have united under the Linux Foundation’s Sylva Project, aiming to develop a standardized cloud platform tailored to Telco needs. SUSE actively contributes its expertise and technologies to this initiative. ATIP emerged as a commercial implementation of the Sylva Project’s goals, offering added value through lower energy consumption and a commitment to open-source principles.

 

Components of ATIP

ATIP 3.0 builds upon the robust foundation of the SUSE Edge Platform, customized specifically for Telco operators’ requirements. Leveraging SUSE and Rancher’s collective expertise and accumulated learning, ATIP delivers an enterprise-ready implementation of the Sylva Project, equipped to meet the diverse needs of our customers.

To simplify, we can divide ATIP into two parts: Management and Runtime. The Runtime Stack is what we deploy in the different locations to run the workloads and provide the service to customers and users. The Management Stack sits in one centralized location, or several, depending on the implementation, managing and adding capabilities to the Runtime Stack.

 

Solution Architecture

 


SUSE ATIP 3.0 Stack Architecture

 

ATIP 3.0’s management stack revolves around Rancher Prime, offering crucial tools for observability, logging, and GitOps-based management across multi-cluster and multi-cloud environments. Emphasizing management-as-code and automation, ATIP streamlines operations for bare metal and multi-cluster setups. At the runtime layer, SUSE Linux Enterprise Micro (SLE Micro) offers a transactional and immutable operating system, enhancing security and facilitating easy rollback to avoid service interruptions. Complemented by RKE2, a lightweight Kubernetes distribution focused on security and compliance, ATIP ensures a stable and secure environment for Telco-grade CNFs. Additional components such as NeuVector for container security and Longhorn for cloud-native storage further enrich the platform’s capabilities.

 

ATIP 3.0 new Features 

ATIP is designed to provide the highest level of automation possible and leverage CAPI and Metal3 alongside Rancher Prime and GitOps methodologies to set up a Zero Touch Provisioning solution that uses Git as a single source of truth for your cluster deployments and management. 

This solution offers multiple provisioning options. With the new Edge Image Builder, you can create a custom cluster from the operating system up to Kubernetes ready to deploy. Elemental enables a ‘call home’ feature for pre-configured nodes, allowing them to receive instructions from a central management system. Additionally, the platform integrates with CAPI, offering even more flexibility for provisioning and managing edge clusters.

ATIP 3.0 allows configuring and tuning SLE Micro, K3s, or RKE2 clusters at scale to implement the CNFs necessary for the latest 5G deployments, enabling a fully automated workflow. But it doesn’t stop there; there are many new features:

  • Sylva Project Alignment: ATIP 3.0, as a commercial implementation of the Sylva Project, closely aligns with the reference architectures and implementations of the Sylva Project to provide a supported and enterprise-ready solution for Telco operators.
  • Edge Image Builder (EIB): EIB automates the creation of complete OS images for edge deployments, streamlining the setup from the operating system to Kubernetes using just a configuration file. It also offers advanced customizations with custom scripts, essential for Telco deployments. EIB, in combination with CAPI and Metal3, simplifies the provisioning process for bare-metal clusters and improves compliance, avoiding misconfigurations and security issues.
  • Bare-metal cluster provisioning via CAPI and Metal3: This combination of technologies, along with the integration of Rancher Prime and GitOps methodologies, allows ATIP 3.0 to deliver Zero Touch Provisioning capability, providing one of the fastest provisioning workflows in the market for bare-metal. This provisioning system needs servers with a BMC to work. However, this is just one of the three provisioning methods that ATIP provides.
  • Rolling in-place upgrade via CAPI: Building on what was said before, CAPI makes it easy to set up and upgrade clusters. There are two ways to upgrade clusters on bare metal. The first way is to update each node individually until the whole cluster is updated. The second way is to install the new version of Kubernetes on new nodes and remove the old ones.
  • Telco Profiles for SLE Micro and RKE2: These specific templates provide the necessary configurations for standard Telco setups.
  • MetalLB support for downstream clusters: Bare-metal deployments lack load balancers found in IaaS platforms like AWS. MetalLB provides this functionality as a deployment on Kubernetes. SUSE has created the “Endcopier operator” to improve MetalLB behavior by load balancing the RKE2/K3s Kubernetes API in multi-node deployments.
  • Fully Air-gapped and connected support: This feature helps our customers adapt to different Edge situations. 

 

Tech preview features:

  • Multi-Rancher Edge Observability UI: This new implementation, using an extension, provides visual information on all your managed clusters, their location, and state from a single UI. It simplifies the operator’s job while avoiding disruptions to your infrastructure.

 

How ATIP 3.0 Benefits You

ATIP 3.0 empowers Telco operators to deploy and manage infrastructure with greater ease, speed, and efficiency. By adopting vendor-neutral APIs and leveraging open-source technologies, ATIP accelerates time-to-market while minimizing costs. Moreover, ATIP’s energy-efficient design aligns with sustainability goals, making it a responsible choice for both businesses and the environment.

In conclusion, SUSE ATIP 3.0 represents a significant step forward in Telco Edge Computing, offering a comprehensive solution tailored to the evolving needs of the telecommunications industry.

Read more about ATIP 3.0:

Come visit us at:

Synergy Unleashed: the Dynamic Collaboration of SUSE and Krumware in Platform Engineering

Thursday, 25 April, 2024

SUSE GUEST BLOG AUTHORED BY:

Colin Griffin, Founder and CEO at Krumware

 

The cloud-native landscape has forever changed the way organizations develop, implement, and manage software systems. Now, more than ever, organizations have the flexibility to control their own clouds, but with great freedom comes great responsibility and complexity. 

Krumware has been developing cloud-native technologies and applications alongside the evolution of containers and Kubernetes, since 2016. With practices developed through first-hand experience, Krumware provides hands-on application and platform engineering support to help organizations fill critical gaps in cloud-native software development and management.

The SUSE-Krumware strategic partnership leverages deep software development expertise from Krumware and unmatched infrastructure expertise from SUSE to develop well-integrated and tested platform patterns and tools that meet the needs of many types of users, dramatically improving speed to market, team interaction, and cloud platform maturity.

Together Krumware and SUSE provide complete support for organizations seeking to provide their people with robust self-service platforms, tools, and capabilities that will help them thrive. We’re excited to introduce comprehensive platform solutions and support for Operators, Developers, AI/ML, data commons, and other critical IT business needs.

 

 

Customer Challenges 

Platform Engineering is hard. It presents a significant implementation journey for organizations, because extensive work is required to build the services and tools that developers and the business require, not just the platform foundations. IT organizations are finding this out the hard way as they are tasked with providing self-service capabilities to developers, data, ops, AI/ML teams, DevOps, and beyond, both internal and external.

The difficult reality is that Platform Engineering requires a combination of skillsets that extend beyond the capabilities of many infrastructure teams, such as application development and product management.  From the perspective of the users, platform needs are highly contextual; they vary team-by-team and oftentimes project-by-project. This need has driven the formation of platform engineering teams, but many organizations are not yet equipped.

According to Gartner (Gartner, 2023), “by 2026, 80% of large software engineering organizations will establish platform engineering teams as internal providers of reusable services, components and tools for application delivery. Platform engineering will ultimately solve the central problem of cooperation between software developers and operators.”

 

How this solution addresses the challenges

Krumware is a unique technical partner that actively uses SUSE’s Prime solutions in practice. Krumware themselves operate as cross-functional development and platform teams, and establish integrations, practices, and golden paths for the benefit of their teams and partners.

A well-architected platform is open and extensible and provides the right tools at the right time to the right people. With this approach, SUSE Rancher Prime is the ideal core cloud platform of choice for platform teams and the most important piece of the platform puzzle. Krumware layers on complementary tools and patterns to provide Platform Stacks, which deliver turnkey solutions for common scenarios such as internal developer platforms. 

For Day 2 operations, SUSE provides unmatched product support and Krumware provides hands-on software development and platform engineering support to fill skill gaps, helping teams not only run the cloud but also work together while taking full advantage of it.

Don’t waste time planning for tomorrow; start today. With SUSE Prime Solutions and Krumware, shortcut your cloud maturity and platform team formation to give your teams and business a lasting edge.

SUSE and Krumware are excited to help organizations build better software and meet the IT needs of their people, together.

 

Next steps:

Do you need help with:

  • Bespoke tools and applications, and application delivery
  • Platform engineering and integration
  • Cloud platform strategy and modernization

Contact SUSE and Krumware to get started with bundled support.

 

SUSE One Partner Program, Innovate specialization

Join our SUSE One Partner Program and become an Innovate partner to collaborate on innovation, leverage market trends, and enhance customer experiences. 

Enroll in our Innovate specialization and work with us to ensure the compatibility of software and hardware built on or with SUSE offerings, and give your customers confidence that your solutions have been validated for functionality, performance, and reliability.

 

AUTHOR BIO

Colin Griffin is Founder and CEO at Krumware.

He is a practicing software engineer specializing in cloud-native application and infrastructure development, with an emphasis on developer enablement and platform engineering, and he is a Co-Lead of the CNCF Platforms Working Group. He founded Krumware with the goal of enabling companies to build better environments for their developers; through that work, Krumware helps organizations build better software.

 

5 Reasons Why Linux Choice Matters

Tuesday, 2 April, 2024

Navigating Choice in Open Source

SUSE’s open source ethos has always been about choice, transparency and community. SUSE released its first enterprise Linux distro over 30 years ago, and we’ve been managing multiple Linux distributions for over 20 years with SUSE Manager. That ethos is also why SUSE released SUSE Liberty Linux. SUSE Liberty Linux is a technology and solutions offering that lets you keep your desired Linux OS (CentOS or RHEL) and get your security patches, maintenance updates and technical support from SUSE. In today’s open source landscape, this is particularly important – especially if you are running a Linux distro that is rapidly reaching end of life.

You might be thinking, “There are a number of distros that I can choose from, why should I choose SUSE Liberty Linux?” Here are 5 very good reasons why SUSE Liberty Linux should be at the top of your list.

Zero migration

Choosing SUSE Liberty Linux means that July 1, 2024 (the day after RHEL 7 and CentOS 7 reach their end of maintenance) is just another day in your data center. SUSE Liberty Linux is a technology and support solution that lets you continue using the RHEL 7 and CentOS workloads you already have, while providing you with a unified support experience for managing your heterogeneous IT environments. This means zero risky migrations, zero retraining, and zero disruptions. And because we know how priceless security is to your data center, we proactively provide CVE security patches and maintenance updates on a regular cadence. So when considering other options, consider the cost of migration, retraining and retesting – then consider SUSE Liberty Linux.

Full compatibility

SUSE Liberty Linux is fully compatible at the application binary interface (ABI) level with the versions of RHEL currently supported and available today, and with CentOS. User-space applications that run on RHEL or CentOS are expected to run with equivalent performance and functionality on SUSE Liberty Linux.*

We backport fixes into the CentOS code the same way we do with our own software, keeping 100% compatibility of the API and ABI. We fix the vulnerabilities you have without breaking anything. We’ve done this for decades. Now, we’re happy to do it for you.
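
Because backported fixes keep the package version number the same, the version alone won’t tell you whether a given CVE has been addressed; the package changelog will. As a minimal illustration (a generic sketch using the standard rpm command, not a SUSE-specific tool; the package name and CVE ID below are only placeholders), a few lines of Python can check the changelog of an installed package for a particular CVE:

    import subprocess

    def cve_is_patched(package: str, cve_id: str) -> bool:
        """Return True if the installed package's RPM changelog mentions the CVE."""
        result = subprocess.run(
            ["rpm", "-q", "--changelog", package],  # list the changelog of the installed package
            capture_output=True,
            text=True,
            check=True,
        )
        return cve_id in result.stdout

    if __name__ == "__main__":
        # Placeholder package and CVE; substitute the ones you care about.
        print(cve_is_patched("openssl", "CVE-2023-0286"))

Because RHEL, CentOS and SUSE Liberty Linux all use RPM packaging, the same check works unchanged before and after you switch your update source to SUSE.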


SUSE stands behind its SUSE Liberty Linux offering to the same degree we do all our products, and we have decades of experience helping customers migrate between, and manage, different enterprise Linux workloads.

* Factors in each customer’s environment will impact the performance of the OS and the applications and so we advise customers to thoroughly test their specific applications on their OS prior to running production workloads.

Backed by SUSE

SUSE Liberty Linux is backed by SUSE, a company whose Linux roots go back to 1992. That means that for more than 30 years, SUSE has been in the vanguard of open source technologies.

SUSE has an impressive bench of software engineers who eat, breathe, and sleep open source. We are heavily involved in, and contribute to, the open source communities on a regular basis. We believe in transparency and openness, and we adhere not only to the letter of open source licenses but also to the spirit of openness. You have the right to choose; we’re here to support you.

Support for your entire infrastructure

SUSE makes Linux, but we manage multiple Linux distributions. We understand that heterogeneity is the reality in most businesses. But managing and supporting a heterogeneous Linux environment can be hard, complex and time-consuming. SUSE has simplified this:

  • With SUSE Manager, which manages over 16 different Linux distros from a single pane of glass.
  • With a world-class support team that provides support not only for SLES, but also for your other distros.
  • With optional support services that provide a named premium support resource for your entire infrastructure.

Services that lead to success

SUSE Global Services has the expertise to help you navigate the open source landscape. Whether you choose to transition to SUSE Liberty Linux or migrate to SLES, we have the expertise available to ensure your transition is a success. Just some of the services we offer are:

  • Premium Support Services provides you with direct access to a named engineer who can support your entire environment, including RHEL, CentOS, SLES and Liberty.
  • Consulting engagements to optimize your transition to Liberty. These range in scope from validating the health of your current infrastructure, to implementing solutions that simplify its management, to migration services that design and implement a migration to SLES.
  • eLearning subscriptions that provide limitless learning opportunities for you and your team to learn about SUSE technologies on your own time.

Summary

As you make your decision on what to do with your current infrastructure, consider your choices carefully. Consider SUSE Liberty Linux.

Curious to learn more? Watch the on-demand webinar or reach out to your account rep today!