Delivering Inspiring Retail Experiences with Rancher

Wednesday, 27 May, 2020

“As our business grew, we knew there would be economies in working with an orchestration partner. Rancher and Kubernetes have become enablers for the growth of our business.” – Joost Hofman, Head of Site Reliability Engineering Digital Development, Albert Heijn

When it comes to deciding where and how to shop for food, consumers have a choice. And it may only take one negative experience with a retailer for a consumer to take their business elsewhere. For food retail leader Albert Heijn, customer satisfaction and innovation at its 950+ retail stores and e-commerce site are driving forces. As the top food retailer in the Netherlands (and with stores in Belgium), the company works to inspire, surprise and provide rewarding experiences to its customers – and has a mission to be the most loved and healthiest company in the Netherlands.

Adopting Containers for Innovation and Scalability

Not surprisingly, the fastest growing part of Albert Heijn’s business is its e-commerce site – with millions of visitors each year and expectations for those numbers to double in the coming years. With a focus on the future of grocery shopping and sustainability, Albert Heijn is at the forefront of container adoption in the retail space. Since first experimenting with containers in 2016, they are now the preferred way for the company’s 200 developers to manage the continuous development process and run many services on e-commerce site AH.nl in production. By using containers, developers can push new features to the e-commerce site faster – improving customer experience and loyalty.

Before adopting containers, Hofman’s team ran a traditional, monolithic infrastructure that was costly and unwieldy. With a vision of unified microservices and an open API to support future growth, they started experimenting with containers in 2016. While they experienced uptime of 99.95 percent after just six months, they faced other challenges and realized they needed a container management solution.

In 2018, Hofman turned to Rancher as the platform to manage the company's containers more effectively as the team migrated to the Azure cloud. Today, with Rancher, the infrastructure is set up to scale as user numbers grow dramatically. With Rancher automating a host of basic processes, developers are free to innovate.

High availability is also a critical need for the company – because online shopping never sleeps. With a microservices-based environment built on Kubernetes and Rancher, developers can develop, test and deploy services in isolation and ensure reliable, fast releases of new services.

Today, with a container-based infrastructure, the company has reduced management hours and testing time by 80 percent and achieved 99.95 percent uptime.

Read our case study to hear how, with Rancher, Hofman and the AH.nl team have embraced containers as a way to focus on innovation and staying ahead of the competition.


Rancher Academy Has Moved!

Tuesday, 19 May, 2020

Editor’s Note: Since the launch of Rancher Academy in 2020, a lot has happened. Rancher Academy has evolved into Academy classes, now available in the SUSE & Rancher Community. Our Up and Running: Rancher class aligns with the latest release of Rancher (Rancher 2.6). The class is available on demand. Other Rancher Academy Classes include Up and Running: K3s and Accelerate Dev Workflows.

Today we launched the Rancher Academy, our new free training portal. The first course is Certified Rancher Operator: Level 1, and in it you’ll learn exactly how to deploy Rancher and use it to deploy and manage Kubernetes clusters. This professional certification program is designed to help Kubernetes practitioners demonstrate their knowledge and competence with Kubernetes and Rancher – and to advance in their careers.

Why is Rancher Labs doing this? We want all the members of our community to have the most relevant and up to date skills in Rancher – the most widely adopted Kubernetes management platform. We also want to give you the skills to be at the forefront of the cloud-native way of doing business, which is agile, open source oriented and maniacally focused on innovation.

Market Demand for Kubernetes Skills Far Exceeds Supply

We’re seeing massive demand in the industry for people with Kubernetes skills, and it’s continuing to rise as organizations adopt cloud-native strategies and embrace Kubernetes. There’s nowhere near enough supply right now. What I’m seeing in the industry is that organizations are trying to quickly get their teams up to speed on Kubernetes. You’ve got people with reliable non-cloud-native skill sets, or non-Kubernetes skill sets, who are suddenly being given these Kubernetes environments that they need to maintain.

Businesses and governments all over the world use Rancher to deploy and manage their Kubernetes clusters. People in those organizations are working with Rancher, but they might have learned it through the filter of their past knowledge and experience.

That’s where Rancher Academy comes in. Our objective is for an individual to go to an organization and say, “I have a certification from Rancher Labs,” and for organizations to know exactly what that means: that they were trained by Rancher, and that we’ve given our approval of their ability to execute according to our standards.

Is Rancher Academy Right for You?

Our first course, Up and Running: Rancher, is designed for people who want to install Rancher and use it to deploy and manage Kubernetes clusters. You’ll need to have some basic Kubernetes knowledge, but you don’t need to know anything about Rancher.

We intentionally chose not to include Docker or Kubernetes fundamentals in our course because there are other training courses that cover that material. Until today there were no courses specifically for Rancher.

The course starts off talking about the Rancher architecture, installing RKE (one of our two certified Kubernetes distributions) and installing Rancher into it. Whether you are brand new to Rancher or have been using it for a while, you’ll find value in the course. For those experienced with Rancher, you’ll validate the skills you already have, and perhaps learn some slightly different ways of doing things that are the official “Rancher-sanctioned” way. And for those new to Rancher, you’ll walk away with the confidence that you are using Rancher the right way.

Rancher Academy: How It Works

The program is online and self-paced, with Level 1 designed to be completed over five weeks. The course includes four hours of video content, with 87 units of instruction, quizzes, 37 hands-on labs and a final assessment.

The labs are designed for you to do on your own. The idea here is that we’re building muscle memory: you learn about it, you see it demonstrated and then you do it yourself. As you progress through the course, you build and maintain an infrastructure, and by the end of the course, you’ll have a highly available Rancher deployment with at least one downstream cluster.

Now you might be saying, “Whoa, this sounds like a lot of work.” The beauty of the course is that it’s self-paced. If you follow the five-week model, you’ll need to spend about three to five hours a week. On the other hand, if you’re so excited that you want to just blaze through it in a week, you can do that.

Throughout the course, as you’re learning the material, you can validate what you’ve learned through the quizzes, and you can easily go back if you need to repeat something. Along the way you’ll be building an environment, testing out workloads, trying out persistent storage and encountering challenges that are unique to your infrastructure. You’ll be developing the skills to solve those challenges, and you can get help along the way from the community.


Has Hybrid Cloud Finally Come of Age?

Thursday, 14 May, 2020

Hybrid cloud continues to be a hot topic within the IT industry.

That’s pretty amazing, because it seems like we’ve been talking about the concept for an eternity. Every cloud-related study or survey shows that it remains top of mind for enterprise business leaders and IT decision-makers. Just last month, I read that 87% of enterprises have now embraced a hybrid cloud strategy.

The attraction of a hybrid cloud approach is obvious. It makes it easier to run critical applications, workloads, services, and data on (or across) the most appropriate platforms. It also makes it possible to seamlessly rebalance or move them whenever needed. That all adds up to improved agility, flexibility, productivity, and scalability.

What exactly is a hybrid cloud?

That can be a difficult question to answer, because definitions are often a little hazy.

A hybrid cloud is normally two or more cloud platforms (usually a mix of public and private clouds) combined into a single infrastructure. This consolidated environment can then be controlled by a unified set of management tools, making it possible to move the applications between platforms or build them to span multiple clouds.

However, some organizations use the term “hybrid cloud” differently. Sometimes it describes the use of multiple independent cloud platforms (either private or public or both). In this multi-cloud scenario, applications are individually deployed to the most appropriate platform for the workload to optimize performance, functionality, and cost.

But what’s in a name? “A rose by any other name would smell as sweet,” right? Frankly, it is how your business chooses to define “hybrid cloud” that counts. You get to choose the strategy and infrastructure that is most appropriate for your organization.

The key considerations are:

  • What business advantages will your cloud strategy deliver?
  • What impact will it have on economy, performance, uptime, customer experience, and competitive advantage?

Those are critical factors for all of us to ponder – especially this year. The global COVID-19 emergency is putting business efficiency, productivity, and cost even more under the microscope. More of us are working remotely and that requires a rapid adjustment to how we use edge, core, and cloud services.

Open source is smoothing the path to hybrid clouds

Cloud computing has reached such a level of maturity and acceptance that it is now an indispensable component in virtually all the IT systems we rely on every day. It has become so omnipresent that we may as well drop the “cloud” label and just call it “computing”. It is a fundamental part of the software-defined infrastructures that enable our increasingly digitalized and data-driven world.

What matters most is what we do with all the agile computing capacity and capability we now have at our disposal. Which workloads will we migrate next and how will we modernize or enhance our existing applications for a cloud environment? Even more importantly, how will we architect, build, and manage the next generation of cloud-native applications and services?

In the past, implementing hybrid clouds was incredibly difficult. This is partly because each cloud platform has subtle but significant functional differences that make hybrid management painful.

But today, collaborative open source technologies are making things a whole lot easier. Open source projects such as Linux, Kubernetes and Cloud Foundry are at the heart of virtually all cloud-native computing solutions. They can be used to create a consistent environment on any cloud, making it possible to design genuine containerized cloud-native applications that are seamlessly portable across platforms. Hence, mature and enterprise-grade hybrid environments are now a reality.

If you’d like to know more, SUSECONdigital’20 is starting on May 20th.

Hybrid and multi-cloud is one of the key themes for the event. You can hear from SUSE specialists, customers and partners on how to make the most of the latest technology and strategies for your business.

Why not sign up for one of the following free online sessions:

Thanks for reading! More info on hybrid cloud solutions from SUSE can be found here: https://www.suse.com/solutions/managing-hybrid-clouds/

Jeff Reser

@JeffReserNC

Driving Sustainability in Retail with Kubernetes

Tuesday, 28 April, 2020

“With sustainability our primary focus, our technology strategy has to mirror our overall approach. With Rancher we’re driving real transformation to prime us for long-term growth.”
– Zach Dunn, Senior Director of Platform Operations and CSO, Optoro

Have you ever considered what happens to the items you return to e-tailers? In retail, especially e-commerce, nearly 25 percent of all goods are returned or don’t sell. In the US alone, the value of these goods is a staggering $500 billion, usually written off as losses by e-tailers. Beyond the economic impact, there are environmental consequences, with many goods ending up in landfills.

Optoro aims to break this cycle. As the world’s leading returns optimization platform, Optoro has pioneered a reverse logistics model to solve this excess goods problem. Using machine learning and predictive analytics, they route returned and excess goods to their next best home, whether it’s an end consumer, charity or recycler – anywhere in the world. Optoro operates a consumer resale site, www.blinq.com, and a wholesale site, www.BULQ.com. The company estimates that they have diverted 3.9 million pounds of waste from landfills, prevented 22.7 million pounds of carbon emissions, and donated 2.7 million items to charities.

Soon after joining the company, Senior Director of Platform Operations and CSO Zach Dunn decided to move the company from a cloud-based infrastructure to an on-premises one. Optoro’s workloads were steady-state: APIs and databases were never powered down, so costs would rise and fall with cloud expansion and contraction. By transitioning into a data center, Dunn could level-set his costs, driving greater predictability into financial management.

After converting their estate of VMs into Docker containers, Dunn and his team started experimenting with Kubernetes. While a move to Kubernetes made sense, they didn’t want to absorb additional costs and could not find a business case for OpenShift, GKE or EKS. They wanted a platform that allowed their developers to consolidate role-based access control and other backend processes and directly manage their clusters through an intuitive UI. Rancher checked all the boxes.

Following a successful proof of concept, the team started to migrate its services into containers and into Rancher. Currently it runs 12 of its 42 services in production, with plans to migrate the entire infrastructure.

Watch our video case study and hear directly from Dunn about Optoro’s journey from the cloud to the data center and the benefits of adopting Kubernetes and Rancher.


Enabling More Effective Kubernetes Troubleshooting on Rancher

Thursday, 16 April, 2020

As a leading open-source multi-cluster orchestration platform, Rancher lets operations teams deploy, manage and secure enterprise Kubernetes. Rancher also gives users a set of container network interface (CNI) options to choose from, including open source Project Calico. Calico provides native Layer 3 routing capability for Kubernetes pods, which simplifies the networking architecture, increases networking performance and provides a rich network policy model that makes it easy to lock down communication so that the only traffic that flows is the traffic you want to flow.

A common challenge in deploying Kubernetes is gaining the necessary visibility into the cluster environment to effectively monitor and troubleshoot networking and security issues. Visibility and troubleshooting is one of the top three Kubernetes use cases that we see at Tigera. It’s especially critical in production deployments because downtime is expensive and distributed applications are extremely hard to troubleshoot. If you’re with the platform team, you’re under pressure to meet SLAs. If you’re on the DevOps team, you have production workloads you need to launch. For both teams, the common goal is to resolve the problem as quickly as possible.

Why Troubleshooting Kubernetes is Challenging

Since Kubernetes workloads are extremely dynamic, connectivity issues are difficult to resolve. Conventional network monitoring tools were designed for static environments. They don’t understand Kubernetes context and are not effective when applied to Kubernetes. Without Kubernetes-specific diagnostic tools, troubleshooting for platform teams is an exercise in frustration. For example, when a pod-to-pod connection is denied, it’s nearly impossible to identify which network security policy denied the traffic. You can manually log in to nodes and review system logs, but this is neither practical nor scalable.

You’ll need a way to quickly pinpoint the source of any connectivity or security issue. Or better yet, gain insight to avoid issues in the first place. As Kubernetes deployments scale up, the limitations around visibility, monitoring and logging can result in undiagnosed system failures that cause service interruptions and impact customer satisfaction and your business.

Flow Logs and Flow Visualization

For Rancher users who are running production environments, Calico Enterprise network flow logs provide a strong foundation for troubleshooting Kubernetes networking and security issues. For example, flow logs can be used to run queries to analyze all traffic from a given namespace or workload label. But to effectively troubleshoot your Kubernetes environment, you’ll need flow logs with Kubernetes-specific data like pod, label and namespace, and which policies accepted or denied the connection.
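As a toy illustration of the kind of query flow logs enable, the sketch below filters Kubernetes-aware flow records by namespace and denied action. The field names (`src_namespace`, `action`, `policy`) and sample records are illustrative assumptions, not the actual Calico Enterprise log schema:

```python
# A minimal sketch of querying Kubernetes-aware flow logs, assuming each
# record is a JSON object carrying pod, namespace and policy context
# (field names here are illustrative, not a real product schema).
import json

FLOW_LOGS = [
    '{"src_pod": "cart-6f9", "src_namespace": "shop", "dest_pod": "db-0", '
    '"dest_namespace": "data", "action": "deny", "policy": "default-deny"}',
    '{"src_pod": "web-a1c", "src_namespace": "shop", "dest_pod": "cart-6f9", '
    '"dest_namespace": "shop", "action": "allow", "policy": "allow-shop"}',
]

def denied_flows(raw_logs, namespace):
    """Return denied flows originating from the given namespace."""
    flows = (json.loads(line) for line in raw_logs)
    return [
        f for f in flows
        if f["action"] == "deny" and f["src_namespace"] == namespace
    ]

for flow in denied_flows(FLOW_LOGS, "shop"):
    # The policy field shows which network policy denied the connection,
    # answering the "which policy denied this traffic?" question directly.
    print(f'{flow["src_pod"]} -> {flow["dest_pod"]} denied by {flow["policy"]}')
```

Because the Kubernetes context travels with each record, the same filter works regardless of which node the pods were scheduled on, which is what makes this approach scale where manual node-by-node log review does not.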

Calico Enterprise Flow Visualizer

A large proportion of Rancher users are DevOps teams. While ITOps has traditionally managed network and security policy, we see DevOps teams looking for solutions that enable self-sufficiency and accelerate the CI/CD pipeline. For Rancher users who are running production environments, Calico Enterprise includes a Flow Visualizer, a powerful tool that simplifies connectivity troubleshooting. It’s a more intuitive way to interact with and drill down into network flows. DevOps can use this tool for troubleshooting and policy creation, while ITOps can establish a policy hierarchy using RBAC to implement guardrails so DevOps teams don’t override any enterprise-wide policies.

Firewalls Can Create a Visibility Void for Security Teams

Kubernetes workloads make heavy use of the network and generate a lot of east/west traffic. If you are deploying a conventional firewall within your Kubernetes architecture, you will lose all visibility into this traffic and the ability to troubleshoot. Firewalls don’t have the context required to understand Kubernetes traffic (namespace, pod, labels, container id, etc.). This makes it impossible to troubleshoot networking issues, perform forensic analysis or report on security controls for compliance.

To get the visibility they need, Rancher users can deploy Calico Enterprise to translate zone-based firewall rules into Kubernetes network policies that segment the cluster into zones and apply the correct firewall rules. Your existing firewalls and firewall managers can then be used to define zones and create rules in Kubernetes the same way all other rules have been created. Traffic crossing zones can be sent to the Security team’s security information and event management (SIEM), providing them with the same visibility for troubleshooting purposes that they would have received using their conventional firewall.
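As a rough, hypothetical sketch of what a translated zone rule can look like as a plain Kubernetes NetworkPolicy, the manifest below admits traffic into `zone: dmz` pods only from pods labeled `zone: web`. The zone labels and namespace are illustrative assumptions, not Calico Enterprise output:

```yaml
# Hypothetical example: restrict pods labeled zone=dmz so they accept
# traffic only from the zone=web tier, mirroring a firewall zone rule.
# Labels and namespace are illustrative, not from any real deployment.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: dmz-allow-from-web
  namespace: shop
spec:
  podSelector:
    matchLabels:
      zone: dmz
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              zone: web
```

Once a policy selects a pod for ingress, all other inbound traffic to that pod is denied by default, which mirrors the default-deny stance of a zone-based firewall.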

Other Kubernetes Troubleshooting Considerations

For Platform, Networking, DevOps and Security teams using the Rancher platform, Tigera provides additional visibility and monitoring tools that facilitate faster troubleshooting:

  • The ability to add thresholds and alarms to all of your monitored data. For example, a spike in denied traffic triggers an alarm to your DevOps team or Security Operations Center (SOC) for further investigation.
  • Filters that enable you to drill down by namespace, pod and view status (such as allowed or denied traffic)
  • The ability to store logs in an EFK (Elasticsearch, Fluentd and Kibana) stack for future accessibility
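The thresholds-and-alarms idea in the first bullet can be sketched very simply: count denied flows over a sliding time window and fire when a limit is exceeded. The window size and threshold below are illustrative assumptions, not product defaults:

```python
# A toy sketch of a threshold alarm on monitored data: raise an alert when
# denied-connection counts within a time window exceed a limit.
from collections import deque

class DeniedTrafficAlarm:
    def __init__(self, window=60, threshold=100):
        self.window = window        # seconds of history to keep
        self.threshold = threshold  # denied flows per window that trip the alarm
        self.events = deque()       # timestamps of denied flows

    def record_denied(self, timestamp):
        """Record a denied flow and report whether the alarm should fire."""
        self.events.append(timestamp)
        # Drop events that have aged out of the window.
        while self.events and self.events[0] <= timestamp - self.window:
            self.events.popleft()
        return len(self.events) > self.threshold

alarm = DeniedTrafficAlarm(window=60, threshold=3)
fired = [alarm.record_denied(t) for t in (0, 10, 20, 30)]
print(fired)  # the fourth denied flow within the window trips the alarm
```

In practice the alert would be routed to the DevOps team or SOC rather than printed, but the windowed-count logic is the core of any spike detector.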

Whether you are in the early stages of your Kubernetes journey and simply want to understand the “why” of unexpected cluster behavior, or you are in large-scale production with revenue-generating workloads, having the right tools to effectively troubleshoot will help you avoid downtime and service disruption. During the upcoming Master Class, we’ll share troubleshooting tips and demonstrate some of the tools covered in this blog, including flow logs and Flow Visualizer.

Join our free Master Class: Enabling More Effective Kubernetes Troubleshooting on Rancher on May 7 at 1pm PT.


SUSE Announces Free Access To Online Training

Friday, 10 April, 2020

As I mentioned in my recent blog, Fast Track Your Digital Transformation Today, the world is facing a complex and ever changing landscape of travel restrictions, school closures and work from home policies as it grapples to restrict the spread of COVID-19.

In light of the unprecedented and disruptive conditions in which many of our customers find themselves operating their businesses, I am pleased to announce an offer to help alleviate this: free access to SUSE online training.

Many of our customers have adopted work-from-home mandates around the world due to COVID-19.  This presents a challenge for IT data center and cloud professionals who need to keep their technology skills up to date. Now is an excellent time to get your data center and cloud teams more skilled on SUSE technologies and with that in mind we have a great offer for you.

Beginning mid-April, SUSE will make available course content from select, existing training videos on the SUSE Technical Training YouTube channel.

This offer is split into two initiatives: For the first initiative, around our technologies such as SLES Administration and Clustering, we are providing videos for the opening four sections of featured course titles. The videos for these featured courses will be available for a limited time through June 30. If your team is looking to further hone their skills on these or other SUSE technologies, then reach out to one of our training partners. Our training partners can offer hands-on, lab-intensive training in a virtual instructor-led training environment that is ideal for remote employees.

For the second initiative, around our technologies like software-defined storage, container administration and container application development, SUSE will provide videos of the complete courses for anyone who registers for the Accelerate Innovation Offer. These will be available through September 15, 2020. Again, if you need your employees to have the full virtual instructor-led course experience, then reach out to our training partners.

If you have any questions, please do not hesitate to contact your SUSE representative. In the meantime, we hope this gesture helps customers and partners maintain business continuity and sharpen skill levels in uncertain times.

Privacy Protections, PCI Compliance and Vulnerability Management for Kubernetes

Wednesday, 8 April, 2020

Containers are becoming the new computing standard for many businesses. New technology does not protect you from traditional security concerns. If your containers handle any sensitive data, including personally identifiable information (PII), credit cards or accounts, you’ll need to take a ‘defense in depth’ approach to container security. The CI/CD pipeline is vulnerable at every stage, from build to ship to runtime.

In this article, we’ll look at best practices for protecting sensitive data and enforcing compliance, from vulnerability management to network segmentation. We’ll also discuss how NeuVector simplifies security, privacy and compliance throughout the container lifecycle for organizations using Rancher’s Kubernetes management platform.

Shift-Left Security

The DevOps movement is all about shifting left, and security is no different. The earlier we can build security into the process, the better for developers and the security team. The concept of security policy as code puts more control into developers’ hands while ensuring compliance with security mandates. Best practices include:

Comprehensive vulnerability management

Vulnerability detection and management throughout the CI/CD pipeline is essential. To prevent vulnerabilities from being introduced into registries, organizations should create policy-based build success/failure criteria. As a further safeguard, they should monitor and auto-scan all major registries such as AWS Elastic Container Registry, Docker Hub, Azure Container Registry (ACR) and JFrog Artifactory. And finally, they should automatically scan running containers and host OSes for vulnerabilities to prevent exploits and other attacks on critical business data. With an auto-scanning infrastructure in place, containers can be auto-quarantined based on vulnerability criteria.

Recommendation:

  • Scan the Rancher OS (or other OS)
  • Integrate and automate scanning with Jenkins plug-in or other build-phase scanning extensions, plus registry scanning
  • Employ admission control to prevent deployment of vulnerable images
  • Scan running containers and hosts for vulnerabilities, preventing ‘back-door’ vulnerable images
  • Protect running containers from vulnerability exploits with ‘virtual patching’ or other security controls to prevent unauthorized network or container behavior.
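To make the policy-based build success/failure idea above concrete, here is a minimal sketch in Python. The thresholds and finding fields are hypothetical, not tied to any particular scanner:

```python
# A minimal sketch of policy-based build success/failure criteria, assuming a
# scanner has produced findings with a severity label and a CVSS score
# (thresholds and field names are illustrative, not from any real tool).

POLICY = {"max_critical": 0, "max_high": 2, "min_cvss_to_count": 7.0}

def build_passes(findings, policy=POLICY):
    """Fail the build when vulnerability counts exceed the policy limits."""
    critical = sum(1 for f in findings if f["severity"] == "critical")
    high = sum(
        1 for f in findings
        if f["severity"] == "high" and f["cvss"] >= policy["min_cvss_to_count"]
    )
    return critical <= policy["max_critical"] and high <= policy["max_high"]

findings = [
    {"cve": "CVE-2020-0001", "severity": "high", "cvss": 8.1},
    {"cve": "CVE-2020-0002", "severity": "medium", "cvss": 5.0},
]
print(build_passes(findings))  # one high finding is within the limit of 2
```

The same predicate can serve double duty: gating the build in CI and, at deploy time, feeding an admission controller or auto-quarantine decision for images already in a registry.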

Adherence to the Center for Internet Security Benchmarks for Kubernetes and Docker

The CIS benchmarks provide strong security auditing for container, orchestrator and host configurations to ensure that proper security controls are not overlooked or disabled. These checks should be run before containers are put into production, and continuously run after deployment, as updates and restarts can often change such critical configurations. Patching, updating and restarting hosts can also inadvertently open security holes that were previously locked down.

Recommendation:

  • Use CIS Scan in Rancher 2.4 to run CIS benchmarks for Rancher managed Kubernetes clusters and the containers running on them.
  • Augment CIS benchmarks with any customized auditing or compliance checks on hosts or containers which are required by your organization.

Privacy

Privacy is a critical component of many compliance standards. However, container environments raise PCI Data Security Standard (DSS) – and likely GDPR and HIPAA – compliance challenges in the areas of monitoring, establishing security controls and limiting the scope of the Cardholder Data Environment (CDE) with network segmentation. Due to the ephemeral nature of containers – spinning up and down quickly and dynamically, and often only existing for several minutes – monitoring and security solutions must be active in real-time and able to automatically respond to rapidly transforming attacks.

Because most container traffic is internal communication between containers, traditional firewalls and security systems designed to vet external traffic are blind to nefarious threats that may escalate within the container environment. And the use of containers can expand the CDE, extending the scope of critical protections to the entire microservices environment unless it is limited by a container firewall able to fully visualize and tightly control that scope.

Recommendation:

  • Inspect network connections from containers within and exiting the Rancher cluster for unencrypted credit card or Personally Identifiable Information (PII) data using network DLP
  • Provide the required network segmentation for in-scope (CDE) traffic for application containers deployed by and run on Rancher
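To make the network DLP idea concrete, here is a toy sketch that scans payload text for unencrypted card numbers using a pattern match plus a Luhn checksum. A real container DLP engine inspects packet payloads in-line; this string-based version is only illustrative:

```python
# A toy sketch of network DLP for cardholder data: find candidate card
# numbers in payload text and confirm them with the Luhn checksum so that
# random digit runs are not flagged.
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum, used to filter out random digit runs."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def find_card_numbers(payload: str):
    """Return candidate card numbers found in a payload string."""
    return [m.group() for m in CARD_PATTERN.finditer(payload)
            if luhn_valid(m.group())]

print(find_card_numbers("order total 49.99, card 4111 1111 1111 1111"))
```

On a match, a DLP engine would alert or block the connection rather than just report it, and would do so for both pod-to-pod and egress traffic.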

Compliance (PCI, GDPR, HIPAA and More)

Containers and microservices are inherently supportive of PCI DSS compliance across several fronts. In an ideal microservices architecture, each service and container delivers a single function, which is congruent with the PCI DSS requirement to implement only a single primary function with each server. In the same way, containers provide narrow functionality by design, meeting the PCI DSS mandate to enable only necessary protocols and services.

One might think that physically separating in-scope container environments would resolve these issues, but this can severely restrict modern automated DevOps CI/CD pipelines and result in slower release cycles and underused resources. However, cloud-native container firewalls are emerging that provide the required network segmentation without sacrificing the business benefits of containers.

Recommendation:

  • Deploy a cloud-native firewall to automate network segmentation required by compliance standards such as PCI.
  • Maintain forensic data, logs and notifications for security events and other changes.

How NeuVector Enhances Rancher Security

NeuVector extends Rancher’s capabilities to support and enforce PCI DSS, GDPR and HIPAA compliance requirements by auditing, monitoring and securing production deployments built on Rancher, including:

  • Providing a comprehensive vulnerability management platform integrated with Rancher admission controls and run-time visibility.
  • Enforcing network segmentation based on layer 7 application protocols, so that no unauthorized connections are allowed in or out of containers.
  • Enforcing that encrypted SSL connections are used for transmitting sensitive data between containers and for ingress/egress connections.
  • Monitoring all unencrypted connections for sensitive data and either alerting or blocking when detected.

The NeuVector container security platform is an end-to-end solution for securing the entire container pipeline from build to ship to run-time. The industry’s first container firewall provides the critical function of performing automated network segmentation and container DLP by inspecting all container connections for sensitive data such as credit cards, PII and financial data. The screenshot below shows an example of unencrypted credit card data being transmitted between pods, as well as to an external destination.

Image 1

A container firewall solution provides network segmentation, network monitoring and encryption verification – meeting regulatory compliance requirements. PCI DSS requires network segmentation as well as encryption for in-scope CDE environments. The NeuVector container firewall provides the required network segmentation of CDE workloads, while at the same time monitoring for unencrypted cardholder data that would violate the compliance requirements. Such violations can be the first indication of a data breach, a misconfiguration of an application container or an innocent mistake, such as a customer support person pasting credit card data into a case.

Next Steps for Securing Your Container Infrastructure

For organizations transitioning to container infrastructure, it is important to recognize that security matters throughout the lifecycle of the container. Compliance and privacy regulations require protection of customers’ information wherever it resides on the organization’s network.

In this article, we looked at some of the ways that you can protect sensitive data and enforce compliance in your container infrastructure. To learn more, join us for our free Master Class: How to Automate Privacy Protections, PCI Compliance and Vulnerability Management for Kubernetes on May 5.


SUSE Manager 4: The Smart Choice for Managing Linux and Comparing the Alternatives – Ansible, Chef, Puppet and SaltStack

Tuesday, 7 April, 2020

“Only SUSE Manager combines software content lifecycle management (CLM) with a centrally staged repository and class-leading configuration management and automation, plus optional state of the art monitoring capabilities, for all major Linux distributions.”

These days, IT departments manage highly dynamic and heterogeneous networks under constantly changing requirements. One important trend that has contributed to the growing complexity is the rise of software-defined infrastructures (SDIs). An SDI consists of a single pool of virtual resources that system administrators can manage efficiently and always in the same way, regardless of whether the resources reside on premise or in the cloud. SUSE Manager is a powerful tool that brings the promise of SDI to Linux server management.

You can use SUSE Manager to manage a diverse pool of Linux systems through their complete lifecycle, including deployment, configuration, auditing and software management. This paper highlights some of the benefits of SUSE Manager and describes how SUSE Manager stacks up against other open source management solutions.

Introducing SUSE Manager

SUSE Manager is a single tool that allows IT staff to provision, configure, manage, update and monitor all the Linux systems on the network in the same way, regardless of how and where they are deployed. From remote installation to cloud orchestration, automatic updates, custom configuration, performance monitoring, compliance and security audits, SUSE Manager 4 deftly handles the full lifecycle of registered Linux clients.

A clean and efficient web interface (or an equivalent command-line interface) provides a single entry point to all management actions, saving time and allowing a single admin to manage a greater share of network resources.

Discovering SUSE Manager

The diversity of Linux systems can add complexity to the management environment. Time spent managing a large, complex Linux estate with dissimilar tools adds significantly to costs. Your IT staff can be much more efficient with a single tool to automate, coordinate and monitor Linux operations.

SUSE Manager provides unified management of Linux systems regardless of whether the system is running on bare metal, a virtual machine (VM) or a container environment in a server room, private cloud or public cloud. SUSE Manager will even manage Linux systems running on IoT devices, including legacy devices where agents cannot be installed. The ZeroMQ protocol provides parallel communication with client systems, which scales much more efficiently than alternatives that talk to each client one at a time.

SUSE Manager is tightly integrated with SUSE Linux Enterprise, but not limited to it. Previous releases of SUSE Manager could already administer Red Hat, CentOS, OEL and other RPM-based systems. Version 4 adds to the list openSUSE and Ubuntu clients, with Debian coming soon. The SUSE Manager client-side agent is written in Python and is therefore portable. Accompanying APIs allow easy integration with third-party tools, as well as fast, risk-free deployments of complex services. SUSE Manager version 4 includes new tools that make it easier to install and configure both high availability clusters and SAP HANA (High-Performance Analytic Appliance) nodes.

SUSE Manager consolidates all the following management tasks into a single tool:

• Deployment – declare how many Linux systems you need and what you need them for, and SUSE Manager does the rest. Admins can build their own ISO images for bare metal, containers or VMs, using either AutoYaST or Kickstart, and installation can be in attended or fully unattended fashion. Integration with the Cobbler installation server allows efficient deployment using the Preboot Execution Environment (PXE).

• Software updates – SUSE Manager automates software updates for whole systems or individual packages. A powerful security system guarantees that every package is centrally authorized. You can schedule and execute multiple software updates at once, using one command.

• Configuration management – SUSE Manager supports file-based configuration, as well as state-based configuration management using Salt. The configuration and provisioning tools included with SUSE Manager enable you to define system prototypes and then adapt prototype definitions for easy automation and complex environments.
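To give a flavor of the state-based approach, a minimal Salt state file might look like the following (the package name and file paths are illustrative, not specific to SUSE Manager):

```yaml
# /srv/salt/webserver/init.sls - declare the desired state; Salt makes it so.
nginx:
  pkg.installed: []        # package must be present
  service.running:
    - enable: True         # start now and at boot
    - require:
        - pkg: nginx       # only after the package is installed

/etc/nginx/nginx.conf:
  file.managed:
    - source: salt://webserver/nginx.conf
    - watch_in:
        - service: nginx   # restart the service when this file changes
```

Applying the same state repeatedly is safe: Salt only changes what has drifted from the declared state.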

• Content Lifecycle Management (CLM) – The new CLM interface in SUSE Manager 4 (Figure 1) makes it easier and less expensive to manage software applications and services throughout the DevOps cycle. CLM lets you select and customize software channels, test them and promote them through the stages of the package lifecycle. Promoting an existing channel (rather than rebuilding it) as a package moves from QA to production saves time and adds convenience for IT staff.

• Security – SUSE Manager supports automatic, system-wide configuration and vulnerability scans, using either CVE lists or the OpenSCAP protocol.

• Performance and compliance monitoring – SUSE Manager creates a unified inventory of all systems within the organization, reporting (Figure 2) on any deviation from configuration or security requirements and eliminating “shadow IT” activities from uncontrolled or undocumented systems. An optional add-on for version 4 implements a monitoring and alerting system built on the next generation of Prometheus-based monitoring tools to gain insights and reduce downtime.

Figure 1: The CLM interface moves services from testing to production hosts with a few clicks.

Figure 2: Complete inventory and status of all systems, in one efficient interface.

An intuitive GUI offers a complete view of the network at a glance, including features (like “Formulas with Forms” – Figure 3) that make SUSE Manager the ideal tool for consistent, highly efficient management of hundreds or thousands of servers. Expert Linux and Unix admins who prefer to work at the command line will find a rich set of text-based commands. The SUSE command-line tool “spacecmd” makes it easy to integrate SUSE Manager functions into admin scripts and homegrown utilities, and SUSE Manager supports Nagios-compatible monitoring with Icinga.

A sensible security system enables you to distribute Linux administration work among the staff according to each employee’s skills and responsibilities. The main administrator of a SUSE Manager server can delegate operations to users at different levels, creating accounts for tasks such as key activation, images, configuration and software distribution.

Figure 3: Salt Formulas can be grouped and applied to single systems or whole groups.

The Open Source Edge

A fully open source development model improves code quality and prevents vendor lock-in. The upstream project for SUSE Manager, Uyuni, is 100 percent open source (Figure 4). The software is developed in the open, on GitHub, with frequent releases and solid, automated testing. Although Uyuni is not commercially supported by SUSE and does not receive the same rigorous QA and product lifecycle guarantees, it is not stripped down in any way. Unlike other vendors, whose commercial products heavily rely on extra features not available in the basic, open source version, SUSE keeps the same, full feature set available in both the community-based and subscription-based variants.
Adopting SUSE Manager, or migrating to it, does not mean that you must necessarily renounce your previous configuration management systems. For instance, SUSE Manager can act as an External Node Classifier (ENC, a source of node configuration data) for Puppet or Chef.

Figure 4: The SUSE Manager architecture – open standards and well-defined, open interfaces.

Salt on the Inside

SUSE supports the powerful Salt configuration management system. Salt is state-based. A client agent, known as a Salt “minion,” can find the Salt master without the need for additional configuration (Figure 5). If the client does not have an agent, Salt is capable of acting in “agentless” mode, sending Salt-equivalent commands through an SSH connection. The ability to operate in agent or agentless mode is an important benefit for a diverse network. SUSE Manager 4 also includes Salt-based functions for VM management that can manage hundreds of servers with near real-time efficiency.
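Agentless targets are listed in a roster file that salt-ssh reads before opening its SSH connections; a minimal sketch, with hypothetical hosts and users, looks like this:

```yaml
# /etc/salt/roster - agentless targets reached over SSH (hypothetical hosts)
legacy-web:
  host: 10.0.0.21
  user: admin
  sudo: True          # escalate for state application
appliance:
  host: 10.0.0.22
  user: root
```

With the roster in place, a command such as `salt-ssh 'legacy-*' state.apply` applies states to the matching hosts with no agent installed.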

SUSE Manager extends the automatic configuration capabilities of Salt through its support for action chains. An action chain makes it possible to use a single command to specify and then execute a complex task that consists of several steps. Examples of chainable actions include rebooting the system (even in between other configuration steps of the same system!), installing or updating software packages, and building system images.
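Against the documented XML-RPC API, an action chain like the one described – install packages, reboot in between the chain’s steps, then schedule the whole chain – could be sketched as follows. The host, credentials, system and package IDs are hypothetical, and the call signatures are simplified; consult the API reference for exact forms:

```python
import xmlrpc.client

def plan_patch_and_reboot(system_id, package_ids, label="patch-and-reboot"):
    """Return the ordered API calls for one chain: create it, install
    packages, reboot mid-chain, then schedule the chain as a single unit."""
    return [
        ("actionchain.createChain", (label,)),
        ("actionchain.addPackageInstall", (system_id, package_ids, label)),
        ("actionchain.addSystemReboot", (system_id, label)),
        ("actionchain.scheduleChain", (label,)),
    ]

def run_chain(url, user, password, calls):
    """Replay the planned calls against a live SUSE Manager server."""
    client = xmlrpc.client.ServerProxy(url)
    key = client.auth.login(user, password)
    try:
        for method, args in calls:
            ns, name = method.split(".")
            getattr(getattr(client, ns), name)(key, *args)
    finally:
        client.auth.logout(key)

calls = plan_patch_and_reboot(system_id=1000010000, package_ids=[3501, 3502])
# run_chain("https://suma.example.com/rpc/api", "admin", "secret", calls)
```

The point of the chain is the single scheduling step at the end: the whole multi-step task, reboot included, is submitted and executed as one unit.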

Figure 5: SUSE Manager communicates transparently with both agent-enabled and agentless systems.

Comparing the Alternatives

SUSE Manager is one of several open source tools that inhabit the Linux space. Although the benefits of each depend on the details of your network and the needs of your organization, the following analysis offers a quick look at how SUSE Manager compares with the competition.

SUSE Manager vs. Puppet

The Puppet cross-platform orchestration tool comes in an open source version, as well as in a commercially supported Enterprise edition (which, however, is not entirely open source).

Traditionally, Puppet requires an agent on each client, which adds complexity and additional effort when configuring and rolling out new systems. In the original Puppet working mode, changes are not implemented immediately, but only the next time the agent asks the server for an update – that is, after an interval configured by the administrator. Tools like Puppet Tasks and Puppet Bolt, which are included in the latest releases of Puppet Enterprise, overcome these limits, but they are still being integrated with the main product. The same applies to Puppet’s open source initiative for cloud orchestration with Kubernetes, called Lyra.

Puppet Enterprise’s native configuration directives require advanced knowledge of the custom Domain Specific Language (DSL). Support for the simpler, more widespread YAML language was added to Bolt in 2019. Many of the advanced Puppet features required for full functionality are found in additional modules, either from the official Puppet Forge website or from the larger Puppet Community. Interaction of modules from independent developers can add complication and lead to uncertainty or unpredictability in long-term support. Regardless of module issues, several advanced tasks still demand input from the command line, even in Puppet Enterprise.

Puppet’s support for managing bare metal, VMs, containers and cloud instances is also more complex than SUSE Manager’s, relying on the Razor component, which is included in the Enterprise version.

To summarize, Puppet offers less integration of crucial components, as well as a significantly steeper learning curve than SUSE Manager. Puppet users will spend more time configuring the system in order to achieve an equivalent level of functionality.

SUSE Manager vs. Chef

Chef is a cross-platform, open source tool that is also available in a commercial version called Chef Automate. Like Puppet, Chef requires an agent on each node, and the “recipes” used to define client configurations require developer-level knowledge of Ruby-based DSL.

By default, a Chef installation requires an agent on each managed node. The configuration also requires a separate, dedicated machine (called the Chef Workstation). The purpose of the workstation is to host all the configuration recipes, which are first tested on the workstation and then pushed to the central Chef server. A Chef Workstation can apply configuration updates directly over SSH, and the web interface of Chef Automate supports agentless compliance scans. However, the interaction of the Chef server, Chef Workstation and nodes can be difficult for beginners to understand and requires a lot of initial setup and preliminary study.

Many of the advanced features required for a comparison with SUSE Manager are only available in the commercial Chef Automate edition. For instance, separate tools for compliance management (InSpec) and application management (Habitat) are only integrated in the commercial version of Chef, whereas these capabilities are fully integrated into the basic, upstream version of SUSE Manager.

SUSE Manager vs. Ansible Automation

The Ansible management tool puts the emphasis on simplicity. Ansible is best suited for small and relatively simple infrastructures. Part of Ansible’s simplicity is that, unlike other similar products, it has no notion of state and does not track dependencies. The tool simply executes a series of tasks and stops when a task fails or encounters an error. When the administrator provides a playbook (a series of tasks to execute) to Ansible, Ansible compiles it and uses SSH to send the commands to the computers under its control, one at a time. In small organizations, the performance impact typically goes unnoticed, but as the size of the network increases, performance can degrade, and in some cases, commands or upgrades may fail.
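The playbook model described above looks like this in practice – a minimal, hypothetical example with an ordered task list that Ansible executes top to bottom over SSH:

```yaml
# site.yml - a minimal Ansible playbook (hosts and package names are
# illustrative). Tasks run in order; the run stops at the first failure.
- hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present
    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Each task is an imperative step rather than a declared end state, which is exactly the stateless design the comparison above refers to.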

In general, this stateless design makes it more difficult for Ansible to execute complex assignments and automation steps. Ansible’s playbooks are easier to create and implement than the DSL rules used with Puppet or Chef, but the YAML markup language used with Ansible is not as versatile. And, although Ansible is written in Python, it does not offer a Python API to support advanced customization and interaction with other products. Ansible also does not provide compliance management or a central directory of the systems it manages. The community-driven AWX open source project provides a web interface to Ansible, which is not as mature as that of SUSE Manager.

The commercial version of Ansible, called Ansible Automation, is composed of Ansible Engine and Ansible Tower. Ansible Engine, which is the component that directly acts on the managed systems, is a direct descendant of the open source version, with the same agentless/YAML-based architecture. Ansible Tower is a web-based management interface for Ansible Engine based on selected versions of AWX, hardened for long-term supportability and able to integrate with other services. Some features of Ansible Tower are not available under open source licenses.

SUSE Manager and SaltStack

The SaltStack orchestration and configuration tool comes in an open source edition, as well as through the SaltStack Enterprise commercial version. Like SUSE Manager, SaltStack uses the Salt configuration engine for managing installation and configuration services.

SUSE Manager offers many more features than the open source version of SaltStack. For instance, SUSE Manager supports both state definition and dynamic assignment of configuration via groups through the web interface, as well as offering auditing and compliance features that aren’t available in the open source SaltStack edition.

Like SUSE Manager, SaltStack Enterprise is an enterprise-level management tool based on the Salt configuration engine. In many ways, SaltStack Enterprise is the most similar to SUSE Manager of all the tools described in the paper, so the choice might depend on the details of your environment.

Users who prefer to operate from the command line might prefer SUSE Manager because of its sophisticated command-line interface. And of course, networks with a large investment in SUSE Linux will appreciate SUSE Manager’s tight integration with the SUSE environment. SUSE Manager is also a better choice if your organization depends on SAP HANA business services.

In other cases, the choice between SUSE Manager and SaltStack Enterprise might depend on cost, the size of your network, which web interface best matches your workflow and other factors. Keep in mind that SUSE Linux is an ideal platform for supporting SaltStack Enterprise. SaltStack Enterprise is meant to serve as a master of Salt masters, a role that doesn’t conflict with SUSE Manager, so the two tools can easily coexist.

If you are using SaltStack now and wish to continue to use it, the experts at SUSE can help you with a plan for how to integrate SaltStack with SUSE Manager and the SUSE Linux environment.

Conclusion

SUSE Manager provides a single, full-featured interface for managing and monitoring the whole lifecycle of Linux systems in a diverse network environment, either from an easy graphical interface (Figure 6) or from the command line. You can manage bare metal, virtual systems and container-based systems within the same convenient tool, attending to tasks such as deployment, provisioning, software updates, security auditing and configuration management. The flexible Salt configuration system allows convenient configuration definition and easy automation, and it is capable of acting in agent or agentless mode. In these ways, SUSE Manager greatly reduces the complexity and risks of dealing with highly dynamic Linux infrastructures and operations.

Unlike several of its competitors, SUSE offers the full feature set of SUSE Manager through its upstream, community-based development project Uyuni, thus preventing lock-in, simplifying evaluation and maximizing the benefits of open source development.

Strong support for customization and complex configurations, along with the ease and convenience of a single-source management solution, make SUSE Manager a powerful option for managing Linux systems in a diverse, enterprise environment.

For customers using SAP, dedicated “Formulas with Forms,” together with a new user interface and API, allow easy configuration of SAP HANA nodes, as well as simpler deployment of patch staging environments, without the need for custom scripting.

Talk to the experts at SUSE for more on how you can scale down overhead and scale up efficiency by adding SUSE Manager to your Linux network environment.

Figure 6: The complete status of the network and all the functions to manage its whole lifecycle, in the main panel of SUSE Manager.

Transforming Telematics with Kubernetes and Rancher

Wednesday, 11 March, 2020

“As we extend our leadership position in Europe, it’s never been more important to put containers at the heart of our growth strategy. The flexibility and scale that Rancher brings is the obvious solution for high-growth companies like ours.” – Thomas Ornell, IT Infrastructure Engineer, ABAX

Norwegian leader in fleet management, equipment and vehicle tracking, ABAX is one of Europe’s fastest-growing technology businesses. The company provides sophisticated fleet tracking, electronic mileage logs and equipment and vehicle control systems to more than 26,500 customers. ABAX manages over 250,000 active subscriptions connecting a variety of vehicles and industrial equipment.

The team recently signed an international deal with Hitachi to provide monitoring in Hitachi heavy machinery, helping owners access operational data. ABAX saves customers millions of dollars every year by preventing the loss and theft of valuable machinery and equipment through granular monitoring of corporate fleet performance.

Thomas Ornell, an IT infrastructure engineer, has been priming ABAX’s infrastructure for significant growth over the past couple of years. Ornell and his team have transformed the company’s innovation strategy, putting containers — and Rancher — at the heart of bold expansion plans. Read our case study to find out how, with Rancher, ABAX is reducing testing time by 75 percent and recovery time by 90 percent.

Looking at how to get the most out of your Kubernetes deployments? Download our White Paper, How to Build an Enterprise Kubernetes Strategy.
