SUSE and Rancher – Enabling our Customers to Innovate Everywhere

Tuesday, 1 December, 2020

In July, I announced SUSE’s intent to acquire Rancher Labs, and now that the acquisition is final, today we embark on a new journey with SUSE. I couldn’t be more excited about our future and what this means for our customers around the world.

Just as Rancher made computing everywhere a possibility for our customers, with SUSE, we will empower our customers to innovate everywhere. Together we will offer our customers possibilities that know no limitations from the data center to the cloud, to the edge and beyond. This is our purpose; this is our mission.

Only our combined company can make this a reality by combining SUSE’s market leadership in powering mission-critical business applications and systems with Rancher’s market-leading Kubernetes management platform. Our independent approach puts the “open” back into open source software, giving our customers the agility to tackle their innovation challenges today and the freedom to evolve their strategy and solutions for tomorrow.

Since we announced the acquisition, I have been humbled by the countless emails and calls that I have received from our customers, partners, members of the open source community and, of course, our Rancher team members. They remain just as passionate about Rancher and are even more excited about our future with SUSE. Our customers worldwide can expect the same innovation that they have come to love from Rancher, now paired with SUSE’s stability and rock-solid IT infrastructure. This will further strengthen the bond of trust that we have created with our customers.

Here’s how we will bring this vision to life:

Customers

SUSE and Rancher customers can expect their existing investments and product subscriptions to remain in full force and effect according to their terms. Additionally, the delivery of future versions of SUSE’s CaaS Platform will be based on the innovative capabilities provided by Rancher. We will work with CaaS customers to ensure a smooth migration. Going forward, we will double down on our strengths in the areas of security, compliance, governance and broad application certification. A combined SUSE and Rancher provides the only enterprise Kubernetes platform that manages all of the world’s Kubernetes distros, regardless of what underlying Linux distro they use and whether they run in public clouds, private data centers or edge computing environments.

Partners

SUSE One partners will benefit from SUSE’s expanded portfolio of Rancher solutions, which will help you close opportunities where your customers want to reimagine the way they manage and scale workloads consistently, monitor the health of their clusters and simplify the deployment and management of container applications.

I invite all Rancher partners to join SUSE’s One Partner Program. You can learn more during this webinar.

Open Source Community

I mentioned it earlier, but SUSE and Rancher remain fully committed to the open source community. We will continue contributing to upstream open source projects. This will not change. Together, as one company, we will continue providing true 100 percent open source solutions to global customers.

Don’t just take my word for it. See what our customers and partners are saying in Forbes.

Our future with SUSE is so bright – this is just the start of an incredible journey.

Join us on December 16 for Innovate Everywhere: How Kubernetes is Reshaping Enterprises. This webinar features Shannon Williams, Co-Founder, President and Chief Revenue Officer, Rancher Labs and Arun Chandrasekaran, Distinguished VP Analyst, Gartner.


Three Reasons Why Hosted Rancher Makes Your Life Easier

Thursday, 19 November, 2020

Today’s generation of makers, artists and creatives have reinforced the idea that great things can happen when you roll up your sleeves and try to learn something new and exciting. Kubernetes was like this only a couple of years ago: the mere act of installing the thing was a rewarding challenge. Kelsey Hightower’s Kubernetes the Hard Way became the Maker’s handbook for this artisan craft.

Fast forward to today and installing Kubernetes is no longer a noteworthy event. Cluster provisioning has become a commodity, and rightly so, as many engineers and software companies swarmed to address this need by building robust tooling. Today’s Maker has far more interesting problems to solve up the stack, and so they expect to be able to summon a Kubernetes cluster on demand whenever they need it. For this reason and others, we created the same kind of solution for Rancher, the multi-cluster Kubernetes management platform. If I can create a Kubernetes cluster in one click on any cloud provider, why not my Rancher control plane? Enter Hosted Rancher.

Hosted Rancher is a fully managed, cloud-based instance of Rancher server. You don’t need to maintain a separate Kubernetes cluster, install the Rancher application or deal with upgrades. You retain all the control and ownership of your downstream Kubernetes clusters just like the on-prem Rancher experience today. When you combine Hosted Rancher with any of the popular cloud-managed Kubernetes offerings such as GKE, EKS or AKS, you now have an almost zero-touch Kubernetes infrastructure. Hosted Rancher is ideal for organizations that are looking to expedite their time to value by focusing their time on application adoption and empowering developers to use these new tools. After all, if you don’t have any applications using Kubernetes, it won’t matter how well your platform is maintained.

If you haven’t considered Hosted Rancher yet, here are three reasons why it might benefit you and your organization:

Increased Business Continuity

Operating Rancher isn’t rocket science, but it does require some ongoing expertise to safely maintain, back up and especially upgrade without causing downtime. Our core engineering team lives and breathes this stuff (they built Rancher, after all), so why not leverage their talent as a failsafe partnership with your staff?

Reduced Costs

TCO (Total Cost of Ownership) is a bit of a buzzword, but it becomes a reality at the end of the fiscal year when you look at the actual spend to operate something. When you factor in the cost of cloud or on-premises infrastructure plus the staff expense to operate those servers and manage the Rancher application, self-hosting is quite likely much more expensive than our Hosted offering.

Increased Adoption

This benefit might be the most subtle, but I guarantee it is the most meaningful. Contrary to popular belief, the mission of Rancher Labs is not just to help people operate Rancher. Our mission is to help people operate and therefore realize the benefits of Kubernetes in their software development lifecycle.

This is the “interesting” part of the problem space for every company out there: “How do I harness the value of Kubernetes for my applications?” The sooner we can get past the table stakes concerns of implementing and operating Kubernetes and Rancher, the sooner we can focus on the paramount issue of Kubernetes adoption. Hosted Rancher simply removes one hurdle from the racetrack. With support from Rancher’s Customer Success team focusing on user adoption, your teams can accelerate their Kubernetes journey without compromising performance or resource efficiency.

Image 01

Next Steps

I hope I’ve provided some insight that will help your journey in the Kubernetes and cloud-native world. To learn more about Hosted Rancher, check out our technical guide or contact the Rancher team. Until next time!

Introducing Rancher on NetApp HCI: Hybrid Cloud Multicluster Kubernetes Management with Push-Button Ease

Tuesday, 17 November, 2020

If you’re like me and have been watching the odd purchasing trends due to the pandemic, you probably remember when all the hair clippers were sold out — and then flour and yeast. Most recently, you might have seen this headline: Tupperware profits and shares soar as more people are eating at home during the pandemic. Tupperware is finally having its day. But a Tupperware stacking strategy is probably not why you’re here. Don’t worry, this isn’t your grandma’s container strategy — no Tupperware stacking required. You’re probably here because, like most organizations today, you need to be able to quickly release and update applications when and where you want to.

Today we’re excited to announce a partnership between NetApp and Rancher to bring multicluster Kubernetes management on premises with NetApp® HCI. Now you can deploy Rancher with push-button ease from NetApp HCI’s management plane, the NetApp Hybrid Cloud Control manageability suite.

Why NetApp + Rancher?

It’s no secret that Kubernetes in the enterprise is becoming more mainstream. If your organization hasn’t already moved toward containers, it will soon. But this shift isn’t without growing pains.

IT faces challenges with multiple team-specific Kubernetes deployments, decentralized governance and a lack of consistency among inherited Kubernetes clusters. Now, with Kubernetes adoption on the upswing, IT is expected to handle the deployments, which can be time consuming for teams that are unfamiliar with Kubernetes. IT teams are also managing their stakeholders’ different technology stack preferences and requirements while focusing on scalability and stability in production.

On the other hand, DevOps teams want the latest modern development tooling. They need to maintain control and flexibility over their clusters on infrastructure that is on demand and hassle free. These teams are all over continuous integration and continuous deployment (CI/CD) and DevOps automation. Their primary concerns are around agility and time to value.

The partnership between NetApp and Rancher addresses the challenges of both IT and the DevOps teams that they support. NetApp HCI delivers solid performance at scale for production environments. Rancher delivers modern cloud-native tooling for DevOps. Together, they create the easiest way for IT to get going with Kubernetes, enabling centralized management of multiple clusters, both new and existing. The combination of the two technologies delivers a true hybrid cloud Kubernetes orchestration layer on a modern DevOps cloud-native platform.

How We Integrated Rancher into NetApp HCI

We integrated Rancher directly into the NetApp HCI UI for a seamless experience. NetApp Hybrid Cloud Control, the management plane that sits on top of NetApp HCI’s highly scalable private cloud technology, is where you go to add a node or upgrade your firmware. We’ve added a button there so you can deploy Rancher directly from Hybrid Cloud Control.

Image 01
Image 02

With push-button ease, you’ll have the Rancher management cluster running on VMware (NetApp HCI is a VMware-based appliance). Your hybrid cloud and multicloud Kubernetes management plane is ready to go.

Feature | Applicability | Benefit
Deployment from Hybrid Cloud Control | Rancher management cluster | Fastest way to get IT going with supporting DevOps-ready Kubernetes
Lifecycle management from Hybrid Cloud Control | Rancher management cluster | Push-button updates for Rancher server and supporting infrastructure
Node template | User clusters deployed from Rancher | Simplifies creation of user clusters deployed to NetApp HCI
NetApp Trident in Rancher catalog | User clusters deployed from Rancher | Simplifies persistent volumes from NetApp HCI storage nodes for user clusters
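
To make the last row concrete: once Trident is installed from the catalog, a user cluster can claim persistent storage from NetApp HCI with an ordinary PersistentVolumeClaim. A minimal sketch, assuming a Trident-backed StorageClass named netapp-hci (the actual class name depends on how your Trident backend is configured):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-volume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: netapp-hci   # hypothetical Trident-backed class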

Rancher, as open source, is free to deploy and use, but Rancher enterprise support is available if you need it. Try out Rancher on NetApp HCI at no additional cost; think of it as an indefinite trial period. If you want support later, you can purchase it from NetApp. NetApp provides joint support with Rancher, so you can file support tickets for Rancher directly with NetApp.

A Win-Win for IT Operations and DevOps

With Rancher on NetApp HCI, both IT operations and DevOps teams benefit. Your IT operations teams can centrally provision Kubernetes services while maintaining control and visibility of all clusters, resources, and security. The provisioned services can then be used by your DevOps teams to efficiently build, deploy, and manage full-featured containerized applications. In the end, IT gets what it needs, DevOps gets what it needs, and your organization attains the key benefits of a successful Kubernetes strategy.

Learn More

For more information about NetApp HCI and Rancher, visit Simplify multicluster Kubernetes management with Rancher on NetApp HCI.

Monitor Distributed Microservices with AppDynamics and Rancher

Friday, 6 November, 2020
Discover what’s new in Rancher 2.5

Kubernetes is increasingly becoming a uniform standard for computing – at the edge, in the core and in the cloud. At NTS, we recognize this trend and have been systematically building up competencies for this core technology since 2018. As a technically oriented business, we regularly validate different Kubernetes platforms, and we share the view of many analysts (e.g., Forrester and Gartner, including the Gartner Hype Cycle reports) that Rancher Labs ranks among the leading players in this sector. In fact, five of our employees are Rancher certified through the Rancher Academy, which helps us maintain a close and sustainable partnership and provide the best possible customer support, true to our premise: “Relax, we care.”

Application Performance Monitoring with AppDynamics

Kubernetes is the ideal foundation for building platforms and operating a modern infrastructure. But often, Kubernetes alone is not sufficient. Above all, you need to understand the application and its requirements – and that’s where our partnership with Rancher comes in.

The conversion to a container-based landscape carries risk that can be minimized with comprehensive monitoring – monitoring that covers not only the infrastructure, such as vCenter, servers, storage or load balancers, but also the business process.

To serve this sector, we have developed competencies in the area of Application Performance Monitoring (APM) and partnered with AppDynamics. Once again, we agree with analysts such as Gartner that AppDynamics is a leader in this space. We’ve achieved AppDynamics Pioneer partner status in a short amount of time thanks to our certified engineers.

Why Monitor Kubernetes with AppDynamics?

In distributed environments, it’s easy to lose track of containers (and they don’t even need to be microservices). Maintaining an overview is not a simple task, but it is absolutely necessary.

We’re seeing a huge proliferation of containers. Previously there were a few “large rocks” – the virtual machines (VMs) hosting the monoliths of conventional applications. In containerized environments, the fundamentals change as well. In a monolith, “process calls” happen within the same VM, inside the same application. With containers, they happen across networks, via APIs or service meshes.

A properly instrumented APM solution is absolutely necessary for operating critical applications that contribute directly to a company’s added value and business process.

To address this need, NTS created an integration between AppDynamics and Rancher Labs. Our goal for the integration was to make it easy to maintain that overview and to minimize the potential risk for the user and customer. In this blog post, we’ll describe the integration and show you how it works.

Integration Description

AppDynamics supports “full stack” monitoring from the application down to the infrastructure. Rancher provides a modern platform for Kubernetes “everywhere” (edge, core, cloud). To simplify monitoring of Kubernetes clusters, we created a Rancher chart, based on Helm (a package manager for Kubernetes), that is available to all Rancher users in the App Catalog.

Image 01

Now we’ll show how simple it is to monitor Rancher Kubernetes clusters with AppDynamics.

Prerequisites

  • Rancher management server (Rancher)
  • Kubernetes cluster with version >= 1.13
    • On premises (e.g. based on VMware vSphere)
    • or in the public cloud (e.g. based on Microsoft Azure AKS)
  • AppDynamics controller/account (free trial available)

Deploying AppDynamics Cluster Agents

The AppDynamics cluster agent for Kubernetes is a Docker image that is maintained by AppDynamics. The deployment of the cluster agents is largely simplified and automated by our Rancher chart. Therefore, virtually any number of Kubernetes clusters can be prepared for monitoring with AppDynamics at the touch of a button. This is an essential advantage in case of distributed applications.

We conducted our deployment in an NTS Rancher test environment. To begin, we log into the Rancher Web interface:

Image 02

Next, we choose Apps in the top navigation bar:

Image 03

Then we click Launch:

Image 04

Now, Rancher shows us the available applications. We choose appdynamics-cluster-agent:

Image 05

Next, we deploy the AppDynamics cluster agent:

Image 06

Next, choose the target Kubernetes cluster – in our case, it’s “netapp-trident.”

Image 07

Then specify the details of the AppDynamics controller:

Image 08

You can also set agent parameters via the Rancher chart.
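
For reference, the values you can set here correspond to chart parameters covering the controller connection and monitoring scope. An illustrative sketch only; the exact keys depend on the chart version, so treat these names as hypothetical and check the chart’s own documentation:

# Hypothetical parameter names for illustration
controller.url: "https://<your-account>.saas.appdynamics.com"
controller.account: "<account-name>"
controller.accessKey: "<access-key>"
agent.nsToMonitor: "default"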

Image 09

Finally, click Launch

Image 10

and Rancher will install the AppDynamics cluster agent in the target clusters:

Image 11

After a few minutes, we’ll see a successful deployment:

Image 12

Instrumentation of the AppDynamics Cluster Agent

After a few minutes, the deployed cluster agent shows up in the AppDynamics controller. To find it, select Admin > AppDynamics Agents > Cluster Agents:

Image 13

Now we “instrument” this agent (“to instrument” is the term for monitoring elements in AppD).
Choose your cluster and click Configure:

Image 14

Next, select the namespaces to monitor:

Image 15

And click Ok.

Now we’ve successfully instrumented the cluster agent.

After a few minutes (monitoring cycles), the cluster can be monitored in AppDynamics under Servers > Cluster:

Image 16

Kubernetes Monitoring with AppDynamics

The following screenshots show the monitoring features of AppDynamics.

Image 17
Dashboard

Image 18
Pods

Image 19
Inventory

Image 20
Events

Conclusion

In this blog post, we’ve described the integration that NTS developed between Rancher and AppDynamics. Both partners have deployed this integration, and there are plans to develop it further. We’ve shown you how the integration works and described how AppDynamics, which is ideally suited to monitoring Kubernetes clusters, works so well with Rancher, which is great for managing your Kubernetes deployments. NTS offers expertise and know-how in the areas of Kubernetes and monitoring, and we’re excited about the potential of these platforms working together to make Kubernetes easier to monitor and manage.

Discover what’s new in Rancher 2.5

Rancher 2.5 Keeps Customers Free from Kubernetes Lock-in

Wednesday, 21 October, 2020
Discover what’s new in Rancher 2.5

Rancher Labs has launched its much-anticipated Rancher version 2.5 into the cloud-native space, and we at LSD couldn’t be more excited. Before highlighting some of the new features, here is some context as to how we think Rancher is innovating.

Kubernetes has become one of the most important technologies adopted by companies in their quest to modernize. While the container orchestrator, a fundamental piece of the cloud-native journey, has many advantages, it can also be frustratingly complex and challenging to architect, build, manage and maintain. One key consideration is the deployment architecture: for cost, redundancy and latency reasons, many companies want a hybrid cloud solution, typically spanning on-premises infrastructure and multiple clouds.

All of the cloud providers have created Kubernetes-based solutions — such as EKS on AWS, AKS on Azure and GKE on Google Cloud. Businesses can now adopt Kubernetes at a much faster rate and with less effort than having their technical teams build Kubernetes internally. This sounds like a great solution — except for perhaps the reasons above: cost, redundancy and latency. Furthermore, we have noticed a trend of no longer being cloud native, but AWS native or Azure native. The tools and capabilities are vastly different from cloud to cloud, and they tend to create their own kind of lock-in.

The cloud has opened so many possibilities, and the ability to add a credit card and within minutes start testing your idea is fantastic. You don’t have to submit a request to IT or wait weeks for simple infrastructure. This has led to the rise of shadow IT, with many organizations bypassing the standards set out to protect the business.

We believe the new Rancher 2.5 release addresses both the needs for standards and security across a hybrid environment while enabling efficiency in just getting the job done.

Rancher has also released K3s, a highly available certified Kubernetes distribution designed for the edge. It supports production workloads in unattended, resource-constrained remote locations or inside IoT appliances.

Enter Rancher 2.5: Manage Kubernetes at Scale

Rancher enables organizations to manage Kubernetes at scale, whether on premises or in the cloud, through a single pane of glass, providing a consistent experience regardless of where your operations are happening. It also enables you to import existing Kubernetes clusters and manage them centrally. Rancher has taken Kubernetes and beefed it up with the components required to make it a fantastic enterprise-grade container platform. These components include push-button platform upgrades, SDLC pipeline tooling, monitoring and logging, visualizing Kubernetes resources, service mesh, central authorization, RBAC and much more.
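
Importing an existing cluster, for instance, comes down to running a single Rancher-generated manifest against it. An illustrative sketch with a placeholder URL; the real registration URL and token are generated by your Rancher server when you choose to import a cluster:

$ kubectl apply -f https://rancher.example.com/v3/import/<generated-token>.yaml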

As good as that sounds, what is the value in unifying everything under a platform like Rancher? Right off the bat there are three obvious benefits:

  • Consistently deliver a high level of reliability on any infrastructure
  • Improve DevOps efficiency with standardized automation
  • Ensure enforcement of security policies on any infrastructure

Essentially, it means you don’t have to manage each Kubernetes cluster independently. You have a central point of visibility across all clusters and an easier time with security policies across the different platforms.

Get More Value out of Amazon EKS

With the release of Rancher 2.5, enhanced support for the EKS platform means that you can now derive even more value from your existing EKS clusters, including the following features:

  • Enhanced EKS cluster import, keeping your existing cluster intact. Simply import it and let Rancher start managing your clusters, enabling all the benefits of Rancher.
  • New enhanced configuration of the underlying infrastructure for Rancher 2.5, making it much simpler to manage.
  • A new Rancher cluster-level UX that lets you explore all available Kubernetes resources
  • From an observability perspective, Rancher 2.5 comes with enhanced support for Prometheus (for monitoring) and Fluentd/Fluentbit (for logging)
  • Istio is a service mesh that lets you connect, secure, control and observe services. It controls the flow of traffic and API calls between services and adds a layer of security through managed authentication and encryption. Rancher now fully supports Istio.
  • A constant risk highlighted with containers is security. Rancher 2.5 now includes CIS Scanning of container images. It also includes an OPA Gatekeeper (open policy agent) to describe and enforce policies. Every organization has policies; some are essential to meet governance and legal requirements, while others help ensure adherence to best practices and institutional conventions. Gatekeeper lets you automate policy enforcement to ensure consistency and allows your developers to operate independently without having to worry about compliance.
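
To make the Gatekeeper piece concrete, here is a minimal sketch of a constraint that requires every namespace to carry a team label. It assumes the K8sRequiredLabels ConstraintTemplate from the upstream Gatekeeper library is already installed, and the names are illustrative:

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-team        # hypothetical constraint name
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]     # apply the policy to namespaces
  parameters:
    labels: ["team"]             # every namespace must carry a team label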

Conclusion

In our opinion, Rancher has done a spectacular job with the new additions in 2.5, addressing critical areas that matter to customers. They have also shown that you absolutely can get the best of both worlds: EKS and fully supported Rancher features.

LSD was founded in 2001 and wants to inspire the world by embracing open philosophy and technology, empowering people to be their authentic best selves, all while having fun. Specializing in containers and cloud native, the company aims to digitally accelerate clients through a framework called the LSDTrip. To learn more about the LSDTrip, visit us or email us.

Discover what’s new in Rancher 2.5

Gain Better Visibility into Kubernetes Cost Allocation

Wednesday, 30 September, 2020
Join The Master Class: Kubernetes Cost Allocation and Visibility, Tuesday, October 13 at 2pm ET

The Complexity of Measuring Kubernetes Costs

Adopting Kubernetes and service-based architecture can bring many benefits to organizations – teams move faster and applications scale more easily. However, visibility into cloud costs is made more complicated with this transition. This is because applications and their resource needs are often dynamic, and teams share core resources without transparent prices attached to workloads. Additionally, organizations that realize the full benefit of Kubernetes often run resources on disparate machine types and even multiple cloud providers. In this blog post, we’ll look at best practices and different approaches for implementing cost monitoring in your organization for a showback/chargeback program, and how to empower users to act on this information. We’ll also look at Kubecost, which provides an open source approach for ensuring consistent and accurate visibility across all Kubernetes workloads.

Image 01
A common Kubernetes setup with team workloads spread across Kubernetes nodes and clusters

Let’s look further into best practices for accurately allocating and monitoring Kubernetes workload costs as well as spend on related managed services.

Cost Allocation

Accurately allocating resource costs is the first critical step to creating great cost visibility and achieving high cost efficiency within a Kubernetes environment.

To correctly do this, you need to allocate costs at the workload level, by individual container. Once workload allocation is complete, costs can be correctly assigned to teams, departments or even individual developers by aggregating different collections of workloads. One framework for allocating cost at the workload level is as follows:

Image 02

Let’s break this down a bit.

The average amount of resources consumed is measured by the Kubernetes scheduler or by the amount provisioned from a cloud provider, depending on the particular resource being measured. We recommend measuring memory and CPU allocation by the maximum of request and usage. Using this methodology reflects the amount of resources reserved by the Kubernetes scheduler itself. On the other hand, resources like load balancers and persistent volumes are strictly based on the amount provisioned from a provider.

The Kubernetes API can directly measure the period of time a resource is consumed. This is determined by the amount of time spent in a Running state for resources like memory, CPU and GPU. To have numbers that are accurate enough for cloud chargeback, we recommend that teams reconcile this data with the amount of time a particular cloud resource, such as a node, was provisioned by a cloud provider. More on this in the section below.

Resource prices are determined by observing the cost of each particular resource in your environment. For example, the price of a CPU hour on a m5.xlarge spot instance in us-east-1 AWS zone will be different than the on-demand price for that same instance.
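
Putting the three factors together, the framework boils down to, per resource:

cost = (average amount consumed) x (hours running) x (price per resource-hour)

As a quick worked example with made-up prices: a container that requests 2 CPUs but uses only 1 is allocated 2 CPUs (the maximum of request and usage, since the scheduler reserves them). If it runs for 10 hours on nodes priced at $0.02 per CPU-hour, its CPU cost is 2 x 10 x $0.02 = $0.40. Repeat for memory, GPU and provisioned resources like persistent volumes, then sum to get the workload’s total cost.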

Once costs are appropriately allocated across individual workloads with this framework, they can then be easily aggregated by any Kubernetes concept, such as namespace, label, annotation or controller.

Kubernetes Cost Monitoring

With costs allocated by Kubernetes concept (pod or controller), you can begin to accurately map spend to any internal business concept, such as team, product, department or cost center. It’s common practice for organizations to segment team workloads by Kubernetes namespace; others use concepts like Kubernetes labels or annotations to identify which team a workload belongs to.
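
For example, tagging a namespace with ownership metadata is a one-liner; the namespace and label values here are hypothetical:

$ kubectl label namespace checkout team=payments cost-center=cc-1234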

Another key element of cost monitoring across different applications and teams is determining who should pay for idle or slack capacity – unused cluster resources that are still being billed to your company. Often these are either billed to a central infrastructure cost center or distributed proportionally to application teams. Assigning these costs to the team(s) responsible for provisioning decisions has been shown to produce positive results by aligning incentives toward an efficiently sized cluster.

Reconciling to Cloud Bill

Kubernetes provides a wealth of real-time data. This can be used to give developers access to immediate cost metrics. While this real-time data is often precise, it may not perfectly correspond to a cloud provider’s billing data. For example, when determining the hourly rate of an AWS spot node, users need to wait on either the Spot data feed or the Cost & Usage Report to determine exact market rates. For billing and chargeback purposes, you should reconcile data to your actual bill.

Image 03

Get Better Visibility & Governance with Kubecost

We’ve looked at how you can directly observe data to calculate the cost of Kubernetes workloads. Another option is to leverage Kubecost, a cost and capacity management solution built on open source that provides visibility across Kubernetes environments. Kubecost provides cost visibility and insights across Kubernetes workloads as well as the related managed services they consume, such as S3 or RDS. This product collects real-time data from Kubernetes and also reconciles with your cloud billing data to reflect the actual prices you have paid.

Image 04
A Kubecost screenshot showing Kubernetes cost by namespace

With a solution like Kubecost in place, you can empower application engineers to make informed real-time decisions and start to implement immediate and long-term practices to optimize and govern cloud spend. This includes adopting cost optimization insights without risking performance, implementing Kubernetes budgets and alerts, showback/chargeback programs or even cost-based automation.

The Kubecost community version is available for free with all of the features described here – and you can find the Kubecost Helm chart in the Rancher App Catalog. Rancher gives you broad visibility and control; Kubecost gives you direct insight into spend and how to optimize it. Together they provide a complete cost management story for teams using Kubernetes. To learn more about how to gain visibility into your Kubernetes costs, join our Master Class on Kubernetes Cost Allocation and Visibility, Tuesday, October 13, at 2pm ET.
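
Outside the Rancher catalog, a common way to try the community version is via Helm. A sketch based on the upstream chart; the repository URL and chart name may change between releases, so check the Kubecost docs:

$ helm repo add kubecost https://kubecost.github.io/cost-analyzer/
$ helm install kubecost kubecost/cost-analyzer --namespace kubecost --create-namespace
$ kubectl port-forward --namespace kubecost deployment/kubecost-cost-analyzer 9090
# then open http://localhost:9090 to browse cost by namespace, label, etc.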

Join The Master Class: Kubernetes Cost Allocation and Visibility, Tuesday, October 13 at 2pm ET

Connecting the World’s Travel Trade with Kubernetes

Monday, 21 September, 2020

“We needed the flexibility to run any technologies side-by-side and a way to run clusters in multiple clouds, and a variety of environments – depending on customer needs. Rancher was the only realistic choice.” Juan Luis Sanfélix Loshuertos, IT Operations Manager – Compute & Storage, Hotelbeds

When you book a hotel online or with a travel agent, you’ve probably got a wish list that has to do with the size of the room, view, location and amenities. It’s likely you’re not thinking about the technology in the background that makes it all happen. That’s where Hotelbeds comes in. The business-to-business travel technology company operates a hotel distribution platform that travel agents, tour operators, airlines and loyalty programs use to book hotel rooms.

As the world’s leading “bedbank”, the Spanish company provides more than 180,000 hotel properties worldwide with access to distribution channels that significantly increase occupation rates. They give hoteliers access to a network of more than 60,000 hard-to-access B2B travel buyers such as tour operators, retail travel agents, airline websites and loyalty programs.

Hotelbeds attributes much of its success to a focus on technology innovation. One of the main roles of its technology teams is to experiment and validate technologies that make the business more competitive. With this innovation strategy and growing use of Kubernetes, the company is healthy, despite challenges in the hospitality industry.

The company’s initial infrastructure was an on-premise, VM-based environment. Moving to a cloud-native, microservices-centric environment was a goal, and by 2017 they began this transition. They started working with Amazon Web Services (AWS) and by 2018, had created a global cloud distribution, handling large local workloads all over the world. The technology transformation continued as they started moving applications into Docker containers to drive management and cost efficiencies.

Moving to Kubernetes and Finding a Management Tool

Then, with the groundswell behind Kubernetes, the Hotelbeds team knew that moving to the feature-rich platform was the next logical step. With that came the need for an orchestration solution that could support a mix of technologies both on premises and in the cloud. With many data centers and a proliferating cloud presence, the company also needed multi-cluster support. After exhaustive market analysis, Rancher emerged as the clear choice, with its ability to support a multi-cluster, multi-cloud and hybrid cloud/on-premises architecture.

After further testing with Rancher in non-critical data apps, Hotelbeds moved into production in 2020, running Kubernetes clusters both on-premise and in Google Cloud Platform and AWS. With Rancher, they reduced cloud migration time by 90 percent and reduced cluster deployment time by 80 percent.

Read our case study to hear how Rancher gives Hotelbeds the flexibility to manage deployments across AWS regions while scaling on-premise clusters 90 percent faster at 35 percent less cost.

Is Kubernetes Delivering on its Promise?

Monday, 14 September, 2020
Don’t miss the Computing on the Edge with Kubernetes conference on October 21.

A headline in a recent Register article jumped off my screen with the claim: “No, Kubernetes doesn’t make applications portable, say analysts. Good luck avoiding lock-in, too.” Well, that certainly got my attention…for a couple of reasons. First, the emphasis on an absolute claim was quite literally shouting at me. In my experience, absolutes are rare occurrences in software engineering. Second, it was nearly impossible to imagine what evidence this conclusion was based on.

The article summarizes a report by Gartner analysts Marco Meinardi, Richard Watson and Alan Waite. I can understand if skepticism drove their conclusions – let’s acknowledge that software vendors often enjoy developing software that creates lock-in. So, if this is a reaction to past experience, I get that.

But when I look carefully at their arguments, I find the stance less compelling. Let’s look at some of their arguments in more detail:

  1. “Using Kubernetes to minimize provider lock-in is an attractive idea, but such abstraction layer simply becomes an alternative point of lock-in.”

I can certainly agree with this point. The net effect of Kubernetes is that it does create a dependence on the abstraction layer itself. This is inherent to the nature of abstractions, or interfaces in general. The conclusion the authors draw in their next claim is what I think is problematic:

  2. “Although abstraction layers may be attractive for portability, they do not surface completely identical functionality from the underlying services — they often mask or distort them.”

This statement misses the point. It’s probably true that abstraction layers do not achieve “completely identical functionality,” but this is not in question.

An abstraction layer’s virtue is not that it is 100 percent accurate or perfect, but that it sufficiently handles the majority of cases identically. I would put this claim in context that perfection is not a requirement for Kubernetes to be beneficial.

Even if Kubernetes provided portability for only 80 percent of use cases, that would still be far better than the status quo (building with complete dependence on a traditional cloud provider), where you have very little portability. And especially for net-new projects, where you absorb the cost of building for either IaaS or Kubernetes, why not take the one that offers you 80 percent more upside? This claim fails to understand Kubernetes’ value proposition.

  3. “The more specific to a provider a compute instance is, the less likely it is to be portable in any way. For example, using EKS on [AWS] Fargate is not CNCF-certified and arguably not even standard Kubernetes. The same is true for virtual nodes on Azure as implemented by ACIs.”

I suspect this claim is used to support the previous one: that the abstraction layer is not consistent and therefore fails at providing its intended value. The problem here is that the examples are the results of specific approaches to Kubernetes and are not inherent to Kubernetes itself. That is, nothing in Kubernetes’ design produces incompatible implementations. Instead, these are the result of specific vendors making implementation choices that break compatibility. Innovation sometimes requires trying something new, but that doesn’t mean it is the only option. In fact, there are 32 other conforming Kubernetes distributions to choose from that won’t have compatibility issues. The authors selecting a handful of the most extreme examples is therefore not an accurate reflection of the CNCF ecosystem.

Like I said earlier, I can certainly sympathize with there being many examples of “new platforms” that claim to provide freedom but, in fact, do not. Yet we can’t let experiences taint our ability to try new things in technology. Kubernetes isn’t perfect. It’s not the solution to all engineering problems, nor is it a tool everyone should use. But in my career as a Site Reliability Engineer and a consultant, I’ve seen first-hand real improvements over previous technologies that offer measurable value to engineering teams and the businesses that depend on them.

Avoid Kubernetes Lock-In With Rancher

At Rancher Labs, we base our business model on the idea of avoiding lock-in – and we really preach this doctrine. You might find this statement curious because I just pointed out that vendors often do the opposite. So, the obvious question is: why is Rancher any different? Well, I can answer that, but I suspect you’ll get a better answer by investigating that yourself. Talk to our customers, look at our software – which is all open source and non-proprietary. I suspect you’ll find that Rancher is in business because we continue to provide a valuable experience, not because a customer has no other option. And organizations like the CNCF keep us accountable by measuring both our Kubernetes distributions (K3s and RKE) against a rigorous conformance test. But most importantly, our customers keep us accountable, because they elect every year to keep us in business or not. It’s not the easiest business to be in, but it certainly is the most rewarding.

Don’t miss the Computing on the Edge with Kubernetes conference on October 21.

Integrate AWS Services into Rancher Workloads with TriggerMesh

Wednesday, 9 September, 2020
Don’t miss the Computing on the Edge with Kubernetes conference on October 21.

Many businesses use cloud services on AWS and also run workloads on Kubernetes and Knative. Today, it’s difficult to integrate events from AWS to workloads on a Rancher cluster, preventing you from taking full advantage of your data and applications. To trigger a workload on Rancher when events happen in your AWS service, you need an event source that can consume AWS events and send them to your Rancher workload.

TriggerMesh Sources for Amazon Web Services (SAWS) are event sources for AWS services. Now available in the Rancher catalog, SAWS allows you to quickly and easily consume events from your AWS services and send them to your workloads running in your Rancher clusters.

SAWS currently provides event sources for a number of Amazon Web Services, including Amazon SQS, which we use in the demonstration below.

TriggerMesh SAWS is open source software that you can use in any Kubernetes cluster with Knative installed. In this blog post, we’ll walk through installing SAWS in your Rancher cluster and demonstrate how to consume Amazon SQS events in your Knative workload.

Getting Started

First we’ll install SAWS in a Rancher cluster; then we’ll run a quick demonstration of consuming Amazon SQS events in a Knative workload.

SAWS Installation

  1. TriggerMesh SAWS requires the Knative serving component. Follow the Knative documentation to install the Knative serving component in your Kubernetes cluster. Optionally, you may also install the Knative eventing component for the complete Knative experience.

     We created a cluster from the GKE provider, with Kong as the ingress layer. The Kong proxy’s LoadBalancer service will be assigned an external IP, which is necessary to access the service over the internet. You can look it up with:

     kubectl --namespace kong get service kong-proxy

  2. With Knative serving installed, search for aws-event-sources in the Rancher applications catalog and install the latest available version from the helm3-library. You can install the chart in the Default namespace.

    Image 01

Remember to update the Knative Domain and Knative URL Scheme parameters during the chart installation. For example, in our demo cluster we used Magic DNS (xip.io) for configuring the DNS in the Knative serving installation step, so we specified 34.121.24.183.xip.io and http as the values of Knative Domain and Knative URL Scheme, respectively.

That’s it! Your cluster is now fully equipped with all the components to consume events from your AWS services.

Demonstration

To demonstrate the TriggerMesh SAWS package functionality, we will set up an Amazon SQS queue and visualize the queue events in a service running on our cluster. You’ll need to have access to the SQS service on AWS to create the queue. A specific role is not required. However, make sure you have all the permissions on the queue: see details here.
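
If you need to grant those permissions explicitly, an IAM policy along these lines, scoped to the demo queue’s ARN, is one way to do it. This is a sketch, not a hardened policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sqs:*",
      "Resource": "arn:aws:sqs:us-east-1:043455440429:SAWSQueue"
    }
  ]
}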

Step 1: Create SQS Queue

Image 02

Log in to the Amazon management console and create a queue.
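
If you prefer the CLI, creating the queue is a single command; the queue name matches the ARN used later in this post, so adjust it (and the region) for your account:

$ aws sqs create-queue --queue-name SAWSQueue --region us-east-1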

Step 2: Create AWS Credentials Secret

Create a secret named awscreds containing your AWS credentials:

$ kubectl -n default create secret generic awscreds \
  --from-literal=aws_access_key_id=AKIAIOSFODNN7EXAMPLE \
  --from-literal=aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

Update the values of aws_access_key_id and aws_secret_access_key in the above command.

Step 3: Create the AWSSQSSource Resource

Create the AWSSQSSource resource that will bring the events that occur on the SQS queue to the cluster using the following snippet. Remember to update the arn field in the snippet with that of your queue.

$ kubectl -n default create -f - << EOF
apiVersion: sources.triggermesh.io/v1alpha1
kind: AWSSQSSource
metadata:
  name: my-queue
spec:
  arn: arn:aws:sqs:us-east-1:043455440429:SAWSQueue
  credentials:
    accessKeyID:
      valueFromSecret:
        name: awscreds
        key: aws_access_key_id
    secretAccessKey:
      valueFromSecret:
        name: awscreds
        key: aws_secret_access_key
  sink:
    ref:
      apiVersion: v1
      kind: Service
      name: sockeye
EOF

Check the status of the resource using:

$ kubectl -n default get awssqssources.sources.triggermesh.io
NAME       READY   REASON   SINK                                         AGE
my-queue   True             http://sockeye.default.svc.cluster.local/   3m19s

Step 4: Create Sockeye Service

Sockeye is a WebSocket-based CloudEvents viewer. Our my-queue resource created above is set up to send the cloud events to a service named sockeye as configured in the sink section. Create the sockeye service using the following snippet:

$ kubectl -n default create -f - << EOF
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: sockeye
spec:
  template:
    spec:
      containers:
        - image: docker.io/n3wscott/sockeye:v0.5.0@sha256:64c22fe8688a6bb2b44854a07b0a2e1ad021cd7ec52a377a6b135afed5e9f5d2
EOF

Next, get the URL of the sockeye service and load it in the web browser.

$ kubectl -n default get ksvc
NAME      URL                                           LATESTCREATED   LATESTREADY     READY   REASON
sockeye   http://sockeye.default.34.121.24.183.xip.io   sockeye-fs6d6   sockeye-fs6d6   True

Step 5: Send Messages to the Queue

We now have all the components set up. All we need to do is to send messages to the SQS queue.
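
You can send a test message from the SQS console (shown below) or from the CLI. A sketch, with the queue URL inferred from the ARN used earlier:

$ aws sqs send-message \
    --queue-url https://sqs.us-east-1.amazonaws.com/043455440429/SAWSQueue \
    --message-body "Hello from SQS"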

Image 03

The cloud events should appear in the sockeye events viewer.

Image 04

Conclusion

As you can see, using TriggerMesh Sources for AWS makes it easy to consume cloud events that occur in AWS services. Our example uses Sockeye for demonstration purposes: you can replace Sockeye with any of your Kubernetes workloads that would benefit from consuming and processing events from these popular AWS services.

The TriggerMesh SAWS package supports a number of AWS services. Refer to the README for each component to learn more. You can find sample configurations here.

Don’t miss the Computing on the Edge with Kubernetes conference on October 21.