Integrate AWS Services into Rancher Workloads with TriggerMesh

Wednesday, September 9, 2020
Don’t miss the Computing on the Edge with Kubernetes conference on October 21.

Many businesses use cloud services on AWS and also run workloads on Kubernetes and Knative. Today, it’s difficult to integrate events from AWS services into workloads on a Rancher cluster, preventing you from taking full advantage of your data and applications. To trigger a workload on Rancher when events happen in an AWS service, you need an event source that can consume AWS events and send them to your Rancher workload.

TriggerMesh Sources for Amazon Web Services (SAWS) are event sources for AWS services. Now available in the Rancher catalog, SAWS allows you to quickly and easily consume events from your AWS services and send them to your workloads running in your Rancher clusters.

SAWS currently provides event sources for a number of Amazon Web Services, including Amazon SQS, which we use in this post’s demonstration.

TriggerMesh SAWS is open source software that you can use in any Kubernetes cluster with Knative installed. In this blog post, we’ll walk through installing SAWS in your Rancher cluster and demonstrate how to consume Amazon SQS events in your Knative workload.

Getting Started

First, we’ll install SAWS in your Rancher cluster; then we’ll run a quick demonstration of consuming Amazon SQS events in a Knative workload.

SAWS Installation

  1. TriggerMesh SAWS requires the Knative serving component. Follow the Knative documentation to install Knative serving in your Kubernetes cluster. Optionally, you may also install the Knative eventing component for the complete Knative experience. We created our cluster with the GKE provider and used Kong as the networking layer; a LoadBalancer service is assigned an external IP, which is necessary to access the service over the internet. We looked up that IP with:

    kubectl --namespace kong get service kong-proxy

  2. With Knative serving installed, search for aws-event-sources in the Rancher applications catalog and install the latest available version from the helm3-library. You can install the chart in the Default namespace.


Remember to update the Knative Domain and Knative URL Scheme parameters during the chart installation. For example, in our demo cluster we used Magic DNS (xip.io) for configuring the DNS in the Knative serving installation step, so we specified 34.121.24.183.xip.io and http as the values of Knative Domain and Knative URL Scheme, respectively.
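
If you ever need to point an existing Knative installation at a different domain, one approach (a sketch, assuming a default Knative serving setup; substitute the external IP of your own load balancer) is to patch the config-domain ConfigMap directly:

$ kubectl patch configmap/config-domain \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"34.121.24.183.xip.io":""}}'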

That’s it! Your cluster is now fully equipped with all the components to consume events from your AWS services.

Demonstration

To demonstrate the TriggerMesh SAWS package functionality, we will set up an Amazon SQS queue and visualize the queue events in a service running on our cluster. You’ll need to have access to the SQS service on AWS to create the queue. A specific role is not required. However, make sure you have all the permissions on the queue: see details here.
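
The linked documentation spells out the required permissions. As a rough sketch (not an official policy; the queue ARN is the example used later in this post), an IAM policy that lets the event source receive and delete messages might look like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sqs:GetQueueUrl",
        "sqs:GetQueueAttributes",
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage"
      ],
      "Resource": "arn:aws:sqs:us-east-1:043455440429:SAWSQueue"
    }
  ]
}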

Step 1: Create SQS Queue


Log in to the Amazon management console and create a queue.
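
If you prefer the AWS CLI to the console, creating the queue is a single command (assuming the CLI is already configured with your credentials and region):

$ aws sqs create-queue --queue-name SAWSQueue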

Step 2: Create AWS Credentials Secret

Create a secret named awscreds containing your AWS credentials:

$ kubectl -n default create secret generic awscreds \
  --from-literal=aws_access_key_id=AKIAIOSFODNN7EXAMPLE \
  --from-literal=aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

Update the values of aws_access_key_id and aws_secret_access_key in the above command.

Step 3: Create the AWSSQSSource Resource

Create the AWSSQSSource resource that will bring the events that occur on the SQS queue to the cluster using the following snippet. Remember to update the arn field in the snippet with that of your queue.

$ kubectl -n default create -f - << EOF
apiVersion: sources.triggermesh.io/v1alpha1
kind: AWSSQSSource
metadata:
  name: my-queue
spec:
  arn: arn:aws:sqs:us-east-1:043455440429:SAWSQueue
  credentials:
    accessKeyID:
      valueFromSecret:
        name: awscreds
        key: aws_access_key_id
    secretAccessKey:
      valueFromSecret:
        name: awscreds
        key: aws_secret_access_key
  sink:
    ref:
      apiVersion: v1
      kind: Service
      name: sockeye
EOF

Check the status of the resource using:

$ kubectl -n default get awssqssources.sources.triggermesh.io
NAME       READY   REASON   SINK                                        AGE
my-queue   True             http://sockeye.default.svc.cluster.local/   3m19s

Step 4: Create Sockeye Service

Sockeye is a WebSocket-based CloudEvents viewer. Our my-queue resource created above is set up to send the cloud events to a service named sockeye as configured in the sink section. Create the sockeye service using the following snippet:

$ kubectl -n default create -f - << EOF
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: sockeye
spec:
  template:
    spec:
      containers:
        - image: docker.io/n3wscott/sockeye:v0.5.0@sha256:64c22fe8688a6bb2b44854a07b0a2e1ad021cd7ec52a377a6b135afed5e9f5d2
EOF

Next, get the URL of the sockeye service and load it in the web browser.

$ kubectl -n default get ksvc
NAME      URL                                           LATESTCREATED   LATESTREADY     READY   REASON
sockeye   http://sockeye.default.34.121.24.183.xip.io   sockeye-fs6d6   sockeye-fs6d6   True

Step 5: Send Messages to the Queue

We now have all the components set up. All we need to do is send messages to the SQS queue.
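
You can send messages from the SQS console or, if you prefer, from the AWS CLI (the queue URL below is derived from the example queue ARN used earlier):

$ aws sqs send-message \
  --queue-url https://sqs.us-east-1.amazonaws.com/043455440429/SAWSQueue \
  --message-body "Hello from SQS"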


The cloud events should appear in the sockeye events viewer.


Conclusion

As you can see, using TriggerMesh Sources for AWS makes it easy to consume cloud events that occur in AWS services. Our example uses Sockeye for demonstration purposes: you can replace Sockeye with any of your Kubernetes workloads that would benefit from consuming and processing events from these popular AWS services.

The TriggerMesh SAWS package supports a number of AWS services. Refer to the README for each component to learn more. You can find sample configurations here.

Don’t miss the Computing on the Edge with Kubernetes conference on October 21.

Driving Kubernetes Adoption in Finance with Rancher

Tuesday, September 8, 2020
See how Inventx reduced deployment time by 75% with Rancher

“Our portfolio is geared toward creating long-term digital business models in the financial industry. Inventx is the enabler for continuous business transformation. Rancher brings the flexibility and openness that helps us achieve true transformation in the most agile and efficient way.” Domenic Mayer, Senior Cloud Engineer and Solution Architect, Inventx

Supporting Microservices and Cloud Native

In Switzerland, Inventx is the IT partner of choice for financial and insurance service providers. Its full-stack DevOps platform, ix.AgileFactory, allows financial organizations to move to a modern, cloud-native, microservices-centric infrastructure. The platform decouples core applications from the central infrastructure, allowing organizations to manage and innovate on applications safely.

Like most companies in the financial space, Inventx has a secure, on-premise architecture that, until four years ago, comprised a mix of VM-based IBM architecture and Linux (Red Hat) servers. Due to obvious customer sensitivities, security and compliance have always been major priorities.

Adopting Containers, Kubernetes and Rancher

Containers became a focus at the company in 2016, when Inventx developers started building and shipping images in Docker. It was clear that adopting a container strategy would be a much more lightweight, portable way to develop, shift and deploy applications. When Kubernetes adoption hastened in 2017, the team looked at management methodologies. They knew a “monocluster” model wouldn’t work; enabling digital transformation meant providing dedicated clusters for each customer, comprising development, testing and production environments. Crucially, the team wanted a unified cluster management platform that would provide simplified, multi-cluster management via a single pane of glass.

Gaining Efficiencies with Rancher

Inventx added Rancher to its existing infrastructure to provide multi-cluster, hybrid support. In Rancher, Inventx was able to manage any number of Kubernetes clusters in one place, via one pane of glass. For the first time, the company could consolidate management processes, monitor performance, update, patch and manage the entire Kubernetes estate in a unified way. Rancher also allowed the team to work with any mix of technologies, in the same platform.

Today, Rancher underpins ix.AgileFactory. With the financial sector under pressure to be more agile, efficient and secure, Rancher answers those requirements by allowing organizations to manage their entire Kubernetes estate via a single interface.

With Rancher, Inventx has reduced deployment time by 75 percent and increased deployment frequency by 100 percent. Read our case study to find out how they achieved these and other efficiency gains.

See how Inventx reduced deployment time by 75% with Rancher

Deploying Citrix ADC with Service Mesh on Rancher

Wednesday, August 26, 2020

Introduction

As a network of microservices changes and grows, the interactions between them can be difficult to manage and understand. That’s why it’s handy to have a service mesh as a separate infrastructure layer. A service mesh is an approach to running microservices at scale. It handles traffic routing and termination, monitoring and tracing, service delivery, load balancing, circuit breaking and mutual authentication. A service mesh takes these components and makes them part of the underlying infrastructure layer, eliminating the need for developers to write specific code to enable these capabilities.

Istio is a popular open source service mesh that is built into the Rancher Kubernetes management platform. This integration lets developers focus on their business logic and leave the rest to Kubernetes and Istio.

Citrix ADC is a comprehensive application delivery and load balancing solution for monolithic and microservices-based applications. Its advanced traffic management capabilities enhance application performance and provide comprehensive security. Citrix ADC integrates with Istio as an ingress gateway to the service mesh environment and as a sidecar proxy to control inter-microservice communication. This integration allows you to tightly secure and optimize traffic into and within your microservice-based application environment. Citrix ADC’s Ingress deployment is configured as a load balancer for your Kubernetes services. As a sidecar proxy, Citrix ADC handles service-to-service communication and makes this communication reliable, secure, observable and manageable.

In this blog post, we’ll discuss the integration of Citrix ADC as an Istio ingress gateway and sidecar proxy in an Istio service mesh deployed on Rancher. We’ll introduce new catalog templates for deploying Citrix ADC as an ingress gateway and as a sidecar proxy injector.

The Rancher Apps Catalog provides a UI platform for DevOps engineers to deploy and run applications with out-of-the-box capabilities like monitoring, auditing and logging. You can find the Citrix Istio ingress gateway and sidecar injector in the Rancher catalog.

Figure 1 Rancher Catalog for Citrix ADC in Istio Service Mesh

Citrix ADC as an Ingress Gateway for Istio

An Istio ingress gateway acts as an entry point for incoming traffic and secures and controls access to the service mesh. It also performs routing and load balancing. Citrix ADC CPX, MPX or VPX can be deployed as an ingress gateway to control the ingress traffic to Istio service mesh.

Citrix ADC MPX or VPX as Ingress Gateway

Figure 2 Citrix ADC VPX/MPX as Ingress Gateway in Rancher Catalog

When Citrix ADC MPX/VPX is deployed as an Ingress Gateway device, the Istio-adaptor container runs inside a pod managed by the Ingress Gateway deployment.

Citrix ADC CPX as an Istio Ingress Gateway

When Citrix ADC CPX is deployed as Ingress Gateway, both CPX and Istio-adaptor run as containers inside the Ingress Gateway Pod.

Figure 3 Citrix ADC CPX as ingress gateway in Rancher Catalog

Citrix Istio Adaptor

Citrix Istio Adaptor is open source software written in Go. Its main job is to automatically configure the Citrix ADC deployed in the Istio service mesh. Components such as Istio Pilot, Citadel and Mixer make up the Istio control plane. Pilot is the control plane component that provides service discovery to proxies in the mesh. It’s essentially a gRPC xDS server and is responsible for configuring proxies at runtime.

Istio-adaptor is a gRPC client to the xDS server and receives xDS resources such as clusters, listeners, routes and endpoints from the xDS server over a secure gRPC channel. After receiving these resources, the Istio-adaptor converts them to the equivalent Citrix ADC configuration blocks and configures the associated Citrix ADC using RESTful NITRO calls. This blog talks about Citrix Istio Adaptor in detail.

In the next section, we’ll set up Citrix ADC as a gateway and sidecar using the Rancher catalog. The ingress gateway is a load balancer operating at the edge of the mesh that receives incoming connections, while the sidecar proxy enforces monitoring, security and traffic policies for the service it runs alongside.

Rancher Catalog for Citrix ADC as an Istio Ingress Gateway

Prerequisites

In order to follow these steps, you will need the following:

  • A Rancher deployment (check out the quick start guide to get Rancher up and running)
  • A Kubernetes cluster, managed by Rancher (follow this guide to either import or provision a cluster)
  • Enable Istio.
  • Ensure that your cluster has Kubernetes version 1.14.0 or later and the admissionregistration.k8s.io/v1beta1 API is enabled.
  • Create a Kubernetes secret for the Citrix ADC user name and password. Choose Resources → Secrets in the navigation bar.

Steps:

  1. Log in to Rancher.
  2. Create a namespace named citrix-system.
  3. Go to the cluster, then the project, and navigate to Apps → Launch.
  4. Search for citrix in the search box.
  5. Click the citrix-adc-istio-ingress-gateway catalog entry.

    Figure 4 Citrix ADC as Ingress Gateway for Istio in Rancher Catalog

  6. Click Launch to deploy Citrix ADC as the ingress gateway.

a. For Citrix ADC CPX, set the following environment variables with the specified values:

i. Citrix ADC CPX – true
ii. ingressGateway EULA – true
iii. istioAdaptor.tag – 1.2.0

b. For Citrix ADC MPX/VPX, set the following environment variables:

i. istioAdaptor version: 1.2.0
ii. netscalerUrl: specify the Citrix ADC IP in URL format (e.g. https://192.168.1.10)
iii. vServer IP: specify an unused IP address for the Citrix ADC virtual server

  7. Once you have updated the values of the required parameters, click Launch. Navigate to Apps and verify that citrix-ingressgateway is running.

    Figure 5 Service citrix-ingressgateway Running in Rancher Catalog

Points to remember:

  • If you want to expose multiple applications, set the exposeMutipleApps variable to true and provide the following details for each application:

    secretVolumes.name:
    secretVolumes.secretName:
    secretVolumes.mountPath:

  • If you want to expose non-HTTP services (such as TCP-based apps), set the exposeNonHttpService variable to true and provide the following port details:

    tcpPort.name:
    tcpPort.nodePort: (applicable for Citrix ADC CPX)
    tcpPort.Port:
    tcpPort.targetPort:

Citrix ADC as a Sidecar for Istio

Citrix ADC CPX can act as a sidecar proxy to an application container in Istio. You can inject the Citrix ADC CPX manually or automatically using the Istio sidecar injector. Automatic sidecar injection requires resources including a Kubernetes mutating webhook admission controller and a service. Using the Rancher catalog, you can create resources required for automatically deploying Citrix ADC CPX as a sidecar proxy.
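
Under the hood, automatic injection relies on a mutating webhook scoped to labeled namespaces. The following is a rough sketch only, not the chart’s actual manifest; the webhook name, path and CA bundle are illustrative:

apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: cpx-sidecar-injector
webhooks:
  - name: cpx-sidecar-injector.citrix-system.svc
    clientConfig:
      service:
        name: cpx-sidecar-injector
        namespace: citrix-system
        path: "/inject"        # illustrative endpoint
      caBundle: ""             # populated from the certificate generated for the injector
    rules:
      - operations: ["CREATE"]
        apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
    namespaceSelector:
      matchLabels:
        cpx-injection: enabled # only labeled namespaces get the sidecar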

Figure 6 Citrix ADC CPX as sidecar in Rancher Catalog

Deploying Citrix ADC as a Sidecar for Istio using Rancher Catalog

Prerequisites

The following prerequisites are required for deploying Citrix ADC as a sidecar in an application pod:

  • Ensure that Istio is enabled.
  • Ensure that your cluster has Kubernetes version 1.14.0 or later and the admissionregistration.k8s.io/v1beta1 API is enabled.
  • Create resources required for automatic sidecar injection by performing the following steps:
  1. Download the webhook-create-signed-cert.sh script.

    curl -L https://raw.githubusercontent.com/citrix/citrix-istio-adaptor/master/deployment/webhook-create-signed-cert.sh > webhook-create-signed-cert.sh

  2. Change permissions of the script to executable mode.

    chmod +x webhook-create-signed-cert.sh

  3. Create a signed certificate and key pair, and store them in a Kubernetes secret.

    ./webhook-create-signed-cert.sh \
      --service cpx-sidecar-injector \
      --secret cpx-sidecar-injector-certs \
      --namespace citrix-system

Important Note:

Do not enable Istio’s automatic sidecar injection on the application namespace.

To automatically deploy Citrix ADC CPX as a sidecar in an application pod, the application namespace must be labeled with cpx-injection=enabled:

kubectl label namespace <application_namespace> cpx-injection=enabled

Steps:

  1. Log in to Rancher.
  2. Create a namespace named citrix-system.
  3. Go to the cluster, then the project, and navigate to Apps → Launch.
  4. Search for citrix in the search box.
  5. Click the citrix-cpx-istio-sidecar-injector catalog entry.

    Figure 7 Citrix ADC CPX as sidecar in Rancher Catalog

  6. Set the environment variables:

a. istioAdaptor version: 1.2.0
b. cpxProxy.EULA: YES

  7. Update the values of the required parameters and click Launch.
  8. Navigate to Apps and verify that cpx-sidecar-injector is running.

    Figure 8 Service cpx-sidecar-injector Running in Rancher Catalog

Accessing a Sample Application using Citrix ADC

You can find an example of deploying the sample bookinfo application here.

  • If Citrix ADC VPX/MPX is deployed as the ingress gateway, the service will be accessible via the vServer IP (specified in step 6b of the Citrix ADC VPX as ingress gateway deployment).
  • If Citrix ADC CPX is deployed as the ingress gateway, the service will be accessible via the ingress IP and port. Follow this link for more information.

Important Note: For deploying Citrix ADC VPX or MPX as an ingress gateway, you must establish connectivity between the Citrix ADC VPX or MPX and the cluster nodes. This connectivity can be established by configuring routes on the Citrix ADC as mentioned here or by deploying Citrix Node Controller.

Note: All catalog images were taken from Rancher version v2.4.4, which supports Istio version 1.4.10 and Istio-adaptor version 1.2.0. Learn more about the architecture here.

Conclusion

In this article, we have shown you how to configure ingress rules using the Citrix ADC Istio ingress gateway and how to deploy a sidecar proxy using the Citrix ADC CPX sidecar injector. The gateway allows external traffic to enter the service mesh and manages traffic for edge services. As a sidecar, Citrix ADC sits alongside each service and transparently routes all service-to-service traffic.

Rancher’s catalog of Helm charts makes it easy to deploy and configure applications.

Learn how to run a multi-cluster service mesh in Rancher: watch our master class video.

Creating Memorable Gaming Experiences with Kubernetes

Thursday, August 13, 2020

“Every technology decision is designed to make the experiences of players more satisfying. It’s not just the quality of the games, it’s how we create the responsive, rewarding and engaging services that surround them. Rancher is essential to this.” Donald Havas, Senior Cloud Services Manager, Ubisoft

If you’re a gamer, you probably know how immersed you can get in your favorite game. Or if you’re the parent or partner of a gamer, you probably know what it’s like to try to get the attention of someone who is in “gaming mode.” Creating worlds and enriching players’ lives is in Ubisoft’s DNA. The French video game pioneer is the name behind some of the biggest gaming titles in history, including Assassin’s Creed, Far Cry, the Tom Clancy series and Just Dance.

Moving to Cloud-Native with Kubernetes

Boosting innovation and driving technical agility are the company’s primary goals – and Kubernetes is becoming essential to this strategy. With its sights set on global growth, Ubisoft has put Rancher Labs at the heart of its Ubisoft Kubernetes Service (UKS). With Kubernetes and Rancher, the technology team at Ubisoft is accelerating the journey toward a cloud-native, microservices-centric future. The aim? To gain a competitive edge through innovation and, critically, to drive serious management efficiencies.

Ubisoft’s infrastructure team was an early adopter of containers and standardized on Kubernetes in 2017 after seeing the momentum around it. Given the freedom to innovate with Kubernetes, Ubisoft’s teams of developers were galvanized. New container deployments began to spring up all over the business. While not required to work in Kubernetes, developers started to test it, creating new services and applications quickly.

When the inevitable happened – cluster sprawl – due to pockets of innovation and development across the company, they realized they needed a formal orchestration strategy. They wanted a solution that sat close to the upstream Kubernetes community. In March 2018, after a successful PoC, they started using Rancher.

Centralizing Cluster Management with Rancher

The team’s vision of a central Kubernetes provisioning platform to automate many basic processes came to life in the Ubisoft Kubernetes Service. This self-service Kubernetes platform, based on Rancher, gives thousands of developers the ability to spin up new Kubernetes clusters in an instant in a controllable, centrally managed way.

Ubisoft’s Kubernetes clusters host its internal video platform, shopping toolbox and a host of game administration tools that help teams plan discount and loyalty programs. Ubisoft’s gaming support tool – a user-facing service that uses machine learning to help support specialists answer players – is another critical service.

Read our case study to hear how Ubisoft has reduced cluster deployment time by 80 percent – allowing them to spend more time innovating and creating satisfying player experiences.


Monitor and Optimize Your Rancher Environment with Datadog

Tuesday, August 4, 2020
Read our free white paper: How to Build a Kubernetes Strategy

Many organizations use Kubernetes to quickly ship new features and improve the reliability of their services. Rancher enables teams to reduce the operational overhead of managing their cloud-native workloads — but getting continuous visibility into these environments can be challenging.

In this post, we’ll explore how you can quickly start monitoring orchestrated workloads with Rancher’s built-in support for Prometheus and Grafana. Then we’ll show you how integrating Datadog with Rancher can help you get even deeper visibility into these ephemeral environments with rich visualizations, algorithmic alerting, and other features.

The Challenges of Kubernetes Monitoring

Kubernetes clusters are inherently complex and dynamic. Containers spin up and down at a blistering rate: in a survey of more than 1.5 billion containers across thousands of organizations, Datadog found that orchestrated containers churned twice as fast (one day) as unorchestrated containers (two days).

In such fast-paced environments, monitoring your applications and infrastructure is more important than ever. Rancher includes baked-in support for open source monitoring tools like Prometheus and Grafana, allowing you to track basic health and resource metrics from your Kubernetes clusters.

Prometheus gathers metrics from Kubernetes clusters at preset intervals. While Prometheus has no visualization options, you can use Grafana’s built-in dashboards to display an overview of health and resource metrics, such as the CPU usage of your pods.

However, these open source solutions aren’t designed to keep tabs on large, dynamic Kubernetes clusters. Further, Prometheus requires users to learn PromQL, a specialized query language, to analyze and aggregate their data.
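
For example, charting per-pod CPU usage means writing a query along these lines (a sketch; exact metric names depend on how your cluster exposes cAdvisor metrics):

sum(rate(container_cpu_usage_seconds_total{namespace="default"}[5m])) by (pod)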

While Prometheus and Grafana can provide some level of insight into your clusters, they don’t allow you to see the full picture. For example, you’ll need to connect to one of Rancher’s supported logging solutions to access logs from your environment. And to troubleshoot code-level issues, you’ll also need to deploy an application performance monitoring solution.

Ultimately, to fully visualize your orchestrated clusters, you need to monitor all of these sources of data — metrics, traces and logs — in one platform. By delivering detailed, actionable data to teams across your organization, a comprehensive monitoring solution can help reduce mean time to detection and resolution (MTTD and MTTR).

The Datadog Agent: Auto-Discover and Autoscale Services

To get ongoing visibility into every layer of your Rancher stack, you need a monitoring solution specifically designed to track cloud-native environments in real time. The Datadog Agent is lightweight, open source software that gathers metrics, traces and logs from your containers and hosts, and forwards them to your account for visualization, analysis and alerting.

Because Kubernetes deployments are in a constant state of flux, it’s impossible to manually track which workloads are running on which nodes, or where your containers are running. To that end, the Datadog Agent uses Autodiscovery to detect when containers spin up or down, and automatically starts collecting data from your containers and the services they’re running, like etcd and Consul.
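
Autodiscovery is driven by pod annotations. As a minimal sketch using Datadog’s standard annotation scheme (with a Redis container purely as the example):

apiVersion: v1
kind: Pod
metadata:
  name: redis
  annotations:
    ad.datadoghq.com/redis.check_names: '["redisdb"]'
    ad.datadoghq.com/redis.init_configs: '[{}]'
    ad.datadoghq.com/redis.instances: '[{"host": "%%host%%", "port": "6379"}]'
spec:
  containers:
    - name: redis          # must match the container name used in the annotations
      image: redis:6.0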

Kubernetes’ built-in autoscaling functionality can help improve the reliability of your services by automatically scaling workloads based on demand (such as a spike in CPU usage). Autoscaling also helps manage costs by rightsizing your infrastructure.

Datadog extends this feature by enabling you to autoscale Kubernetes workloads based on any metric you’re already monitoring in Datadog — including custom metrics. This can be extremely useful for scaling your cluster in response to fluctuations in demand, particularly during business-critical periods like Black Friday. Let’s say that your organization is a retailer with a bustling online presence. When sales are taking off, your Kubernetes workloads can autoscale based on a custom metric that serves as an indicator of activity, such as the number of checkouts, to ensure a seamless shopping experience. For more details about autoscaling Kubernetes workloads with Datadog, check out our blog post.
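
Concretely, once the Datadog Cluster Agent is registered as an external metrics provider, a HorizontalPodAutoscaler can target any Datadog metric. A sketch for the checkout example, where shop.checkouts.count and checkout-service are hypothetical names:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: checkout-autoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: checkout-service             # hypothetical deployment
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: External
      external:
        metric:
          name: shop.checkouts.count   # hypothetical custom Datadog metric
        target:
          type: AverageValue
          averageValue: "50"           # target checkouts per replica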

Kubernetes-Specific Monitoring Features

Regardless of whether your environment is multi-cloud, multi-cluster or both, Datadog’s highly specialized features can help you monitor your containerized workloads in real time. Datadog automatically enriches your monitoring data with tags imported from Kubernetes, Docker, cloud services and other technologies. Tags provide continuous visibility into any layer of your environment, even as individual containers start, stop or move across hosts. For example, you can search for all containers that share a common tag (e.g., the name of the service they’re running) and then use another tag (e.g., availability zone) to break down their resource usage across different regions.

Datadog collects more than 120 Kubernetes metrics that help you track everything from Control Plane health to pod-level CPU limits. All of this monitoring data can be accessed directly in the app — no query language needed.

Datadog provides several features to help you explore and visualize data from your container infrastructure. The Container Map provides a bird’s-eye view of your Kubernetes environment, and allows you to filter and group containers by any combination of tags, like docker_image, host and kube_deployment.

You can also color-code containers based on the real-time value of any resource metric, such as System CPU or RSS Memory. This allows you to quickly spot resource contention issues at a glance — for instance, if a node is consuming more CPU than others.


The Live Container view displays process-level system metrics — graphed at two-second granularity — from every container in your infrastructure. Because metrics like CPU utilization can be extremely volatile, this high level of granularity ensures that important spikes don’t get lost in the noise.


Both the Container Map and the Live Container view allow you to filter and sort containers using any combination of tags, such as image name or cloud provider. For more detail, you can also click to inspect the processes running on any individual container — and view all the metrics, logs and traces collected from that container, with a few clicks. This can help you debug issues and determine if you need to adjust your provisioning of resources.

With Datadog Network Performance Monitoring (NPM), you can track the real-time flow of network traffic across your Kubernetes deployments and quickly debug issues. By nature, Docker containers are constrained only by the amount of CPU and memory available. As a result, a single container can saturate the network and bring the entire system down.

Datadog can help you easily isolate the containers that are consuming the most network throughput and identify possible root causes by navigating to correlated logs or request traces from that service.

Datadog + Rancher Go Together

Datadog works in tandem with Rancher, so you can use Rancher to manage diverse, orchestrated environments and deploy Datadog to monitor, troubleshoot and automatically scale them in real time.

Additionally, Watchdog, Datadog’s algorithmic monitoring engine, uncovers and alerts team members to performance anomalies (such as latency spikes or high error rates). This allows teams to get ahead of potential issues (such as an abnormally high rate of container restarts) before they escalate.

We’ve shown you how Datadog can help you get comprehensive visibility into your Rancher environment. With Datadog, engineers can use APM to identify bottlenecks in individual requests and pinpoint code-level issues, collect and analyze logs from every container across your infrastructure and more. By unifying metrics, logs and traces in one platform, Datadog removes the need to switch contexts or tools. Thus, your teams can speed up their troubleshooting workflows and leverage the full potential of Rancher as it manages vast, dynamic container fleets.

With Rancher’s Datadog Helm chart, your teams can start monitoring their Kubernetes environments in minutes — with minimal onboarding. If you’re not currently a Datadog customer, sign up today for a free 14-day trial.
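
If you work outside the Rancher UI, the equivalent CLI flow is roughly the following (a sketch; values beyond datadog.apiKey vary by chart version):

$ helm repo add datadog https://helm.datadoghq.com
$ helm repo update
$ helm install datadog datadog/datadog \
  --set datadog.apiKey=<YOUR_DATADOG_API_KEY>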

Read our free white paper: How to Build a Kubernetes Strategy

Global Energy Leader Transforms Technology and Culture with Kubernetes

Wednesday, July 29, 2020

“When I look at the most advanced digital organizations such as Google, Netflix, Amazon and Facebook, they’re running service-orientated architectures, with estates of microservices, completely decoupled from one another but managed centrally. We aspire to reach this point and Rancher is an important part of the journey.” Anthony Andrades, Head of Global Infrastructure Strategy, Schneider Electric

When your company is born in the first Industrial Revolution, how do you stay relevant in the digital age? For Schneider Electric, the answer is continuous innovation, driven by its heritage in the electricity market. Founded in the 1880s, Schneider Electric is a leading provider of energy and automation digital solutions for efficiency and sustainability. Believing access to energy and digital services is a basic human right, Schneider Electric creates integrated solutions for homes, commercial and municipal buildings, data centers and industrial infrastructure. By putting efficiency and sustainability at the heart of the portfolio, the company helps consumers and businesses make the most of their energy resources.

A Digital Transformation Turning Point

Today, Schneider Electric is at a turning point – embarking on a significant transformation by modernizing its legacy systems to create a cluster of cloud-native microservices to become more agile and innovative. The company started its move to the cloud in 2013, with a couple of business-driven projects running on Amazon Web Services (AWS). By 2016, their AWS footprint was global, and an infrastructure migration was underway. At the same time, they were experimenting with Kubernetes but faced some challenges with access control.

In 2018, the company carried out a successful proof of concept (PoC) with Rancher Labs and security partner Aqua. This resulted in deploying Rancher on top of Kubernetes to provide access control, identity management and globalized performance metrics. A year later, Schneider chose Rancher to underpin its container management platform, deploying it on 20 nodes.

The company has been undergoing technical evolution for 25 years, in which they built and deployed thousands of separate services and applications running on Windows Server or Red Hat. Now these services must be re-engineered or rebuilt before migrating to the cloud – a process that they expect to take five years. In 2019, the team started the painstaking process of analyzing the entire estate of applications, categorizing each one according to the most appropriate and efficient way to modernize and migrate.

Successful Migration to Rancher

Over the last year, the team has successfully migrated four applications, which are now managed in 40 nodes with Rancher. With Rancher’s intuitive interface, the team can quickly check the status of clusters without having to manually check performance, workload status or resource usage. The team appreciates that they don’t need to worry about the underlying infrastructure.

Read our case study to hear more about Schneider Electric’s technical and cultural transformation and why their relationship with Rancher is critical for success.

The Power of Innovation

Tuesday, July 21, 2020
Learn more about Rancher’s innovative approach to Kubernetes management

CEO and Co-Founder Sheng Liang has a saying about how we approach open source at Rancher Labs: “Let a thousand flowers bloom.” When we set out to build something, we don’t know if it will turn into a successful product, spark another product idea or be a good idea that doesn’t get traction. The joy is in the journey.

Take K3s, our lightweight Kubernetes distribution. We didn’t start out developing K3s – it grew organically out of a project called Rio. K3s was inspired by the insight and passion of our developers who saw a need for a Kubernetes distribution for IoT and the edge. These forward thinkers were right. According to Gartner, 75 percent of enterprise data will be created and processed outside of data centers and cloud deployments by 2025.

CRN Recognizes Rancher Labs for Innovation

K3s has influenced other innovative products in the open source community, such as k3sup and k3d. We’re proud that K3s has gained a loyal following and that it continues to win accolades. CRN recently included K3s in its roundup of The 10 Coolest Open-Source Software Tools of 2020 (So Far) due to its small binary (under 40 MB), which reduces the dependencies and steps needed to install, run and auto-update a production Kubernetes cluster. CRN recognized K3s and other tools on the list, such as Helm and Envoy, for leading the industry toward greater adoption of agile development and DevOps methods, AI, cloud-native architecture and advanced security.

No doubt the buzz around K3s influenced CRN to include Rancher Labs in another list: the 2020 Emerging Vendors. Honoring “rising technology suppliers that exhibit great promise in shaping the future success of the channel with their dedication to innovation,” CRN’s list provides a resource for solution providers in search of the latest technologies.

At Rancher Labs, partnerships are crucial to our business. We rely on more than 200 solution providers worldwide who deliver customer offerings around our products, including Rancher, our enterprise platform for Kubernetes management. These partners are also innovators, inspired to help their customers do things better.

Being an innovator means knowing you can’t go it alone – and our ecosystem partners are critical to that. Rancher includes a global catalog of applications that our users can easily integrate into their environments to maximize productivity and reliability.

At the end of the day, we want to make our users’ lives better by making Kubernetes clusters easier to deploy and manage so that they can focus on the business at hand. For most organizations, that’s what innovation is: finding better ways to solve problems.

The idea of letting a thousand flowers bloom means being able to evolve our technologies and take the best parts of the things we’ve developed. Sometimes it means admitting that a technology you love isn’t going to make it. It’s having insight into what technology will best solve customer problems and driving adoption of that technology. For Rancher Labs, embracing open source means embracing the best of innovation – no matter where it comes from. That’s why we’re 100 percent open source with no vendor lock-in.

As we look to the future of Rancher Labs, one thing is sure. Innovation will continue to be a driving force in everything we do. We’ll continue to plant the seeds of innovation and watch them grow.

Learn more about Rancher’s innovative approach to Kubernetes management

SUSE Enters Into Definitive Agreement to Acquire Rancher Labs

Wednesday, July 8, 2020

Read our free white paper: How to Build a Kubernetes Strategy

I’m excited to announce that Rancher has signed a definitive agreement to be acquired by SUSE. Rancher is the most widely used enterprise Kubernetes platform. SUSE is the largest independent open source software company and a leader in enterprise Linux. By combining Rancher and SUSE, we not only gain massive engineering resources to further strengthen our market-leading product, we are also able to preserve our unique 100% open source business model.

We started Rancher 6 years ago to develop the next generation enterprise computing platform built on a relatively new technology called containers. We could not have anticipated the tremendous growth and popularity of the Kubernetes technology. Rancher was able to thrive in this exciting and highly dynamic market because we developed innovative products loved by end users. Grass-roots adoption coupled with a unique enterprise-grade support subscription led to our hypergrowth. I want to thank everyone who has used our products over these last six years for your support, and for helping us build an amazing community of users.

After the acquisition closes later this year, I will lead the combined engineering and innovation organization at SUSE. You can expect an accelerated pace of product innovation. And given SUSE’s 28-year history building a highly successful open source business, our commitment to open source will remain strong.

The acquisition is great for Rancher customers and partners. At Rancher we take pride in our industry-leading customer satisfaction with an NPS score of over 80. SUSE’s global reach and enterprise focus will further strengthen our commitment to customers who rely on Rancher to power mission-critical workloads. Likewise, SUSE’s strong ecosystem will greatly accelerate Rancher’s ongoing efforts to transform how organizations adopt cloud native technology.

This acquisition is a launch point for further growth of Rancher. I feel as invigorated as day-1 about the industry, the technology, and our business. I am so proud of our team and the work they have done these last six years, and I look forward to continuing to work with our users, customers, partners, and fellow Ranchers to build a truly amazing business by leveraging the best parts of Rancher and SUSE. Rancher and SUSE together will be the enterprise computing company that transforms our industry.

Read our free white paper: How to Build a Kubernetes Strategy


Delivering Inspiring Retail Experiences with Rancher

Wednesday, May 27, 2020

“As our business grew, we knew there would be economies in working with an orchestration partner. Rancher and Kubernetes have become enablers for the growth of our business.” – Joost Hofman, Head of Site Reliability Engineering Digital Development, Albert Heijn

When it comes to deciding where and how to shop for food, consumers have a choice. And it may only take one negative experience with a retailer for a consumer to take their business elsewhere. For food retail leader Albert Heijn, customer satisfaction and innovation at its 950+ retail stores and e-commerce site are driving forces. As the top food retailer in the Netherlands (and with stores in Belgium), the company works to inspire, surprise and provide rewarding experiences to its customers – and has a mission to be the most loved and healthiest company in the Netherlands.

Adopting Containers for Innovation and Scalability

Not surprisingly, the fastest growing part of Albert Heijn’s business is its e-commerce site – with millions of visitors each year and expectations for those numbers to double in the coming years. With a focus on the future of grocery shopping and sustainability, Albert Heijn is at the forefront of container adoption in the retail space. Since first experimenting with containers in 2016, they are now the preferred way for the company’s 200 developers to manage the continuous development process and run many services on e-commerce site AH.nl in production. By using containers, developers can push new features to the e-commerce site faster – improving customer experience and loyalty.

Before adopting containers, Hofman’s team ran a traditional, monolithic infrastructure that was costly and unwieldy. With a vision of unified microservices and an open API to support future growth, they started experimenting with containers in 2016. While they experienced uptime of 99.95 percent after just six months, they faced other challenges and realized they needed a container management solution.

In 2018, Hofman turned to Rancher as the platform to manage its containers more effectively as they migrated to an Azure cloud. Today, with Rancher, their infrastructure is set up to scale, as the user numbers are expected to grow dramatically. With Rancher automating a host of basic processes, developers are free to innovate.

High availability is also a critical need for the company – because online shopping never sleeps. With a microservices-based environment built on Kubernetes and Rancher, developers can develop, test and deploy services in isolation and ensure reliable, fast releases of new services.

Today, with a container-based infrastructure, the company has reduced management hours and testing time by 80 percent and achieved 99.95 percent uptime.

Read our case study to hear how, with Rancher, Hofman and the AH.nl team have embraced containers as a way to focus on innovation and staying ahead of the competition.
