Delivering Inspiring Retail Experiences with Rancher

Wednesday, 27 May, 2020

“As our business grew, we knew there would be economies in working with an orchestration partner. Rancher and Kubernetes have become enablers for the growth of our business.” – Joost Hofman, Head of Site Reliability Engineering Digital Development, Albert Heijn

When it comes to deciding where and how to shop for food, consumers have a choice. And it may only take one negative experience with a retailer for a consumer to take their business elsewhere. For food retail leader Albert Heijn, customer satisfaction and innovation at its 950+ retail stores and e-commerce site are driving forces. As the top food retailer in the Netherlands (and with stores in Belgium), the company works to inspire, surprise and provide rewarding experiences to its customers – and has a mission to be the most loved and healthiest company in the Netherlands.

Adopting Containers for Innovation and Scalability

Not surprisingly, the fastest growing part of Albert Heijn’s business is its e-commerce site – with millions of visitors each year and expectations for those numbers to double in the coming years. With a focus on the future of grocery shopping and sustainability, Albert Heijn is at the forefront of container adoption in the retail space. Since first experimenting with containers in 2016, they are now the preferred way for the company’s 200 developers to manage the continuous development process and run many services on e-commerce site AH.nl in production. By using containers, developers can push new features to the e-commerce site faster – improving customer experience and loyalty.

Before adopting containers, Hofman’s team ran a traditional, monolithic infrastructure that was costly and unwieldy. With a vision of unified microservices and an open API to support future growth, they started experimenting with containers in 2016. While they experienced uptime of 99.95 percent after just six months, they faced other challenges and realized they needed a container management solution.

In 2018, Hofman turned to Rancher as the platform to manage the company's containers more effectively as the team migrated to the Azure cloud. Today, with Rancher, their infrastructure is set up to scale as user numbers are expected to grow dramatically. With Rancher automating a host of basic processes, developers are free to innovate.

High availability is also a critical need for the company – because online shopping never sleeps. With a microservices-based environment built on Kubernetes and Rancher, developers can develop, test and deploy services in isolation and ensure reliable, fast releases of new services.

Today, with a container-based infrastructure, the company has reduced management hours and testing time by 80 percent and achieved 99.95 percent uptime.

Read our case study to hear how, with Rancher, Hofman and the AH.nl team have embraced containers as a way to focus on innovation and staying ahead of the competition.


Rancher Academy Has Moved!

Tuesday, 19 May, 2020

Editor’s Note: Since the launch of Rancher Academy in 2020, a lot has happened. Rancher Academy has evolved into Academy classes, now available in the SUSE & Rancher Community. Our Up and Running: Rancher class aligns with the latest release of Rancher (Rancher 2.6). The class is available on demand. Other Rancher Academy Classes include Up and Running: K3s and Accelerate Dev Workflows.

Today we launched the Rancher Academy, our new free training portal. The first course is Certified Rancher Operator: Level 1, and in it you’ll learn exactly how to deploy Rancher and use it to deploy and manage Kubernetes clusters. This professional certification program is designed to help Kubernetes practitioners demonstrate their knowledge and competence with Kubernetes and Rancher – and to advance in their careers.

Why is Rancher Labs doing this? We want all the members of our community to have the most relevant and up to date skills in Rancher – the most widely adopted Kubernetes management platform. We also want to give you the skills to be at the forefront of the cloud-native way of doing business, which is agile, open source oriented and maniacally focused on innovation.

Market Demand for Kubernetes Skills Far Exceeds Supply

We’re seeing massive demand in the industry for people with Kubernetes skills, and it’s continuing to rise as organizations adopt cloud-native strategies and embrace Kubernetes. There’s nowhere near enough supply right now. What I’m seeing in the industry is that organizations are trying to quickly get their teams up to speed on Kubernetes. You’ve got people with reliable non-cloud-native skill sets, or non-Kubernetes skill sets, who are suddenly being given these Kubernetes environments that they need to maintain.

Businesses and governments all over the world use Rancher to deploy and manage their Kubernetes clusters. People in those organizations are working with Rancher, but they might have learned it through the filter of their past knowledge and experience.

That’s where Rancher Academy comes in. Our objective is for an individual to go to an organization and say, “I have a certification from Rancher Labs,” and for organizations to know exactly what that means: that they were trained by Rancher, and that we’ve given our approval of their ability to execute according to our standards.

Is Rancher Academy Right for You?

Our first course, Up and Running: Rancher, is designed for people who want to install Rancher and use it to deploy and manage Kubernetes clusters. You’ll need to have some basic Kubernetes knowledge, but you don’t need to know anything about Rancher.

We intentionally chose not to include Docker or Kubernetes fundamentals in our course because there are other training courses that cover that material. Until today there were no courses specifically for Rancher.

The course starts off talking about the Rancher architecture, installing RKE (one of our two certified Kubernetes distributions) and installing Rancher into it. Whether you are brand new to Rancher or have been using it for a while, you’ll find value in the course. For those experienced with Rancher, you’ll validate the skills you already have, and perhaps learn some slightly different ways of doing things that are the official “Rancher-sanctioned” way. And for those new to Rancher, you’ll walk away with the confidence that you are using Rancher the right way.

Rancher Academy: How It Works

The program is online and self-paced, with Level 1 designed to be completed over five weeks. The course includes four hours of video content, with 87 units of instruction, quizzes, 37 hands-on labs and a final assessment.

The labs are designed for you to do on your own. The idea here is that we’re building muscle memory: you learn about it, you see it demonstrated and then you do it yourself. As you progress through the course, you build and maintain an infrastructure, and by the end of the course, you’ll have a highly available Rancher deployment with at least one downstream cluster.

Now you might be saying, “Whoa, this sounds like a lot of work.” The beauty of the course is that it’s self-paced. If you follow the five-week model, you’ll need to spend about three to five hours a week. On the other hand, if you’re so excited that you want to just blaze through it in a week, you can do that.

Throughout the course, as you’re learning the material, you can validate what you’ve learned through the quizzes, and you can easily go back if you need to repeat something. Along the way you’ll be building an environment, testing out workloads, trying out persistent storage and encountering challenges that are unique to your infrastructure. You’ll be developing the skills to solve those challenges, and you can get help along the way from the community.


Driving Sustainability in Retail with Kubernetes

Tuesday, 28 April, 2020

“With sustainability our primary focus, our technology strategy has to mirror our overall approach. With Rancher we’re driving real transformation to prime us for long-term growth.”
– Zach Dunn, Senior Director of Platform Operations and CSO, Optoro

Have you ever considered what happens to items you return to e-tailers? In retail, especially e-commerce, nearly 25 percent of all goods are returned or don't sell. In the US alone, the value of these goods is a staggering $500 billion – usually written off as losses by e-tailers. Beyond the economic impact, there are environmental consequences: many of these goods end up in landfills.

Optoro aims to break this cycle. As the world's leading returns optimization platform, Optoro has pioneered a reverse logistics model to solve this excess goods problem. Using machine learning and predictive analytics, they route returned and excess goods to their next best home, whether it's an end consumer, charity or recycler – anywhere in the world. Optoro operates a consumer resale site, www.blinq.com, and a wholesale site, www.BULQ.com. The company estimates that they have diverted 3.9 million pounds of waste from landfills, prevented 22.7 million pounds of carbon emissions, and donated 2.7 million items to charities.

Soon after joining the company, Senior Director of Platform Operations and CSO Zach Dunn decided to move the company from a cloud-based infrastructure to on premises. Optoro's workload was steady: APIs and databases were never powered down, yet costs would still rise and fall with cloud expansion and contraction. By transitioning into a data center, Dunn could level-set his costs – driving greater predictability into financial management.

After converting their estate of VMs into Docker containers, Dunn and his team started experimenting with Kubernetes. While a move to Kubernetes made sense, they didn't want to absorb additional costs – and could not find a business case for OpenShift, GKE or EKS. They wanted a platform that let their developers consolidate role-based access control and other backend processes and directly manage their clusters through an intuitive UI. Rancher checked all the boxes.

Following a successful proof of concept, the team started to migrate its services into containers and into Rancher. Currently it runs 12 of its 42 services in production, with plans to migrate the entire infrastructure.

Watch our video case study and hear directly from Dunn about Optoro’s journey from the cloud to the data center and the benefits of adopting Kubernetes and Rancher.


Enabling More Effective Kubernetes Troubleshooting on Rancher

Thursday, 16 April, 2020

As a leading open-source multi-cluster orchestration platform, Rancher lets operations teams deploy, manage and secure enterprise Kubernetes. Rancher also gives users a set of container network interface (CNI) options to choose from, including open source Project Calico. Calico provides native Layer 3 routing capability for Kubernetes pods, which simplifies the networking architecture, increases networking performance and provides a rich network policy model that makes it easy to lock down communication so the only traffic that flows is the traffic you want to flow.
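Calico enforces the standard Kubernetes NetworkPolicy API. As an illustrative sketch (the namespace and labels are hypothetical), a policy that locks a backend down to traffic from the frontend only might look like this:

```yaml
# Illustrative sketch: allow only pods labeled app=frontend to reach
# backend pods on TCP 8080; all other ingress to backend is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: shop            # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Once a policy like this selects a pod, traffic not explicitly allowed is dropped – which is exactly why Kubernetes-aware flow logs matter when a connection is unexpectedly denied.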

A common challenge in deploying Kubernetes is gaining the necessary visibility into the cluster environment to effectively monitor and troubleshoot networking and security issues. Visibility and troubleshooting is one of the top three Kubernetes use cases that we see at Tigera. It’s especially critical in production deployments because downtime is expensive and distributed applications are extremely hard to troubleshoot. If you’re with the platform team, you’re under pressure to meet SLAs. If you’re on the DevOps team, you have production workloads you need to launch. For both teams, the common goal is to resolve the problem as quickly as possible.

Why Troubleshooting Kubernetes is Challenging

Since Kubernetes workloads are extremely dynamic, connectivity issues are difficult to resolve. Conventional network monitoring tools were designed for static environments. They don’t understand Kubernetes context and are not effective when applied to Kubernetes. Without Kubernetes-specific diagnostic tools, troubleshooting for platform teams is an exercise in frustration. For example, when a pod-to-pod connection is denied, it’s nearly impossible to identify which network security policy denied the traffic. You can manually log in to nodes and review system logs, but this is neither practical nor scalable.

You’ll need a way to quickly pinpoint the source of any connectivity or security issue. Or better yet, gain insight to avoid issues in the first place. As Kubernetes deployments scale up, the limitations around visibility, monitoring and logging can result in undiagnosed system failures that cause service interruptions and impact customer satisfaction and your business.

Flow Logs and Flow Visualization

For Rancher users who are running production environments, Calico Enterprise network flow logs provide a strong foundation for troubleshooting Kubernetes networking and security issues. For example, flow logs can be used to run queries to analyze all traffic from a given namespace or workload label. But to effectively troubleshoot your Kubernetes environment, you’ll need flow logs with Kubernetes-specific data like pod, label and namespace, and which policies accepted or denied the connection.
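For illustration, a single flow-log record might carry Kubernetes context along these lines. This is a simplified sketch, not the exact schema – field names and formats vary by Calico Enterprise version:

```yaml
# Simplified, illustrative flow-log record (not the exact schema):
# enough context to answer "who talked to whom, and which policy decided?"
source_namespace: storefront
source_name: checkout-7d9f8-*     # aggregated pod name
dest_namespace: payments
dest_name: gateway-5c6b4-*
proto: tcp
dest_port: 443
action: deny
policies:
  - "payments|default-deny|deny"  # the policy that made the decision
```

With records like this, a query filtered on `dest_namespace` and `action: deny` pinpoints the denying policy in seconds rather than requiring log spelunking on individual nodes.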

Calico Enterprise Flow Visualizer

A large proportion of Rancher users are DevOps teams. While ITOps has traditionally managed network and security policy, we see DevOps teams looking for solutions that enable self-sufficiency and accelerate the CI/CD pipeline. For Rancher users who are running production environments, Calico Enterprise includes a Flow Visualizer, a powerful tool that simplifies connectivity troubleshooting. It’s a more intuitive way to interact with and drill down into network flows. DevOps can use this tool for troubleshooting and policy creation, while ITOps can establish a policy hierarchy using RBAC to implement guardrails so DevOps teams don’t override any enterprisewide policies.

Firewalls Can Create a Visibility Void for Security Teams

Kubernetes workloads make heavy use of the network and generate a lot of east/west traffic. If you are deploying a conventional firewall within your Kubernetes architecture, you will lose all visibility into this traffic and the ability to troubleshoot. Firewalls don’t have the context required to understand Kubernetes traffic (namespace, pod, labels, container id, etc.). This makes it impossible to troubleshoot networking issues, perform forensic analysis or report on security controls for compliance.

To get the visibility they need, Rancher users can deploy Calico Enterprise to translate zone-based firewall rules into Kubernetes network policies that segment the cluster into zones and apply the correct firewall rules. Your existing firewalls and firewall managers can then be used to define zones and create rules in Kubernetes the same way all other rules have been created. Traffic crossing zones can be sent to the Security team’s security information and event management (SIEM), providing them with the same visibility for troubleshooting purposes that they would have received using their conventional firewall.

Other Kubernetes Troubleshooting Considerations

For Platform, Networking, DevOps and Security teams using the Rancher platform, Tigera provides additional visibility and monitoring tools that facilitate faster troubleshooting:

  • The ability to add thresholds and alarms to all of your monitored data. For example, a spike in denied traffic triggers an alarm to your DevOps team or Security Operations Center (SOC) for further investigation.
  • Filters that enable you to drill down by namespace, pod and view status (such as allowed or denied traffic)
  • The ability to store logs in an EFK (Elasticsearch, Fluentd and Kibana) stack for future accessibility

Whether you are in the early stages of your Kubernetes journey and simply want to understand the “why” of unexpected cluster behavior, or you are in large-scale production with revenue-generating workloads, having the right tools to effectively troubleshoot will help you avoid downtime and service disruption. During the upcoming Master Class, we’ll share troubleshooting tips and demonstrate some of the tools covered in this blog, including flow logs and Flow Visualizer.

Join our free Master Class: Enabling More Effective Kubernetes Troubleshooting on Rancher on May 7 at 1pm PT.


Privacy Protections, PCI Compliance and Vulnerability Management for Kubernetes

Wednesday, 8 April, 2020

Containers are becoming the new computing standard for many businesses, but new technology does not protect you from traditional security concerns. If your containers handle any sensitive data, including personally identifiable information (PII), credit cards or accounts, you'll need to take a 'defense in depth' approach to container security. The CI/CD pipeline is vulnerable at every stage, from build to ship to runtime.

In this article, we'll look at best practices for protecting sensitive data and enforcing compliance, from vulnerability management to network segmentation. We'll also discuss how NeuVector simplifies security, privacy and compliance throughout the container lifecycle for organizations using Rancher's Kubernetes management platform.

Shift-Left Security

The DevOps movement is all about shifting left, and security is no different. The more security we can build in earlier in the process, the better for developers and the security team. The concept of security policy as code puts more control into developers' hands while ensuring compliance with security mandates. Best practices include:

Comprehensive vulnerability management

Vulnerability detection and management throughout the CI/CD pipeline is essential. To prevent vulnerabilities from being introduced into registries, organizations should create policy-based build success/failure criteria. As a further safeguard, they should monitor and auto-scan all major registries such as AWS Elastic Container Registry (ECR), Docker Hub, Azure Container Registry (ACR) and JFrog Artifactory. And finally, they should automatically scan running containers and host OSes for vulnerabilities to prevent exploits and other attacks on critical business data. With an auto-scanning infrastructure in place, containers can be auto-quarantined based on vulnerability criteria.

Recommendation:

  • Scan the Rancher OS (or other OS)
  • Integrate and automate scanning with Jenkins plug-in or other build-phase scanning extensions, plus registry scanning
  • Employ admission control to prevent deployment of vulnerable images
  • Scan running containers and hosts for vulnerabilities, preventing ‘back-door’ vulnerable images
  • Protect running containers from vulnerability exploits with ‘virtual patching’ or other security controls to prevent unauthorized network or container behavior.

Adherence to the Center for Internet Security Benchmarks for Kubernetes and Docker

The CIS benchmarks provide strong security auditing for container, orchestrator and host configurations to ensure that proper security controls are not overlooked or disabled. These checks should be run before containers are put into production, and continuously run after deployment, as updates and restarts can often change such critical configurations. Patching, updating and restarting hosts can also inadvertently open security holes that were previously locked down.

Recommendation:

  • Use CIS Scan in Rancher 2.4 to run CIS benchmarks for Rancher managed Kubernetes clusters and the containers running on them.
  • Augment CIS benchmarks with any customized auditing or compliance checks on hosts or containers which are required by your organization.
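Rancher's CIS Scan builds on the open-source kube-bench tool. As a hedged sketch of what an equivalent standalone check looks like, kube-bench can be run as a one-shot Kubernetes Job on a node (image tag and host mounts are illustrative and vary by cluster setup):

```yaml
# Sketch: run CIS benchmark checks with kube-bench as a one-shot Job.
# The mounts below are illustrative; real deployments mount additional
# host paths (e.g. /var/lib/kubelet) depending on what is being audited.
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-bench
spec:
  template:
    spec:
      hostPID: true              # kube-bench inspects host processes
      containers:
        - name: kube-bench
          image: aquasec/kube-bench:latest
          command: ["kube-bench"]
          volumeMounts:
            - name: etc-kubernetes
              mountPath: /etc/kubernetes
              readOnly: true
      restartPolicy: Never
      volumes:
        - name: etc-kubernetes
          hostPath:
            path: /etc/kubernetes
```

Because a Job is re-runnable, the same manifest supports the "continuously run after deployment" recommendation above: schedule it (or use a CronJob) so configuration drift after patches and restarts is caught quickly.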

Privacy

Privacy is a critical component of many compliance standards. However, container environments raise PCI Data Security Standard (DSS) – and likely GDPR and HIPAA – compliance challenges in the areas of monitoring, establishing security controls and limiting the scope of the Cardholder Data Environment (CDE) with network segmentation. Due to the ephemeral nature of containers – spinning up and down quickly and dynamically, and often only existing for several minutes – monitoring and security solutions must be active in real-time and able to automatically respond to rapidly transforming attacks.

Because most container traffic is internal communication between containers, traditional firewalls and security systems designed to vet external traffic are blind to nefarious threats that may escalate within the container environment. And the use of containers can expand the CDE to the size of the entire microservices environment unless its scope is limited by a container firewall able to fully visualize and tightly control it.

Recommendation:

  • Inspect network connections from containers within and exiting the Rancher cluster for unencrypted credit card or Personally Identifiable Information (PII) data using network DLP
  • Provide the required network segmentation for in-scope (CDE) traffic for application containers deployed by and run on Rancher
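A common starting point for that segmentation is a default-deny policy in the in-scope namespace, so that only explicitly whitelisted flows ever reach cardholder workloads. A minimal sketch (the namespace name is hypothetical):

```yaml
# Deny all ingress and egress by default in the in-scope (CDE) namespace;
# permitted flows are then whitelisted with additional, narrower policies.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: cde             # hypothetical in-scope namespace
spec:
  podSelector: {}            # selects every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```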

Compliance (PCI, GDPR, HIPAA and More)

Containers and microservices are inherently supportive of PCI DSS compliance across several fronts. In an ideal microservices architecture, each service and container delivers a single function, which is congruent with the PCI DSS requirement to implement only a single primary function with each server. In the same way, containers provide narrow functionality by design, meeting the PCI DSS mandate to enable only necessary protocols and services.

One might think that physically separate container environments that are in-scope would resolve issues, but this can severely restrict modern automated DevOps CI/CD pipelines and result in slower release cycles and underused resources. However, cloud-native container firewalls are emerging which provide the required network segmentation without the sacrifice of the business benefits of containers.

Recommendation:

  • Deploy a cloud-native firewall to automate network segmentation required by compliance standards such as PCI.
  • Maintain forensic data, logs and notifications for security events and other changes.

How NeuVector Enhances Rancher Security

NeuVector extends Rancher's capabilities to support and enforce PCI DSS, GDPR and HIPAA compliance requirements by auditing, monitoring and securing production deployments built on Rancher, including:

  • Providing a comprehensive vulnerability management platform integrated with Rancher admission controls and run-time visibility.
  • Enforcing network segmentation based on layer 7 application protocols, so that no unauthorized connections are allowed in or out of containers.
  • Enforcing that encrypted SSL connections are used for transmitting sensitive data between containers and for ingress/egress connections.
  • Monitoring all unencrypted connections for sensitive data and either alerting or blocking when detected.

The NeuVector container security platform is an end-to-end solution for securing the entire container pipeline from build to ship to run-time. The industry’s first container firewall provides the critical function to perform automated network segmentation and container DLP by inspecting all container connections for sensitive data such as credit cards, PII and financial data. The screen shot below shows an example of unencrypted credit card data being transmitted between pods, as well as to an external destination.

Image 1

A container firewall solution provides network segmentation, network monitoring and encryption verification – meeting regulatory compliance requirements. PCI-DSS requires network segmentation as well as encryption for in-scope CDE environments. The NeuVector container firewall provides the required network segmentation of CDE workloads, while at the same time monitoring for unencrypted cardholder data which would violate the compliance requirements. The violations can be the first indications of a data breach, a misconfiguration of an application container or an innocent mistake made by a customer support person pasting in credit card data into a case.

Next Steps for Securing Your Container Infrastructure

For organizations transitioning to container infrastructure, it is important to recognize that security matters throughout the lifecycle of the container. Compliance and privacy regulations require protection of customers' information wherever it resides on the organization's network.

In this article, we looked at some of the ways that you can protect sensitive data and enforce compliance in your container infrastructure. To learn more, join us for our free Master Class: How to Automate Privacy Protections, PCI Compliance and Vulnerability Management for Kubernetes on May 5.


Transforming Telematics with Kubernetes and Rancher

Wednesday, 11 March, 2020

“As we extend our leadership position in Europe, it’s never been more important to put containers at the heart of our growth strategy. The flexibility and scale that Rancher brings is the obvious solution for high-growth companies like ours.” – Thomas Ornell, IT Infrastructure Engineer, ABAX

Norwegian leader in fleet management, equipment and vehicle tracking, ABAX is one of Europe's fastest-growing technology businesses. The company provides sophisticated fleet tracking, electronic mileage logs and equipment and vehicle control systems to more than 26,500 customers. ABAX manages over 250,000 active subscriptions connecting a variety of vehicles and industrial equipment.

The team recently signed an international deal with Hitachi to provide operational monitoring in Hitachi heavy machinery, helping owners access operational data. ABAX saves customers millions of dollars every year by preventing the loss and theft of valuable machinery and equipment through granular monitoring of corporate fleet performance.

Thomas Ornell, an IT infrastructure engineer, has been priming ABAX's infrastructure for significant growth over the past couple of years. Ornell and his team have transformed the company's innovation strategy, putting containers — and Rancher — at the heart of bold expansion plans. Read our case study to find out how, with Rancher, ABAX is reducing testing time by 75 percent and recovery time by 90 percent.

Looking at how to get the most out of your Kubernetes deployments? Download our White Paper, How to Build an Enterprise Kubernetes Strategy.


Running Containers in AWS with Rancher

Tuesday, 10 March, 2020

READ OUR FREE WHITE PAPER:
How to Build an Enterprise Kubernetes Strategy

This blog will examine how Rancher improves the life of DevOps teams already invested in AWS’s Elastic Kubernetes Service (EKS) but looking to run workloads on-prem, with other cloud providers or, increasingly, at the edge. By reading this blog you will also discover how Rancher helps you escape the undeniable attractions of a vendor monoculture while lowering costs and mitigating risk.

AWS is the world’s largest cloud provider, with over a million customers and $7.3 billion in 2018 operating income. Our friends at StackRox recently showed that AWS still commands 78 percent market share despite the aggressive growth of rivals Microsoft Azure and Google Cloud Platform.

However, if you choose only AWS services for all your Kubernetes needs, you’re effectively locking yourself into a single vendor ecosystem. For example, by choosing Elastic Load Balancing for load distribution, AWS App Mesh for service mesh or AWS Fargate for serverless compute with EKS, your future is certain but not yours to control. It’s little wonder that many Amazon EKS customers look to Rancher to help them deliver a truly multi-cloud strategy for Kubernetes.

The Benefits of a Truly Multi-Cloud Strategy for Kubernetes

As discussed previously, multi-cloud has become the “new normal” of enterprise IT. But what does “multi-cloud” mean to you? Does it mean supporting the same vendor-specific Kubernetes distribution on multiple clouds? Wouldn’t that just swap out one vendor monoculture for another? Or does it mean choosing an open source management control plane that treats any CNCF-certified Kubernetes distribution as a first-class citizen, enabling true application portability across multiple providers with zero lock-in?

Don’t get me wrong – there are use cases where a decision-maker will see placing all their Kubernetes business with a single vendor as the path of least resistance. However, the desire for short-term convenience shouldn’t blind you to the inherent risks of locking yourself into a long-term relationship with just one provider. Given how far the Kubernetes ecosystem has come in the past six months, are you sure that you want to put down all your chips on red?

As with any investment, the prudent money should always go on the choice that gives you the most value without losing control. Given this, we enthusiastically encourage you to continue using EKS – it’s a great platform with a vast ecosystem. But remember to keep your options open – particularly if you’re thinking about deploying Kubernetes clusters as close as possible to where they’re delivering the most customer value – at the edge.

Kubernetes on AWS: Using Rancher to Manage Containers on EKS

If you’re going to manage Kubernetes clusters on multiple substrates – whether on AKS/GKE, on-prem or at the edge – Rancher enhances your container orchestration with EKS. With Rancher’s integrated workload management capabilities, you can allow users to centrally configure policies across their clusters and ensure consistent access. These capabilities include:

1) Role-based access control and centralized user authentication
Rancher enforces consistent role-based access control (RBAC) policies on EKS and any other Kubernetes environment by integrating with Active Directory, LDAP or SAML-based authentication. Centralized RBAC reduces the administrative overhead of maintaining user or group profiles across multiple platforms. RBAC also makes it easier for admins to meet compliance requirements and delegate administration of any Kubernetes cluster or namespace.
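Under the hood, the per-cluster permissions Rancher applies resolve to standard Kubernetes RBAC objects. A sketch of granting a directory group read-only access in a single namespace (the group and namespace names are hypothetical):

```yaml
# Bind the directory group "dev-team" to Kubernetes' built-in "view"
# ClusterRole, scoped to the "staging" namespace: read-only, nothing more.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-team-view
  namespace: staging          # hypothetical namespace
subjects:
  - kind: Group
    name: dev-team            # hypothetical AD/LDAP/SAML group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                  # Kubernetes built-in read-only role
  apiGroup: rbac.authorization.k8s.io
```

Centralizing authentication means this binding stays consistent whether the cluster runs on EKS, on-prem or at the edge – the group membership is resolved once, in one place.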

RBAC Controls in Rancher

2) One intuitive user interface for comprehensive control
DevOps teams can deploy and troubleshoot workloads consistently across any provider using Rancher’s intuitive web UI. If you’ve got team members new to Kubernetes, they can quickly learn to launch applications and wire them together at production level in EKS and elsewhere with Rancher. Your team members don’t need to know everything about a specific Kubernetes distribution or infrastructure provider to be productive.

Multi-cluster management with Rancher

3) Enhanced cluster security
Rancher admins and their security teams can centrally define how users should interact with Kubernetes and how containerized workloads should operate across all their infrastructures, including EKS. Once defined, these policies can be instantly assigned to any Kubernetes cluster.
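At the time of writing, workload-level constraints like these are typically expressed as PodSecurityPolicy objects. A restrictive sketch (policy name and allowed volume types are illustrative choices):

```yaml
# Sketch of a restrictive pod security policy: no privileged containers,
# no host namespaces, and containers must run as a non-root user.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  hostNetwork: false
  hostPID: false
  hostIPC: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:                    # only these volume types may be mounted
    - configMap
    - secret
    - emptyDir
    - persistentVolumeClaim
```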

Adding custom pod security policies

4) Global application catalog & multi-cluster apps
Rancher provides access to a global catalog of applications that work across multiple Kubernetes clusters, whatever their location. For enterprises running in a multi-cloud Kubernetes environment, Rancher reduces the load on operations teams while increasing productivity and reliability.

Selecting multi-cluster apps from Rancher’s catalog

5) Streamlined day-2 operations for multi-cloud infrastructure
Using Rancher to provision your Kubernetes clusters in a multi-cloud environment means your day-2 operations are centralized in a single pane of glass. Benefits to centralizing your operations include one-touch deployment of service mesh (upstream Istio), logging (Fluentd), observability (Prometheus and Grafana) and highly available persistent storage (Longhorn).

What’s more, if you ever decide to stop using Rancher, we provide a clean uninstall process for imported EKS clusters so that you can manage them independently. You’ll never know Rancher was there.

Next Steps

See how Rancher can help you run containers in AWS and enhance your multi-cloud Kubernetes strategy. Download the free whitepaper, A Guide to Kubernetes with Rancher.

READ OUR FREE WHITE PAPER:
How to Build an Enterprise Kubernetes Strategy


Migrate Your Windows 2003 Applications to Kubernetes

Tuesday, 18 February, 2020

Introduction

There’s no one-size-fits-all migration path for moving legacy Windows applications to the cloud. These applications typically reside on physical servers or virtual machines on premises. While the goal is generally to rearchitect or redesign an application to leverage cloud-native services, that’s not always the answer. Re-architecting an existing application to a microservice architecture or to cloud native presents several challenges in terms of cost, complexity and application dependencies.

While there are major benefits to modernizing your applications, many organizations still have existing services running on Windows 2003 Servers. Microsoft’s support withdrawal for Windows 2003 presents several challenges. For one, it’s forcing decisions about what to do with these applications, especially given that Windows 2008 end of life isn’t far off.

Organizations want to move to a modern architecture to gain increased flexibility, security and availability in their applications. This is where containers provide the flexibility to modernize applications and move them toward cloud-native services. In this article, we’ll focus on applications that can move to containers – typically .NET, web, SQL and other applications that don’t depend on running only on Windows 2003. You can move these applications to containers without code changes, making them portable for the future. And you’ll get the benefit of running the containers on Kubernetes, which provides orchestration, availability, increased resiliency and density.

Note: not all applications or services can run in containers. There are still core dependencies for some applications which will need to be addressed, such as database and storage requirements. In addition, the business needs to decide on the ongoing life of the application.

Business Benefits of Moving to Kubernetes

There are some key business reasons for moving these applications to containers, including:

  • Return on Investment
  • Portability of older web-based services
  • Increased application security
  • Time for the business to re-evaluate existing applications

Now that Kubernetes supports Windows worker nodes, you can migrate legacy Windows applications to a modern architecture. Windows workers and Linux workers can co-exist within the same Kubernetes platform, allowing operations teams to use a common set of tools, practices and procedures.
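Co-existence works through ordinary scheduling: a Windows workload is simply steered onto Windows worker nodes. A minimal sketch (the application name and image tag are placeholders) might look like:

```yaml
# Hypothetical Deployment that schedules a legacy Windows workload
# onto Windows worker nodes in a mixed Linux/Windows cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: legacy-web
  template:
    metadata:
      labels:
        app: legacy-web
    spec:
      nodeSelector:
        kubernetes.io/os: windows   # keep this pod off the Linux nodes
      containers:
        - name: web
          image: mcr.microsoft.com/dotnet/framework/aspnet:4.8-windowsservercore-ltsc2019
          ports:
            - containerPort: 80
```

Linux workloads need no changes: pods without the Windows node selector continue to land on Linux nodes as before.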

Step 1: Analyze Your Move from Windows to Kubernetes

Migrating a legacy Windows application to Kubernetes requires a significant amount of analysis and planning. However, some key practices are emerging. These include:

  • Break down the application to its original form to understand what components are running, how they are running and their dependencies
  • Discover what services the application provides and what calls it makes in terms of data, network and interfacing
  • Decouple the data layer from the application
  • Determine and map service dependencies
  • Test, test and test again

Step 2: Plan Your Move from Windows to Kubernetes

Migrating your Windows application to a containerized .NET-based platform is a multi-step process that requires some key decisions. The following high-level process provides some guidance on the requirements to migrate legacy Windows systems to run on Kubernetes.

  • Determine what operating system your container needs — either Server Core or Nano Server. The application’s dependencies will dictate this choice.
  • Follow compatibility guidelines. Running Windows containers adds strict compatibility rules between the OS version of the host and the container’s base image: both must run Windows Server 2019, because the container and the underlying host share a single kernel. At the time of writing, only process isolation is supported. However, Hyper-V isolation is expected soon (timing unknown), which will relax the compatibility requirements between the host and the container.
  • Package your legacy application
  • Build out your initial Docker-based container with the application package
  • Deploy a new Docker container to a repository of your choice
  • Leverage existing DevOps toolsets (CI/CD build and release pipelines)
  • Deploy the new Windows Application to your Windows-supported Kubernetes environment
  • Test, test and test again
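For the packaging and build steps above, the container definition can be very small. A minimal sketch for a legacy ASP.NET application (the paths are placeholders, and the base image must match the Windows Server 2019 host per the compatibility rules above) might look like:

```dockerfile
# Hypothetical packaging of a legacy ASP.NET application into a
# Windows Server Core base image compatible with a 2019 host.
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8-windowsservercore-ltsc2019

# Copy the published application into IIS's default site root.
COPY ./publish/ /inetpub/wwwroot

EXPOSE 80
```

From here, `docker build` and `docker push` fit directly into an existing CI/CD pipeline, and the resulting image is what gets deployed to the Windows-supported Kubernetes environment.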

Key Outcomes of Moving Windows Applications to Kubernetes

By moving from Windows to Kubernetes, your legacy applications will share the benefits of your existing container-based applications. In addition, your Windows applications will benefit from the Kubernetes platform itself. What’s more, they can use additional tools and systems within the Kubernetes ecosystem, including security, service mesh, monitoring/alerting, etc.

Together, these benefits put you in a good position to make key decisions about your applications and develop a business use case. For applications that can’t be migrated, you still need to decide what to do with them, given the lack of support for the underlying operating system. Since no further patches or security remediations are available, your organization is exposed to vulnerabilities and exploits. So the time to act is now.

Key Takeaways for Migrating from Windows to Kubernetes

  • Container-based solutions provide cost savings.
  • Containers reduce dependencies and provide portability for applications.
  • While Docker is the de facto standard for running containers, Kubernetes is the de facto container orchestration engine.
  • Kubernetes can host scalable, reliable and resilient Windows container-based applications alongside Linux-based applications.
  • Organizations running a Kubernetes platform can integrate the legacy applications into their DevOps culture and toolsets.
  • Leveraging native and ecosystem-based tools for Kubernetes increases security and adds extra layers of protection for legacy applications

More Kubernetes Resources

Want to learn more about strategy and planning a Kubernetes move? Get our white paper: How to Build an Enterprise Kubernetes Strategy.


Kubernetes DevOps: A Powerful Pair

Monday, 10 February, 2020

Kubernetes has seen an incredible rise over the past few years as organizations leverage containers for complex applications, micro-services and even cloud-native applications. And with the rise of Kubernetes, DevOps has gained more traction. While they may seem very different — one is a tool and the other is a methodology — they work together to help organizations deliver fast. This article explains why Kubernetes is essential to your DevOps strategy.

Google designed Kubernetes and then released it as open source to help alleviate problems in DevOps processes. The aim was to help with automation, deployment and agile methodologies for software integration and deployment. Kubernetes made it easier for developers to move from dev to production, making applications more portable and able to leverage orchestration. Developing on one platform and releasing quickly, through pipelines, to another platform showcased a level of portability that was previously difficult and cumbersome. This level of abstraction helped accelerate DevOps and application deployment.

What is DevOps?

DevOps brings typically siloed teams together – Development and IT Operations. DevOps promises to help teams work collectively and collaboratively to achieve business outcomes faster. Security is also an important part of the mix that should be included as part of the culture. With DevSecOps, three silos come together as “first-class citizens” working collaboratively to achieve the same outcome.

From a technology point of view, DevOps typically focuses on CI/CD (continuous integration and continuous delivery or continuous deployment). Here is a quick explanation:

Continuous integration: developers make constant updates to source code within a shared repository, which is then scanned and checked by an automated build, allowing teams to detect problems early.

Continuous delivery: software is built so that it can be released at any time – but the release itself remains a manual step.

Continuous deployment: once code passes its automated checks, it is released into production, resulting in many production deployments every day.
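These stages map naturally onto a pipeline definition. Here is an illustrative sketch in GitLab CI syntax (the registry URL, script names and deployment name are placeholders, not a specific project's setup):

```yaml
# Illustrative CI/CD pipeline. Each push triggers the integration
# stages; the deploy stage can be gated behind a manual approval
# (continuous delivery) or run automatically (continuous deployment).
stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - docker build -t registry.example.com/app:$CI_COMMIT_SHORT_SHA .
    - docker push registry.example.com/app:$CI_COMMIT_SHORT_SHA

test:
  stage: test
  script:
    - ./run-tests.sh   # automated checks catch problems early

deploy:
  stage: deploy
  script:
    - kubectl set image deployment/app app=registry.example.com/app:$CI_COMMIT_SHORT_SHA
  when: manual   # remove this line for fully continuous deployment
```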

Quick Kubernetes Recap

As noted above, Google created Kubernetes and released it as open source to the general public. It is now one of the flagship projects looked after by the Cloud Native Computing Foundation (CNCF). Different deployments of Kubernetes are available, including those from managed providers (AWS, Azure and GCP), Rancher RKE and others that can be built from scratch (see Kubernetes the Hard Way by Kelsey Hightower).

Kubernetes allows organizations to run applications within containers in a distributed manner. It also handles scaling, resiliency and availability. Additionally, Kubernetes provides:

  • Load balancing
  • Ability to provide access to storage (persistent and non-persistent)
  • Service discovery
  • Automated rollouts, upgrades and rollbacks
  • Role-based access control (RBAC)
  • Security controls for running applications within the platform
  • Extensibility to leverage a large and growing ecosystem to support DevOps
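Load balancing and service discovery from the list above come almost for free with a Service object. A minimal sketch (names and ports are placeholders):

```yaml
# Hypothetical Service: Kubernetes load-balances traffic across all
# pods matching the selector, and other workloads discover it by its
# DNS name (my-app.default.svc.cluster.local inside the cluster).
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: ClusterIP
  selector:
    app: my-app
  ports:
    - port: 80        # port clients connect to
      targetPort: 8080  # port the application listens on
```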

The Kubernetes DevOps Connection

By now we can see how the pieces fit together: DevOps teams create applications and run containers, and they need an orchestration engine that keeps those containers running at scale. This is where Kubernetes and DevOps meet. Kubernetes helps teams respond to customer demands without having to worry about the infrastructure layer – Kubernetes handles that for them. Its orchestration engine takes over the once-manual tasks of deploying, scaling and building resiliency into applications, managing them on the fly.

Kubernetes is essential for DevOps teams looking to automate, scale and build resiliency into their applications while minimizing the infrastructure burden. Letting Kubernetes manage an application’s scale and resiliency based on metrics, for example, allows developers to focus on new services instead of worrying whether the application can handle the additional requests during peak times. The following are key reasons why Kubernetes is essential to a DevOps team:

Deploy Everywhere. As noted previously, Kubernetes handles the ability to deploy an application anywhere without having to worry about the underlying infrastructure. This abstraction layer is one of the biggest advantages to running containers. Wherever deployed, the container will run the same within Kubernetes.

Infrastructure and Configuration as Code. Everything within Kubernetes is “as-code,” ensuring that both the infrastructure layer and the application are all declarative, portable and stored in a source repository. By running “as-code,” the environment is automatically maintained and controlled.

Hybrid. Kubernetes can run anywhere – on-premises, in the cloud or at the edge. It’s your choice. So you’re not locked in to either an on-premises deployment or a cloud-managed deployment. You can have it all.

Open Standards. Kubernetes follows open-source standards, which increases your flexibility to leverage an ever-growing ecosystem of innovative services, tools and products.

Deployments with No Downtime. Because applications and services are deployed continuously throughout the day, Kubernetes supports deployment strategies that reduce the impact on existing users while giving developers the ability to test in production (a phased approach or blue-green deployments). Kubernetes also has a rollback capability – should that be necessary.
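The default strategy, a rolling update, is configured directly on the Deployment. A hedged sketch (the application name and image are placeholders):

```yaml
# Hypothetical rolling-update settings: Kubernetes replaces pods
# gradually, so the service stays available throughout the rollout.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod down at any moment
      maxSurge: 1         # at most one extra pod during the rollout
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: registry.example.com/app:v2   # placeholder image
```

If the new version misbehaves, `kubectl rollout undo deployment/my-app` reverts to the previous revision.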

Immutability. This is one of the key characteristics of Kubernetes. The oft-used analogy, “cattle, not pets,” means that containers can (and should) be stopped, redeployed and restarted on the fly with minimal impact (naturally, there will be some impact on the service the container provides).

Conclusion: Kubernetes + DevOps = A Good Match

As you can see, the relationship between the culture of DevOps and the container orchestration tool Kubernetes is a powerful one. Kubernetes provides the mechanisms and the ecosystem for organizations to deploy applications and services to customers quickly. It also means that teams don’t have to build resiliency, scale and the like into the application – they can trust that Kubernetes services will take care of that for them. The next phase is to integrate the large ecosystem surrounding Kubernetes (see the CNCF ecosystem landscape), thus building a platform that is highly secure, available and flexible, allowing organizations to serve their customers faster, more reliably and at greater scale.

More Resources

Read the white paper: How to Build an Enterprise Kubernetes Strategy.
