Enabling More Effective Kubernetes Troubleshooting on Rancher

Thursday, 16 April, 2020

As a leading, open-source multi-cluster orchestration platform, Rancher lets operations teams deploy, manage and secure enterprise Kubernetes. Rancher also gives users a set of container network interface (CNI) options to choose from, including open source Project Calico. Calico provides native Layer 3 routing capability for Kubernetes pods, which simplifies the networking architecture, increases networking performance and provides a rich network policy model that makes it easy to lock down communication so the only traffic that flows is the traffic you want to flow.

A common challenge in deploying Kubernetes is gaining the necessary visibility into the cluster environment to effectively monitor and troubleshoot networking and security issues. Visibility and troubleshooting is one of the top three Kubernetes use cases that we see at Tigera. It’s especially critical in production deployments because downtime is expensive and distributed applications are extremely hard to troubleshoot. If you’re with the platform team, you’re under pressure to meet SLAs. If you’re on the DevOps team, you have production workloads you need to launch. For both teams, the common goal is to resolve the problem as quickly as possible.

Why Troubleshooting Kubernetes is Challenging

Since Kubernetes workloads are extremely dynamic, connectivity issues are difficult to resolve. Conventional network monitoring tools were designed for static environments. They don’t understand Kubernetes context and are not effective when applied to Kubernetes. Without Kubernetes-specific diagnostic tools, troubleshooting for platform teams is an exercise in frustration. For example, when a pod-to-pod connection is denied, it’s nearly impossible to identify which network security policy denied the traffic. You can manually log in to nodes and review system logs, but this is neither practical nor scalable.

You’ll need a way to quickly pinpoint the source of any connectivity or security issue. Or better yet, gain insight to avoid issues in the first place. As Kubernetes deployments scale up, the limitations around visibility, monitoring and logging can result in undiagnosed system failures that cause service interruptions and impact customer satisfaction and your business.

Flow Logs and Flow Visualization

For Rancher users who are running production environments, Calico Enterprise network flow logs provide a strong foundation for troubleshooting Kubernetes networking and security issues. For example, flow logs can be used to run queries to analyze all traffic from a given namespace or workload label. But to effectively troubleshoot your Kubernetes environment, you’ll need flow logs with Kubernetes-specific data like pod, label and namespace, and which policies accepted or denied the connection.
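As a concrete illustration of what Kubernetes-aware flow logs enable, the sketch below filters a handful of hypothetical flow-log records by source namespace and denied action. The field names and record shape are simplified assumptions for illustration, not the exact Calico Enterprise flow-log schema:

```python
# Hypothetical flow-log records with Kubernetes metadata (pod, namespace,
# action, and the policy that matched). Field names are assumptions.
FLOW_LOGS = [
    {"src_namespace": "frontend", "src_pod": "web-7d9",
     "dst_namespace": "backend", "dst_pod": "api-5fc",
     "action": "deny", "policy": "default-deny"},
    {"src_namespace": "frontend", "src_pod": "web-7d9",
     "dst_namespace": "backend", "dst_pod": "api-5fc",
     "action": "allow", "policy": "allow-frontend"},
]

def denied_flows(logs, namespace):
    """Return denied flows originating in the given namespace, with the
    policy that denied each one."""
    return [f for f in logs
            if f["src_namespace"] == namespace and f["action"] == "deny"]

for flow in denied_flows(FLOW_LOGS, "frontend"):
    print(f'{flow["src_pod"]} -> {flow["dst_pod"]} denied by {flow["policy"]}')
```

With Kubernetes metadata attached to every flow, answering "which policy denied this connection?" becomes a query rather than a manual hunt through node-level logs.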

Calico Enterprise Flow Visualizer

A large proportion of Rancher users are DevOps teams. While ITOps has traditionally managed network and security policy, we see DevOps teams looking for solutions that enable self-sufficiency and accelerate the CI/CD pipeline. For these users, Calico Enterprise includes a Flow Visualizer, a powerful tool that simplifies connectivity troubleshooting and offers a more intuitive way to interact with and drill down into network flows. DevOps can use it for troubleshooting and policy creation, while ITOps can establish a policy hierarchy using RBAC to implement guardrails so DevOps teams don’t override any enterprise-wide policies.

Firewalls Can Create a Visibility Void for Security Teams

Kubernetes workloads make heavy use of the network and generate a lot of east/west traffic. If you are deploying a conventional firewall within your Kubernetes architecture, you will lose all visibility into this traffic and the ability to troubleshoot. Firewalls don’t have the context required to understand Kubernetes traffic (namespace, pod, labels, container id, etc.). This makes it impossible to troubleshoot networking issues, perform forensic analysis or report on security controls for compliance.

To get the visibility they need, Rancher users can deploy Calico Enterprise to translate zone-based firewall rules into Kubernetes network policies that segment the cluster into zones and apply the correct firewall rules. Your existing firewalls and firewall managers can then be used to define zones and create rules in Kubernetes the same way all other rules have been created. Traffic crossing zones can be sent to the Security team’s security information and event management (SIEM), providing them with the same visibility for troubleshooting purposes that they would have received using their conventional firewall.
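A minimal sketch of what such a translation might produce: a Kubernetes NetworkPolicy, built here as a plain Python dict, that admits ingress to one zone only from pods labeled with an allowed zone. The `zone` label key and zone names are illustrative assumptions, not Calico Enterprise's actual rule format:

```python
def zone_policy(zone, allowed_zones):
    """Build a NetworkPolicy (as a dict) that only admits ingress traffic
    to pods in `zone` from pods labeled with one of `allowed_zones`."""
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": f"zone-{zone}-ingress"},
        "spec": {
            "podSelector": {"matchLabels": {"zone": zone}},
            "policyTypes": ["Ingress"],
            "ingress": [{
                "from": [{"podSelector": {"matchLabels": {"zone": z}}}
                         for z in allowed_zones]
            }],
        },
    }

# Example: the "trusted" zone accepts traffic only from the "dmz" zone.
dmz_to_trusted = zone_policy("trusted", ["dmz"])
```

Because the segmentation is expressed as label selectors rather than IP ranges, the same rule keeps working as pods are rescheduled across nodes.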

Other Kubernetes Troubleshooting Considerations

For Platform, Networking, DevOps and Security teams using the Rancher platform, Tigera provides additional visibility and monitoring tools that facilitate faster troubleshooting:

  • The ability to add thresholds and alarms to all of your monitored data. For example, a spike in denied traffic triggers an alarm to your DevOps team or Security Operations Center (SOC) for further investigation.
  • Filters that enable you to drill down by namespace or pod and view status (such as allowed or denied traffic)
  • The ability to store logs in an EFK (Elasticsearch, Fluentd and Kibana) stack for future accessibility

Whether you are in the early stages of your Kubernetes journey and simply want to understand the “why” of unexpected cluster behavior, or you are in large-scale production with revenue-generating workloads, having the right tools to effectively troubleshoot will help you avoid downtime and service disruption. During the upcoming Master Class, we’ll share troubleshooting tips and demonstrate some of the tools covered in this blog, including flow logs and Flow Visualizer.

Join our free Master Class: Enabling More Effective Kubernetes Troubleshooting on Rancher on May 7 at 1pm PT.


Privacy Protections, PCI Compliance and Vulnerability Management for Kubernetes

Wednesday, 8 April, 2020

Containers are becoming the new computing standard for many businesses. New technology does not protect you from traditional security concerns. If your containers handle any sensitive data, including personally identifiable information (PII), credit cards or accounts, you’ll need to take a ‘defense in depth’ approach to container security. The CI/CD pipeline is vulnerable at every stage, from build to ship to runtime.

In this article, we’ll look at best practices for protecting sensitive data and enforcing compliance, from vulnerability management to network segmentation. We’ll also discuss how NeuVector simplifies security, privacy and compliance throughout the container lifecycle for organizations using Rancher’s Kubernetes management platform.

Shift-Left Security

The DevOps movement is all about shifting left, and security is no different. The earlier security is built into the process, the better for both developers and the security team. The concept of security policy as code puts more control into developers’ hands while ensuring compliance with security mandates. Best practices include:

Comprehensive vulnerability management

Vulnerability detection and management throughout the CI/CD pipeline is essential. To prevent vulnerabilities from being introduced into registries, organizations should create policy-based build success/failure criteria. As a further safeguard, they should monitor and auto-scan all major registries such as AWS Elastic Container Registry (ECR), Docker Hub, Azure Container Registry (ACR) and JFrog Artifactory. And finally, they should automatically scan running containers and host OSes for vulnerabilities to prevent exploits and other attacks on critical business data. With an auto-scanning infrastructure in place, containers can be auto-quarantined based on vulnerability criteria.

Recommendation:

  • Scan the Rancher OS (or other OS)
  • Integrate and automate scanning with Jenkins plug-in or other build-phase scanning extensions, plus registry scanning
  • Employ admission control to prevent deployment of vulnerable images
  • Scan running containers and hosts for vulnerabilities, preventing ‘back-door’ vulnerable images
  • Protect running containers from vulnerability exploits with ‘virtual patching’ or other security controls to prevent unauthorized network or container behavior.
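The policy-based build success/failure criteria mentioned above can be sketched as a simple severity gate: the build fails when the scan report contains a finding at or above a chosen threshold. The report shape and severity levels here are assumptions, not the output format of any particular scanner:

```python
# Rank severities so they can be compared against a policy threshold.
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def build_passes(scan_report, fail_at="high"):
    """Return True only if every finding is below the fail threshold."""
    threshold = SEVERITY_RANK[fail_at]
    return all(SEVERITY_RANK[v["severity"]] < threshold
               for v in scan_report["vulnerabilities"])

report = {"vulnerabilities": [
    {"cve": "CVE-2020-0001", "severity": "medium"},
    {"cve": "CVE-2020-0002", "severity": "critical"},
]}
print(build_passes(report))  # False: the critical finding fails the gate
```

Wired into a Jenkins stage or an admission controller, the same predicate decides whether an image may be pushed to the registry or admitted to the cluster.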

Adherence to the Center for Internet Security Benchmarks for Kubernetes and Docker

The CIS benchmarks provide strong security auditing for container, orchestrator and host configurations to ensure that proper security controls are not overlooked or disabled. These checks should be run before containers are put into production, and continuously run after deployment, as updates and restarts can often change such critical configurations. Patching, updating and restarting hosts can also inadvertently open security holes that were previously locked down.

Recommendation:

  • Use CIS Scan in Rancher 2.4 to run CIS benchmarks for Rancher managed Kubernetes clusters and the containers running on them.
  • Augment CIS benchmarks with any customized auditing or compliance checks on hosts or containers which are required by your organization.

Privacy

Privacy is a critical component of many compliance standards. However, container environments raise PCI Data Security Standard (DSS) – and likely GDPR and HIPAA – compliance challenges in the areas of monitoring, establishing security controls and limiting the scope of the Cardholder Data Environment (CDE) with network segmentation. Due to the ephemeral nature of containers – spinning up and down quickly and dynamically, and often only existing for several minutes – monitoring and security solutions must be active in real-time and able to automatically respond to rapidly transforming attacks.

Because most container traffic is internal communication between containers, traditional firewalls and security systems designed to vet external traffic are blind to nefarious threats that may escalate within the container environment. And the use of containers can expand the CDE, requiring critical protections across the entire microservices environment unless its scope is limited by a container firewall able to fully visualize and tightly control it.

Recommendation:

  • Inspect network connections from containers within and exiting the Rancher cluster for unencrypted credit card or Personally Identifiable Information (PII) data using network DLP
  • Provide the required network segmentation for in-scope (CDE) traffic for application containers deployed by and run on Rancher
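The network DLP check recommended above boils down to pattern matching plus a checksum to weed out false positives. Below is a minimal sketch; the regex and payload format are illustrative, and a real DLP engine inspects live connection data rather than strings:

```python
import re

# Candidate primary account numbers: 13-16 digits, optionally separated
# by spaces or hyphens.
PAN_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum, used to filter out random digit runs."""
    digits = [int(d) for d in reversed(number)]
    total = sum(digits[0::2]) + sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

def contains_card_number(payload: str) -> bool:
    """True if the payload appears to carry an unencrypted card number."""
    for match in PAN_PATTERN.finditer(payload):
        if luhn_valid(re.sub(r"[ -]", "", match.group())):
            return True
    return False

print(contains_card_number("order=4111111111111111"))  # True: test Visa PAN
```

A flow that matches, yet is not traveling over an encrypted connection, is exactly the kind of event worth alerting on or blocking.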

Compliance (PCI, GDPR, HIPAA and More)

Containers and microservices are inherently supportive of PCI DSS compliance across several fronts. In an ideal microservices architecture, each service and container delivers a single function, which is congruent with the PCI DSS requirement to implement only a single primary function with each server. In the same way, containers provide narrow functionality by design, meeting the PCI DSS mandate to enable only necessary protocols and services.

One might think that physically separate container environments that are in-scope would resolve issues, but this can severely restrict modern automated DevOps CI/CD pipelines and result in slower release cycles and underused resources. However, cloud-native container firewalls are emerging which provide the required network segmentation without the sacrifice of the business benefits of containers.

Recommendation:

  • Deploy a cloud-native firewall to automate network segmentation required by compliance standards such as PCI.
  • Maintain forensic data, logs and notifications for security events and other changes.

How NeuVector Enhances Rancher Security

NeuVector extends Rancher’s capabilities to support and enforce PCI DSS, GDPR and HIPAA compliance requirements by auditing, monitoring and securing production deployments built on Rancher, including:

  • Providing a comprehensive vulnerability management platform integrated with Rancher admission controls and run-time visibility.
  • Enforcing network segmentation based on layer 7 application protocols, so that no unauthorized connections are allowed in or out of containers.
  • Enforcing that encrypted SSL connections are used for transmitting sensitive data between containers and for ingress/egress connections.
  • Monitoring all unencrypted connections for sensitive data and either alerting or blocking when detected.

The NeuVector container security platform is an end-to-end solution for securing the entire container pipeline from build to ship to run-time. The industry’s first container firewall performs automated network segmentation and container DLP by inspecting all container connections for sensitive data such as credit cards, PII and financial data. The screenshot below shows an example of unencrypted credit card data being transmitted between pods, as well as to an external destination.

Image 1

A container firewall solution provides network segmentation, network monitoring and encryption verification, meeting regulatory compliance requirements. PCI DSS requires network segmentation as well as encryption for in-scope CDE environments. The NeuVector container firewall provides the required network segmentation of CDE workloads, while at the same time monitoring for unencrypted cardholder data that would violate the compliance requirements. Such violations can be the first indication of a data breach, a misconfiguration of an application container or an innocent mistake, such as a customer support person pasting credit card data into a case.

Next Steps for Securing Your Container Infrastructure

For organizations transitioning to container infrastructure, it is important to recognize that security matters throughout the lifecycle of the container. Compliance and privacy regulations require protection of customers’ information wherever it resides on the organization’s network.

In this article, we looked at some of the ways that you can protect sensitive data and enforce compliance in your container infrastructure. To learn more, join us for our free Master Class: How to Automate Privacy Protections, PCI Compliance and Vulnerability Management for Kubernetes on May 5.


Transforming Telematics with Kubernetes and Rancher

Wednesday, 11 March, 2020

“As we extend our leadership position in Europe, it’s never been more important to put containers at the heart of our growth strategy. The flexibility and scale that Rancher brings is the obvious solution for high-growth companies like ours.” – Thomas Ornell, IT Infrastructure Engineer, ABAX

A Norwegian leader in fleet management, equipment and vehicle tracking, ABAX is one of Europe’s fastest-growing technology businesses. The company provides sophisticated fleet tracking, electronic mileage logs and equipment and vehicle control systems to more than 26,500 customers. ABAX manages over 250,000 active subscriptions that connect a variety of vehicles and industrial equipment.

The team recently signed an international deal with Hitachi to provide operational monitoring in Hitachi heavy machinery to help owners access operational data. ABAX saves customers millions of dollars every year by preventing the loss and theft of valuable machinery and equipment through granular monitoring of corporate fleet performance.

Thomas Ornell, an IT infrastructure engineer, has been priming ABAX’s infrastructure for significant growth over the past couple of years. Ornell and his team have transformed the company’s innovation strategy, putting containers, and Rancher, at the heart of bold expansion plans. Read our case study to find out how, with Rancher, ABAX is reducing testing time by 75 percent and recovery time by 90 percent.

Looking at how to get the most out of your Kubernetes deployments? Download our White Paper, How to Build an Enterprise Kubernetes Strategy.


Running Containers in AWS with Rancher

Tuesday, 10 March, 2020

READ OUR FREE WHITE PAPER:
How to Build an Enterprise Kubernetes Strategy

This blog will examine how Rancher improves the life of DevOps teams already invested in AWS’s Elastic Kubernetes Service (EKS) but looking to run workloads on-prem, with other cloud providers or, increasingly, at the edge. By reading this blog you will also discover how Rancher helps you escape the undeniable attractions of a vendor monoculture while lowering costs and mitigating risk.

AWS is the world’s largest cloud provider, with over a million customers and $7.3 billion in 2018 operating income. Our friends at StackRox recently showed that AWS still commands 78 percent market share despite the aggressive growth of rivals Microsoft Azure and Google Cloud Platform.

However, if you choose only AWS services for all your Kubernetes needs, you’re effectively locking yourself into a single vendor ecosystem. For example, by choosing Elastic Load Balancing for load distribution, AWS App Mesh for service mesh or AWS Fargate for serverless compute with EKS, your future is certain but not yours to control. It’s little wonder that many Amazon EKS customers look to Rancher to help them deliver a truly multi-cloud strategy for Kubernetes.

The Benefits of a Truly Multi-Cloud Strategy for Kubernetes

As discussed previously, multi-cloud has become the “new normal” of enterprise IT. But what does “multi-cloud” mean to you? Does it mean supporting the same vendor-specific Kubernetes distribution on multiple clouds? Wouldn’t that just swap out one vendor monoculture for another? Or does it mean choosing an open source management control plane that treats any CNCF-certified Kubernetes distribution as a first-class citizen, enabling true application portability across multiple providers with zero lock-in?

Don’t get me wrong – there are use cases where a decision-maker will see placing all their Kubernetes business with a single vendor as the path of least resistance. However, the desire for short-term convenience shouldn’t blind you to the inherent risks of locking yourself into a long-term relationship with just one provider. Given how far the Kubernetes ecosystem has come in the past six months, are you sure that you want to put down all your chips on red?

As with any investment, the prudent money should always go on the choice that gives you the most value without losing control. Given this, we enthusiastically encourage you to continue using EKS – it’s a great platform with a vast ecosystem. But remember to keep your options open – particularly if you’re thinking about deploying Kubernetes clusters as close as possible to where they’re delivering the most customer value – at the edge.

Kubernetes on AWS: Using Rancher to Manage Containers on EKS

If you’re going to manage Kubernetes clusters on multiple substrates – whether on AKS/GKE, on-prem or at the edge – Rancher enhances your container orchestration with EKS. With Rancher’s integrated workload management capabilities, you can allow users to centrally configure policies across their clusters and ensure consistent access. These capabilities include:

1) Role-based access control and centralized user authentication
Rancher enforces consistent role-based access control (RBAC) policies on EKS and any other Kubernetes environment by integrating with Active Directory, LDAP or SAML-based authentication. Centralized RBAC reduces the administrative overhead of maintaining user or group profiles across multiple platforms. RBAC also makes it easier for admins to meet compliance requirements and delegate administration of any Kubernetes cluster or namespace.
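Centralized RBAC ultimately materializes as ordinary Kubernetes RBAC objects on each managed cluster. The sketch below shows the kind of RoleBinding this produces, granting a directory group the built-in `edit` ClusterRole within a single namespace; the group and namespace names are hypothetical:

```python
def namespace_binding(group, namespace, role="edit"):
    """Build a RoleBinding (as a dict) granting an external identity
    group a built-in ClusterRole, scoped to one namespace."""
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "RoleBinding",
        "metadata": {"name": f"{role}-{group}", "namespace": namespace},
        "subjects": [{"kind": "Group", "name": group,
                      "apiGroup": "rbac.authorization.k8s.io"}],
        "roleRef": {"kind": "ClusterRole", "name": role,
                    "apiGroup": "rbac.authorization.k8s.io"},
    }

# Example: the "devops-team" AD group gets edit rights in "staging".
binding = namespace_binding("devops-team", "staging")
```

Defining bindings like this once, centrally, and applying them to every cluster is what keeps access consistent across EKS and any other environment.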

RBAC Controls in Rancher

2) One intuitive user interface for comprehensive control
DevOps teams can deploy and troubleshoot workloads consistently across any provider using Rancher’s intuitive web UI. If you’ve got team members new to Kubernetes, they can quickly learn to launch applications and wire them together at production level in EKS and elsewhere with Rancher. Your team members don’t need to know everything about a specific Kubernetes distribution or infrastructure provider to be productive.

Multi-cluster management with Rancher

3) Enhanced cluster security
Rancher admins and their security teams can centrally define how users should interact with Kubernetes and how containerized workloads should operate across all their infrastructures, including EKS. Once defined, these policies can be instantly assigned to any Kubernetes cluster.

Adding custom pod security policies

4) Global application catalog & multi-cluster apps
Rancher provides access to a global catalog of applications that work across multiple Kubernetes clusters, whatever their location. For enterprises running in a multi-cloud Kubernetes environment, Rancher reduces the load on operations teams while increasing productivity and reliability.

Selecting multi-cluster apps from Rancher’s catalog

5) Streamlined day-2 operations for multi-cloud infrastructure
Using Rancher to provision your Kubernetes clusters in a multi-cloud environment means your day-2 operations are centralized in a single pane of glass. Benefits to centralizing your operations include one-touch deployment of service mesh (upstream Istio), logging (Fluentd), observability (Prometheus and Grafana) and highly available persistent storage (Longhorn).

What’s more, if you ever decide to stop using Rancher, we provide a clean uninstall process for imported EKS clusters so that you can manage them independently. You’ll never know Rancher was there.

Next Steps

See how Rancher can help you run containers in AWS and enhance your multi-cloud Kubernetes strategy. Download the free whitepaper, A Guide to Kubernetes with Rancher.

READ OUR FREE WHITE PAPER:
How to Build an Enterprise Kubernetes Strategy


Migrate Your Windows 2003 Applications to Kubernetes

Tuesday, 18 February, 2020

Introduction

There’s no one-size-fits-all migration path for moving legacy Windows applications to the cloud. These applications typically reside on physical servers or virtual machines on premises. While the goal is generally to rearchitect or redesign an application to leverage cloud-native services, that’s not always the answer. Rearchitecting an existing application to a microservice or cloud-native architecture presents several challenges in terms of cost, complexity and application dependencies.

While there are major benefits to modernizing your application, many organizations still have existing services running on Windows Server 2003. Microsoft’s withdrawal of support for Windows Server 2003 presents several challenges. For one, it forces decisions about what to do with these applications, especially given that Windows Server 2008 end of life isn’t far off.

Organizations want to move to a modern architecture to gain increased flexibility, security and availability in their applications. This is where containers provide the flexibility to modernize applications and move them to cloud-native services. In this article, we’ll focus on applications that can move to containers: typically .NET, web, SQL and other applications that don’t depend on running only on Windows 2003. You can move these applications to containers without code changes, making them portable for the future. And you’ll get the benefit of running the containers on Kubernetes, which provides orchestration, availability, increased resiliency and density.

Note: not all applications or services can run in containers. There are still core dependencies for some applications which will need to be addressed, such as database and storage requirements. In addition, the business needs to decide on the ongoing life of the application.

Business Benefits of Moving to Kubernetes

There are some key business reasons for moving these applications to containers, including:

  • Return on Investment
  • Portability of older web-based services
  • Increased application security
  • Time for the business to re-evaluate existing applications

Now that Kubernetes supports Windows worker nodes, you can migrate legacy Windows applications to a modern architecture. Windows workers and Linux workers can co-exist within the same Kubernetes platform, allowing operations teams to use a common set of tools, practices and procedures.

Step 1: Analyze Your Move from Windows to Kubernetes

Migrating a legacy Windows application to Kubernetes requires a significant amount of analysis and planning. However, some key practices are emerging. These include:

  • Break the application down into its components to understand what is running, how it is running and its dependencies
  • Discover what services the application provides and what calls it makes in terms of data, network and interfacing
  • Decouple the data layer from the application
  • Determine and map service dependencies
  • Test, test and test again

Step 2: Plan Your Move from Windows to Kubernetes

Migrating your Windows application to a containerized .NET-based platform is a multi-step process that requires some key decisions. The following high-level process provides some guidance on the requirements to migrate legacy Windows systems to run on Kubernetes.

  • Determine what operating system your container needs — either Server Core or Nano Server. The application’s dependencies will dictate this choice.
  • Follow compatibility guidelines. Running Windows containers adds strict compatibility rules between the OS version of the host and the base image the container is running. Both must run Windows Server 2019 because the container and the underlying host share a single kernel. At the time of writing, only process isolation is supported. However, Hyper-V isolation is expected soon (timing unknown), which will relax the compatibility requirements between the host and the container.
  • Package your legacy application
  • Build out your initial Docker-based container with the application package
  • Deploy a new Docker container to a repository of your choice
  • Leverage existing DevOps toolsets (CI/CD build and release pipelines)
  • Deploy the new Windows Application to your Windows-supported Kubernetes environment
  • Test, test and test again
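The compatibility guideline in the steps above can be sketched as a simple build-number check: with process isolation, the container base image's Windows build must match the host's build, though the revision (fourth field) may differ. The version strings below are illustrative; 10.0.17763 corresponds to Windows Server 2019:

```python
def process_isolation_compatible(host_build: str, image_build: str) -> bool:
    """Windows process-isolated containers require the image's
    major.minor.build to match the host's; revision may differ."""
    return host_build.split(".")[:3] == image_build.split(".")[:3]

# Same Windows Server 2019 build, different revisions: compatible.
print(process_isolation_compatible("10.0.17763.1098", "10.0.17763.973"))   # True
# Different builds (e.g. a 1903-based image on a 2019 host): incompatible.
print(process_isolation_compatible("10.0.17763.1098", "10.0.18362.720"))   # False
```

Running a check like this in the CI pipeline before deployment catches image/host mismatches earlier than a failed pod scheduling event would.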

Key Outcomes of Moving Windows Applications to Kubernetes

By moving from Windows to Kubernetes, your legacy applications will share the benefits of your existing container-based applications. In addition, your Windows applications will benefit from the Kubernetes platform itself. What’s more, they can use additional tools and systems within the Kubernetes ecosystem, including security, service mesh, monitoring/alerting, etc.

Together, these benefits put you in a good position to make key decisions about your applications and develop a business use case. For applications that can’t be migrated, you still need to decide what to do with them, given the lack of support for the underlying operating system. Since no further patches or security remediations are available, your organization is exposed to vulnerabilities and exploits. So the time to act is now.

Key Takeaways for Migrating from Windows to Kubernetes

  • Container-based solutions provide cost savings.
  • Containers reduce dependencies and provide portability for applications.
  • While Docker is the de facto standard for running Containers, Kubernetes is the de facto container orchestration engine.
  • Kubernetes can host scalable, reliable and resilient Windows container-based applications alongside Linux-based applications.
  • Organizations running a Kubernetes platform can integrate the legacy applications into their DevOps culture and toolsets.
  • Leveraging native and ecosystem-based tools for Kubernetes increases security and adds extra layers of protection for legacy applications

More Kubernetes Resources

Want to learn more about strategy and planning a Kubernetes move? Get our white paper: How to Build an Enterprise Kubernetes Strategy.


Kubernetes DevOps: A Powerful Pair

Monday, 10 February, 2020

Kubernetes has seen an incredible rise over the past few years as organizations leverage containers for complex applications, micro-services and even cloud-native applications. And with the rise of Kubernetes, DevOps has gained more traction. While they may seem very different — one is a tool and the other is a methodology — they work together to help organizations deliver fast. This article explains why Kubernetes is essential to your DevOps strategy.

Google designed Kubernetes and then released it as open source to help alleviate problems in DevOps processes. The aim was to help with automation, deployment and agile methodologies for software integration and deployment. Kubernetes made it easier for developers to move from dev to production, making applications more portable and letting teams leverage orchestration. Developing on one platform and releasing quickly, through pipelines, to another platform showcased a level of portability that was previously difficult and cumbersome. This level of abstraction helped accelerate DevOps and application deployment.

What is DevOps?

DevOps brings typically siloed teams together – Development and IT Operations. DevOps promises to help teams work collectively and collaboratively to achieve business outcomes faster. Security is also an important part of the mix that should be included as part of the culture. With DevSecOps, three silos come together as “first-class citizens” working collaboratively to achieve the same outcome.

From a technology point of view, DevOps typically focuses on CI/CD (continuous integration and continuous delivery or continuous deployment). Here is a quick explanation:

Continuous integration: developers make constant updates to source code within a shared repository, which is then scanned and checked by an automated build, allowing teams to detect problems early.

Continuous deployment: once approved, code is released into production, resulting in many production deployments every day.

Continuous delivery: software is built and can be released at any time, but the release itself remains a manual step.

Quick Kubernetes Recap

As noted above, Google created Kubernetes and released a variation as open source to the general public. It is now one of the flagship products looked after by the Cloud Native Computing Foundation (CNCF). Different deployments of Kubernetes are available, including those from managed providers (AWS, Azure and GCP), Rancher RKE and others that can be built from scratch (Kubernetes the Hard Way by Kelsey Hightower).

Kubernetes allows organizations to run applications within containers in a distributed manner. It also handles scaling, resiliency and availability. Additionally, Kubernetes provides:

  • Load balancing
  • Ability to provide access to storage (persistent and non-persistent)
  • Service discovery
  • Automated rollouts, upgrades and rollbacks
  • Role-based access control (RBAC)
  • Security controls for running applications within the platform
  • Extensibility to leverage a large and growing ecosystem to support DevOps

The Kubernetes DevOps Connection

By now we can start to see a correlation between DevOps teams creating applications and running containers and needing an orchestration engine that keeps them running at scale. This is where Kubernetes and DevOps fit together. Kubernetes helps teams respond to customer demands without having to worry about the infrastructure layer – Kubernetes does this for them. The orchestration engine within Kubernetes takes over the once-manual tasks of deploying, scaling and building more resiliency into the applications; instead, it has the controls to manage this on the fly.

Kubernetes is essential for DevOps teams looking to automate, scale and build resiliency into their applications while minimizing the infrastructure burden. Letting Kubernetes manage an application’s scale and resiliency based on metrics, for example, allows developers to focus on new services instead of worrying whether the application can handle the additional requests during peak times. The following are key reasons why Kubernetes is essential to a DevOps team:

Deploy Everywhere. As noted previously, Kubernetes handles the ability to deploy an application anywhere without having to worry about the underlying infrastructure. This abstraction layer is one of the biggest advantages to running containers. Wherever deployed, the container will run the same within Kubernetes.

Infrastructure and Configuration as Code. Everything within Kubernetes is “as-code,” meaning both the infrastructure layer and the application are declarative, portable and stored in a source repository. Because the environment is defined as code, it can be versioned, reviewed, reproduced and kept under control automatically.

Hybrid. Kubernetes can run anywhere – on-premises, in the cloud or at the edge. It’s your choice, so you’re not locked into either an on-premises deployment or a cloud-managed deployment. You can have it all.

Open Standards. Kubernetes follows open-source standards, which increases your flexibility to leverage an ever-growing ecosystem of innovative services, tools and products.

Deployments with No Downtime. Because applications and services are deployed continuously throughout the day, Kubernetes offers different deployment strategies that reduce the impact on existing users while giving developers the ability to test in production (for example, a phased rollout or a blue-green deployment). Kubernetes also provides a rollback capability – should that be necessary.
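As a sketch of how this looks in practice (the field values are illustrative), a Deployment’s rolling-update strategy controls how old pods are replaced during a release:

```yaml
# Excerpt from a Deployment spec – gradual, zero-downtime replacement.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never drop below the desired replica count
      maxSurge: 1         # bring up one extra pod at a time with the new version
```

With these settings each new pod must become ready before an old one is removed, and a bad release can be reverted with `kubectl rollout undo deployment/<name>`.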

Immutability. This is one of the key characteristics of Kubernetes. The oft-used analogy, “cattle, not pets,” means that containers can (and should) be stopped, redeployed and restarted on the fly with minimal impact (naturally, the service the container provides may be briefly affected).

Conclusion: Kubernetes + DevOps = A Good Match

As you can see, the relationship between the culture of DevOps and the container orchestration tool Kubernetes is a powerful one. Kubernetes provides the mechanisms and the ecosystem for organizations to deploy applications and services to customers quickly. It also means that teams don’t have to build resiliency, scale, etc. into the application – they can trust that Kubernetes services will take care of that for them. The next phase is to integrate the large ecosystem surrounding Kubernetes (see the CNCF ecosystem landscape), thus, building a platform that is highly secure, available and flexible to allow organizations to serve their customers faster, more reliably and at greater scale.

More Resources

Read the white paper: How to Build an Enterprise Kubernetes Strategy.


5 lessons from the Lighthouse Roadshow in 2019

Thursday, 5 December, 2019

Having completed a series of twelve Lighthouse Roadshow events across Europe and North America over the past six months, I’ve had time to reflect on what I’ve learnt about the rapid growth of the Kubernetes ecosystem, the importance of community and my personal development.

For those of you who haven’t heard of the Lighthouse series before, Rancher Labs first ran this roadshow in 2018 with Google, GitLab and Aqua Security. The theme was ‘Building an Enterprise DevOps Strategy with Kubernetes’. After selling out six venues across North America, I felt that its success could be repeated in Europe. We tested this theory in May by running the first 2019 Lighthouse in Amsterdam with Microsoft, GitHub and Aqua Security. The event sold out in just two weeks, and we had to move to a larger venue downtown to accommodate a growing waiting list.

Bas Peters from GitHub at Lighthouse Amsterdam – 16th May 2019

After the summer vacation period, the European leg of the Lighthouse re-started in earnest with events in Munich and Paris on consecutive days. The Paris event turned out to be the largest of the roadshow. Held at Microsoft’s magnificent Paris HQ, we packed their main auditorium with almost 300 delegates. In the weeks that followed the Lighthouse team also visited Copenhagen, London, Oslo, Helsinki, Stockholm and, finally, Dublin. Not to be outdone, Rancher’s US team organised a further three Lighthouse events with partners Amazon, GitLab, and Portworx during November.

Now home, sitting at my desk and reflecting on the lessons learnt, I’ve distilled them down to the following:

Focus on context not product pitches

Organizing the content for so many consecutive events with many different speakers was a significant challenge. We had a mix of salespeople, tech evangelists, consultants and field sales engineers presenting. The speakers who received the best response (and exchanged the most business cards during the coffee breaks) were always those who delivered insight into the context in which their products exist. I share this lesson because I want to encourage those running similar events in this space to understand the value of insight. This is particularly true if you work for a company that doesn’t charge anything for its technology. In a market where there are no barriers to adopting software, the only way you can genuinely differentiate is through the quality of the story you tell and the expert insight you deliver.

Alain Helaili from GitHub at Lighthouse Paris – 11th Oct 2019

Interest in Kubernetes is exploding

Of the almost 3000 IT professionals who registered for the roadshow globally, more than half are already using Kubernetes in production. So, what makes the excitement around Kubernetes different from previous hype-cycles? I would contend there are two principal differences:

  1. Low barrier to entry – Kubernetes takes minutes to install on-prem or in the cloud. I regularly see enthusiastic sales and marketing people launching their first cluster in the public cloud. Compare that to something like OpenStack which, despite the existence of a variety of installers on the market, is hellish to get up and running. Unless you have access to skilled consultants from the beginning, the technical bar is set so high that only the most sophisticated teams can be successful.
  2. Mature and proven – Kubernetes has, in one form or another, been around for over ten years orchestrating containers in the world’s largest IT infrastructures. Google introduced Borg around 2004. Borg was a large-scale internal cluster management system, which ran many thousands of different applications across many clusters, each with up to tens of thousands of machines. In 2014 the company released Kubernetes as an open-source version of Borg. Since then, hundreds of thousands of enterprises have deployed Kubernetes into production, with all the public clouds now offering managed varieties of their own. Google rightly concluded that a rising tide would float all ships (and use more cloud compute!). Today Kubernetes is mature, proven and used everywhere. Sadly, you can’t say the same about OpenStack.

Yours truly opening proceedings at Lighthouse Munich – 10th Oct 2019

Enterprises are still asking the same questions

While the adoption of Kubernetes is undeniably the most significant phenomenon in IT operations since virtualization, those enterprises that are considering it are asking the same questions as before:
1. Who should be responsible for it?
2. How does it fit into our cloud strategy?
3. How do we tie it into our existing services?
4. How do we address security?
5. How do we encourage broader adoption?

In what is still a relatively nascent market, it’s challenging questions like these that Kubernetes advocates need to answer transparently and in person if they are to be taken seriously. The stakes are high for early adopters, and they need assurance that the advice you offer is real, tangible and trusted by others. That’s why we created the Lighthouse Roadshow.

Olivier Maes from Rancher Labs at Lighthouse Copenhagen – 31st Oct 2019

Community matters

Unless the ecosystem around new technology is open and well-governed, it will die. Companies or individuals that reject community members as freeloaders are consigning themselves to irrelevance. You can always find some people who are willing to jump through the hoops of licensing management or lock themselves into a single vendor. Still, most of today’s B2B tech consumers are looking to make their choices based on third-party validation. Community members may not pay for your software, but they contribute to your growth by endorsing your brand and sharing their own success stories.

The Lighthouse Roadshow is 100% community driven. We’re not interested in making a profit from ticket sales, preferring instead to see how well our stories resonate with delegates. The more insight delivered, the more successful the event. The feedback from each of the Lighthouse venues has been hugely rewarding, and the opportunities for growth have been incalculable. We couldn’t have achieved this if we just measured our success by tracking the conversion rate of delegate numbers to MQLs and closed-won opportunities.

Steve Giguere from Aqua Security at Lighthouse London – 8th Nov 2019

Surrounding yourself with talent makes you better

It’s widely known that one of the best ways to improve a skill is to practice it with someone better than you. During the Lighthouse Roadshow I had the unique privilege of attending every European event and listening to every talk, sometimes multiple times. The skills and knowledge of the speakers, and the professionalism of the event staff who helped us, were simply amazing.

I’m particularly grateful to my fantastic colleagues at Rancher Labs – Lujan Fernandez, Abbie Lightowlers, Olivier Maes, Tolga Fatih Erdem, Jeroen Overmaat, Elimane Prud’hom, Nick Somasundram, Simon Robinson, Chris Urwin, Sheldon Lo-A-Njoe, Jason Van Brackel, Kyle Rome and Peter Smails. I’ve also been fortunate to work alongside rockstars from partner companies like Steve Giguere, Grace Cheung and Jeff Thorne at Aqua Security; Bas Peters, Richard Erwin and Anne-Christa Strik at GitHub; and Bozena Crnomarkovic Verovic, Dennis Gassen, Shirin Mohammadi, Maxim Salnikov, Sherry List, Drazen Dodik, Tugce Coskun, Anna-Victoria Fear, Juarez Junior and many others from Microsoft; Alex Diaz and Patrick Brennan from Portworx; Carmen Puccio from Amazon; and Dan Gordon from GitLab. I can’t help but feel inspired by all these fantastic people.

By the time we finished in Dublin, I felt invigorated and filled with new ideas. Looking back, I know that listening and sharing with these brilliant folks has encouraged me to step up my own game.

More Resources

Want to know more about how to build an enterprise Kubernetes strategy? Download our eBook.


Windows Containers and Rancher 2.3

Tuesday, 8 October, 2019

Container technology is transforming the face of business and application development. 70% of on-premises workloads today are running on the Windows Server operating system, and enterprise customers are looking to modernize these workloads and make use of containers.

We have introduced support for Windows Containers in Windows Server 2016 and graduated support for Windows Server worker nodes in Kubernetes 1.14 clusters. With Windows Server 2019 we have expanded support in Kubernetes 1.16.

For our customers, one of the preferred ways to increase the adoption of containers and Kubernetes is to make it easier for operators to deploy and for developers to use.

Toward that end, Microsoft has invested in AKS and Windows container support while working with partners such as Rancher Labs, which has built its organization on the principle of “Run Kubernetes Everywhere.”

With the release of Rancher 2.3, Rancher is the first platform to graduate Windows support to GA, and users can now deploy Kubernetes clusters with Windows support directly from within the Rancher user interface.

Using Rancher 2.3, users can deploy Windows Kubernetes clusters on AKS, on Azure, on any other cloud provider or on-premises, using the supported and proven network components in Windows Server as well as Kubernetes.

Rancher 2.3 supports Flannel as the CNI plugin, with VXLAN overlay networking to enable communication between Windows and Linux containers, services and applications.
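For reference, this pairing is typically expressed in the cluster configuration. The following is a sketch based on RKE’s cluster.yml conventions – the option names are my assumption, so verify them against the Rancher documentation for your version:

```yaml
# cluster.yml (excerpt) – assumed RKE option names; check your version's docs.
network:
  plugin: flannel
  options:
    flannel_backend_type: vxlan   # VXLAN overlay so Windows and Linux pods can communicate
```

With VXLAN as the Flannel backend, pod traffic is encapsulated at Layer 2 over the existing network, which is what allows mixed Windows/Linux clusters to share one flat pod network.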

Learn more about Rancher 2.3 and its functionality.


Introducing Rancher 2.3: The Best Gets Better

Tuesday, 8 October, 2019

Today we are excited to announce the general availability of Rancher 2.3,
the latest version of our flagship product. Rancher, already the
industry’s most widely adopted Kubernetes management platform, adds
major new features with v2.3, including:

  • Industry’s first generally available support for Windows containers, bringing the benefits of Kubernetes to Windows Server applications
  • Introduction of cluster templates for secure, consistent deployment of clusters in large-scale deployments
  • Simplified installation and configuration of the Istio service mesh

These new capabilities strengthen our Run Kubernetes Everywhere strategy
by enabling an even broader range of enterprises to leverage the
transformative power of Kubernetes.

Bringing the Benefits of Kubernetes to Windows Server Applications

Today, 70% of on-premises workloads are running on the Windows Server
operating system, and in March of this year, Windows Server Container
support was built into the release of Kubernetes v1.14.

Not surprisingly, Windows containers have been one of the most desired technologies within the Kubernetes ecosystem in recent years. We are proud to be partnering with Microsoft on this launch and are excited to be the first Kubernetes management platform to deliver GA support for Windows Containers and Kubernetes with Windows worker nodes! To get Microsoft’s perspective on Rancher 2.3, check out this blog from Mike Kostersitz, Principal Program Manager at Microsoft.

By bringing all the benefits of Kubernetes to Windows, Rancher 2.3 eases
complexity and provides a fast and straightforward path for modernizing
legacy Windows-based applications, regardless of whether they will run
on-premises or in a multi-cloud environment. Alternatively, Rancher 2.3
can eliminate the need to go through the process of rewriting
applications by containerizing and transforming them into efficient,
secure and portable multi-cloud applications.

Windows Workloads

Secure, Consistent Deployment of Kubernetes Clusters with Cluster Templates

With most businesses managing multiple clusters at any one time,
security is a key priority for all organizations. Cluster templates help
organizations reduce risk by enabling them to enforce consistent cluster
configurations across their entire infrastructure. Specifically, with
cluster templates:

  • Operators can create, save, and confidently reuse well-tested Kubernetes configurations across all their cluster deployments.
  • Administrators can enable configuration enforcement, thereby eliminating configuration drift and misconfigurations which, left unchecked, can introduce security risks as more clusters are created.

Cluster Templates

Additionally, admins can scan existing Kubernetes clusters against industry benchmarks like CIS and NIST to identify and report on insecure cluster settings and facilitate a plan for remediation.

Tighter Integration with the Leading Service Mesh Solution

A big part of Rancher’s value is its rich ecosystem catalogue of
Kubernetes services, including service mesh. Istio, the leading service
mesh, eliminates the need for developers to write specific code to enable
key Kubernetes capabilities like fault tolerance, canary rollouts,
A/B testing, monitoring and metrics, tracing and observability, and
authentication and authorization.
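As one example of what this removes from application code, a canary rollout in Istio becomes a matter of weighted routing rather than custom logic. A minimal sketch (the service name and subsets are illustrative, and it assumes a matching DestinationRule defines the v1/v2 subsets):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews            # illustrative service name
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90       # 90% of traffic stays on the current version
        - destination:
            host: reviews
            subset: v2
          weight: 10       # 10% canary traffic goes to the new version
```

Shifting more traffic to v2 is then just an edit to the weights – no application redeploy required.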

Rancher 2.3 delivers simplified installation and configuration of
Istio including:

  • Kiali dashboards for traffic and telemetry visualization
  • Jaeger for tracing
  • Prometheus and Grafana for observability

Istio

Rancher 2.3 also introduces support for Kubernetes v1.15.x and Docker
19.03. Getting started with Rancher v2.3 is easy. See our documentation for instructions on how to be up and running in a flash.

Our Momentum Continues

Rancher 2.3 is just the latest proof point of our momentum in 2019.
Other highlights include:

  • 161 percent year-on-year revenue growth, community growth to more than 30,000 active users, and software downloads surpassing 100 million
  • Rancher was named a leader in the Forrester New Wave™: Enterprise Container Platform Software Suites
  • Rancher was included in five Gartner Hype Cycles in 2019
  • Rancher was recognized by 451 Research as a Firestarter in Q3 2019

And, maybe the best part of the story is that we have more exciting news coming very soon! Stay tuned to our blog to learn more.

We also look forward to seeing everyone at KubeCon 2019 in San Diego, California. Come to booth P19 to talk with us or get a personalized demo.
