What Do People Love About Rancher?

Thursday, February 28, 2019
Read the Guide to Kubernetes with Rancher
This guide shows the challenges in running Kubernetes in production and how Rancher helps.

More than 20,000 environments have chosen Rancher as the solution to make the Kubernetes adventure painless in as many ways as possible. More than 200 businesses across finance, health care, military, government, retail, manufacturing, and entertainment verticals engage with Rancher commercially because they recognize that Rancher simply works better than other solutions.

Why is this? Is it really about one feature set versus another feature set, or is it about the freedom and breathing room that come from having a better way?

A Tale of Two Houses

Imagine that you’re walking down a street, and each side of the street is lined with houses. The houses on one side were constructed over time by different builders, and you can see that although every house contains walls, a floor, a roof, doors, and windows, they’re all completely different. Some were built from custom plans, while others were modified over time by the owner to fit a personal need.

You see a person working on his house, and you stop to ask him about the construction. You learn that the company that built his house did so with special red bricks that only come from one place. He paid a great deal of money to import the bricks and have the house built, and he beams with pride as he tells you about it.

“It’s artisanal,” he tells you. “The company who built my house is one of the biggest companies in the world. They’ve been building houses for years, so they know what they’re doing. My house only took a month to build!”

“What if you want to expand?” You point to other houses on his side of the street. “Does the builder come out and do the work?”

“Nope! I decide what I want to build, and then I build it. I like doing it this way. Being hands-on makes me feel like I’m in control.”

Your gaze moves to the other side of the street, where the houses were built by following a different strategy. Each house has an identical core, and where an owner made customizations, each house with that customization has it constructed the same.

You see a man outside of one of the houses, relaxing on his porch and drinking tea. He waves at you, so you walk over and strike up a conversation with him.

“Can you tell me about your house?”

“My house?” He smiles at you. “Sure thing! All of the houses on this side were built by one company. They use pre-fabricated components that are built off-site, brought in and assembled. It only takes a day to build one!”

“What about adding rooms and other features?”

“It’s easy,” he replies. “The company has a standard interface for rooms, terraces, and any other add-on. When I want to expand, I just call them, and they come out and connect the room. Everything is pre-wired, so it goes in and comes online almost as fast as I can think of it.”

You ask if he had to do any extra work to connect to public utilities.

“Not at all!” he exclaims. “There’s a panel inside where I can choose which provider I want to connect to. I just had to pick one. If I want to change it in the future, I make a different selection. The house lets me choose everything – lawn care service provider, window cleaner, painter, everything I need to make the house liveable and keep it running. I just go to the panel, make my choice, and then go back to living.

“And best of all, my house was free.”

Rancher Always Works For You

Rancher Labs has designed Rancher to handle the heaviest tasks involved in building and maintaining Kubernetes clusters.

Easily Launch Secure Clusters

Let’s start with the installation. Are you installing on bare metal? Cloud instances? Hosted provider? A mix? Do you want to give others the ability to deploy their own clusters, or do you want the flexibility to use multiple providers?

Maybe you just want to use AWS or GCP, so multiple providers aren’t a big deal. Flexibility is still important, though: your requirements today might be different in a month or a year.

With Rancher you can simply fire up a new cluster in another provider and begin migrating workloads, all from within the same interface.

Global Identity and RBAC

Whether you’re using multiple providers or not, the normal way of configuring access to a single cluster in one provider requires work. Access control policies take time to configure and maintain, and generally, once provisioned, are forgotten. If using multiple providers, it’s like learning multiple languages. Russian for AWS, Swahili for Google, Flemish for Azure, Uzbek for DigitalOcean or Rackspace…and if someone leaves the organization, who knows what they had access to? Who remembers how to speak Latin?

Rancher connects to backend identity providers, and from a global configuration it applies roles and policies to all of the clusters that it manages.

When you can deploy and manage multiple clusters as easily as you can a single one, and when you can do so securely, then it’s no big deal to spin up a cluster for UAT as part of the CI/CD test suite. It’s trivial to let developers have their own cluster to work on. You could even let multiple teams share one cluster.

Solutions for Cluster Multi-Tenancy

How do you keep people from stepping on each other?

You can use Kubernetes Namespaces, but provisioning Roles across multiple Namespaces is tedious. Rancher collects Namespaces into Projects and lets you map Roles to the Project. This creates single-cluster multi-tenancy, so now you can have multiple teams, each only able to interact with their own Namespaces, all on the same cluster. You can have a dev/staging environment built exactly like production, and then you can easily get into the CD part of CI/CD.

Tools for Day Two Operations

What about all of the add-on tools? Monitoring. Alerts. Log shipping. Pipelines. You could provision and configure all of this yourself for every cluster, but it takes time. It’s easy to do wrong. It requires skills that internal staff may not have – do you want your staff learning all of the tools above, or do you want them focusing on business initiatives that generate revenue? To put it another way, do you want to spend your day spinning copper wire to connect to the phone system, or would you rather press a button and be done with it?

Rancher ships with tools for monitoring your clusters, dashboards for visualizing metrics, an engine for generating alerts and sending notifications, and a pipeline system to enable CI/CD for those not already using an external system. With a click, it ships logs off to Elasticsearch, Kafka, Fluentd, Splunk, or syslog.

Designed to Grow With You

The more a Kubernetes solution scales (the bigger or more complicated it gets), the more important it is to have fast, repeatable ways to do things. What about using tools like Ansible, Terraform, kops, or kubespray to launch clusters? They stop once the cluster is launched. If you want more, you have to script it yourself, and this adds a dependency on an internal asset to maintain and support those scripts. We’ve all been at companies where the person with the special powers left, and everyone who stayed had to scramble to figure out how to keep everything running. Why go down that path when there’s a better way?

Rancher makes everything related to launching and managing clusters easy, repeatable, fast, and within the skill set of everyone on the team. It spins up clusters reliably in any provider in minutes, and then it gives you a standard, unified interface for interacting with those clusters via UI or CLI. You don’t need to learn each provider’s nuances. You don’t need to manage credentials in each provider. You don’t need to create configuration files to add the clusters to monitoring systems. You don’t need to do a bunch of work on the hosts before installing Kubernetes. You don’t need to go to multiple places to do different things – everything is in one place, easy to find, and easy to use.

No Vendor Lock-In

This is significant. Companies who sell you a Kubernetes solution have a vested interest in keeping you locked to their platform. You have to run their operating system or use their facilities. You can only run certain software versions or use certain components. You can only buy complementary services from vendors they partner with.

Rancher Labs believes in something different. They believe that your success comes from the freedom to choose what’s best for you. Choose the operating system that you want to use. Build your systems in the provider you like best. If you want to build in multiple providers, Rancher gives you the tools to manage them as easily as you manage one. Use any provisioner.

What Rancher accelerates is the time between your decision to do something and when that thing is up and running. Rancher gets you out the gate and onto the track faster than any other solution.

The Wolf in a DIY Costume

Those who say that they want to “go vanilla” or “DIY” are usually looking at the cost of an alternative solution. Rancher is open source and free to use, so there’s no risk in trying it out and seeing what it does. It will even uninstall cleanly if you decide not to continue with it.

If you’re new to Kubernetes or if you’re not in a hands-on, in-the-trenches role, you might not know just how much work goes into correctly building and maintaining a single Kubernetes cluster, let alone multiple clusters. If you go the “vanilla Kubernetes” route with the hope that you’ll get a better ROI, it won’t work out. You’ll pay for it somewhere else, either in staff time, additional headcount, lost opportunity, downtime, or other places where time constraints interfere with progress.

Rancher takes all of the maintenance tasks for clusters and turns them into a workflow that saves time and money while keeping everything truly enterprise-grade. It will do this for single and multi-cluster Kubernetes environments, on-premise or in the cloud, for direct use or for business units offering Kubernetes-as-a-service, all from the same installation. It will even import the Kubernetes clusters you’ve already deployed and start managing them.

Having more than 20,000 deployments in production is something that we’re proud of. Being the container management platform for mission-critical applications in over 200 companies across so many verticals also makes us proud.

What we would really like is to have you be part of our community.

Join us in showing the world that there’s a better way. Download Rancher and start living in the house you deserve.


Kubernetes in the Region: Observations and an Offer

Tuesday, February 19, 2019

Find a Rodeo workshop near you
Rancher Rodeos are free, in-depth workshops where you can learn to deploy containers and Kubernetes in production.

Since joining Rancher Labs to head up the Australia, New Zealand, and Singapore region, my day revolves around discussing containers/Kubernetes use cases and adoption with many of the top enterprises, DevOps groups, and executives in the area. Not only is this a great learning experience and a fantastic way to meet people, it is also a huge eye opener into the many reasons why Kubernetes adoption is growing so rapidly and what the current challenges are. I want to quickly share some of my observations and make an offer for you to join us for some free hands-on training.

Some Observations

Everyone is Doing Something with Kubernetes

It doesn’t matter which event, meetup, or customer discussion I’m in — every enterprise is doing something with Kubernetes. It’s like the adoption of virtualization, only the discussion is slightly different. It’s not so much about which vendor or standard — Kubernetes is the focus. Instead, it’s about how to do Kubernetes and what are the associated best practices, scalable architectures, and security considerations.

Kubernetes Native, but How to Do It at Operational Scale?

The community and ecosystem around Kubernetes are growing every day, with strong capabilities, so there is a strong desire to stay on “native” Kubernetes and not get sucked down a branch, fork, or vendor-specific offshoot of Kubernetes. It seems that most enterprises and groups begin this way and get into production with Kubernetes. However, there is a clear point at which scale becomes an operational challenge and basic tooling must be supplemented to help manage multiple Kubernetes namespaces, multiple clusters, authentication, RBAC, policy, monitoring, and logging across many development teams.

It’s About Consuming Kubernetes, Not “Making” Kubernetes

Nobody wants to be in the business of creating Kubernetes snowflakes, or be in the business of allocating their resources to do work that adds no value. There is a learning curve for operationalizing Kubernetes, using Kubernetes, and deploying workloads into Kubernetes environments. Many enterprises are looking for ways to eliminate the learning curve or the need for specialized skills and instead just consume Kubernetes, using a Kubernetes-as-a-Service model. Much larger and faster gains can be made if consuming Kubernetes becomes the focus instead of making Kubernetes.

Both On-Premise and Public Cloud Kubernetes

As enterprises grow, iterate, and merge, an ever-increasing mixture of infrastructure environments and needs emerges. The same enterprise may create Kubernetes clusters using on-premise bare metal, with OpenStack and VMware-type infrastructures, as well as out on public clouds using Amazon, Google, Azure, Alibaba, and others. The portability and rapid pace of containers lend themselves to these hybrid or multi-cloud scenarios (more so than VMs), and container use is quite quickly sprawling in this way. There is also quite an urgent need for air-gapped Kubernetes environments.

Public Cloud Kubernetes Providers

Most enterprises are now seriously looking at the Kubernetes services offered by public cloud providers, like EKS (now available in Australia & Singapore), GKE, and AKS. These are viable options and really do support some of the notions mentioned in my other observations, like consumability. Technical discussions here become much less about the Kubernetes cluster control planes and architecture, and more about integration of these clusters into enterprise management capabilities like authentication domains, security models, deployment pipelines, and multi-cloud strategies (e.g. on-premise or multiple public clouds).

Our Offer

We run free, half-day training sessions called Rancher Rodeos throughout the world. Among others, this month we have Rodeos in Sydney, Melbourne, and Singapore (registration for Singapore is not open yet). During these sessions, DevOps and IT professionals can get hands-on experience with how to quickly deploy an enterprise-ready Kubernetes environment on any infrastructure or cloud provider (or multiples of these) using Rancher. We will show how Rancher helps make enterprise Kubernetes consumable and native, with rapid results for development and infrastructure teams.

Please take us up on the offer, register here, and join us!


Introduction to Kubernetes Namespaces

Monday, January 28, 2019
Expert Training in Kubernetes and Rancher
Join our free online training sessions to learn more about Kubernetes, containers, and Rancher.

Introduction

Kubernetes clusters can manage large numbers of unrelated workloads concurrently, and organizations often choose to deploy projects created by separate teams to shared clusters. Even with relatively light use, the number of deployed objects can quickly become unmanageable, slowing down operational responsiveness and increasing the chance of dangerous mistakes.

Kubernetes uses a concept called namespaces to help address the complexity of organizing objects within a cluster. Namespaces allow you to group objects together so you can filter and control them as a unit. Whether applying customized access control policies or separating all of the components for a test environment, namespaces are a powerful and flexible concept for handling objects as a group.

In this article, we’ll discuss how namespaces work, introduce a few common use cases, and cover how to use namespaces to manage your Kubernetes objects. Towards the end, we’ll also take a look at a Rancher feature called projects that builds on and extends the namespaces concept.

What are Namespaces and Why Are They Important?

Namespaces are the organizational mechanism that Kubernetes provides to categorize, filter by, and manage arbitrary groups of objects within a cluster. Each workload object added to a Kubernetes cluster must be placed within exactly one namespace.

Namespaces impart a scope for object names within a cluster. While names must be unique within a namespace, the same name can be used in different namespaces. This can have some important practical benefits for certain scenarios. For example, if you use namespaces to segment application life cycle environments — like development, staging, and production — you can maintain copies of the same objects, with the same names, in each environment.

Namespaces also allow you to easily apply policies to specific slices of your cluster. You can control resource usage by defining ResourceQuota objects, which set limits on consumption on a per-namespace basis. Similarly, when using a CNI (container network interface) that supports network policies on your cluster, like Calico or Canal (Calico for policy with flannel for networking), you can apply a NetworkPolicy to the namespace with rules that dictate how pods can communicate with one another. Different namespaces can be given different policies.
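As a quick sketch of what that looks like in practice (the namespace name and the limits are illustrative, not prescriptive), a ResourceQuota and a NetworkPolicy that restricts ingress to pods in the same namespace might be defined like this:

# team-a-policies.yml -- illustrative quota and network policy for one namespace
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"        # total CPU all pods in the namespace may request
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"               # cap on the number of pods
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: team-a
spec:
  podSelector: {}            # selects every pod in the namespace
  ingress:
    - from:
        - podSelector: {}    # only pods from this same namespace may connect

Applying the file with kubectl apply -f team-a-policies.yml caps the namespace’s total resource consumption and keeps its pods from receiving traffic from other namespaces.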

One of the greatest benefits of using namespaces is being able to take advantage of Kubernetes RBAC (role-based access control). RBAC allows you to develop roles, which group a list of permissions or abilities, under a single name. ClusterRole objects exist to define cluster-wide usage patterns, while the Role object type is applied to a specific namespace, giving greater control and granularity. Once a Role is created, a RoleBinding can grant the defined capabilities to a specific user or group of users within the context of a single namespace. In this way, namespaces let cluster operators map the same policies to organized sets of resources.
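For instance, here is a minimal sketch (the namespace, role, and user names are hypothetical) of a Role that grants read-only access to pods, and a RoleBinding that grants it to a single user within one namespace:

# pod-reader-rbac.yml -- hypothetical names for illustration
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: team-a
rules:
  - apiGroups: [""]             # "" is the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: team-a
subjects:
  - kind: User
    name: jane                  # a user known to your identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

The same Role definition would have to be repeated in every other namespace that needs it, which is exactly the tedium that Rancher projects (discussed later) help eliminate.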

Common Namespace Usage Patterns

Namespaces are an incredibly flexible feature that doesn’t impose a specific structure or organizational pattern. That being said, there are some common patterns that many teams find useful.

Mapping Namespaces to Teams or Projects

One convention to use when setting up namespaces is to create one for each discrete project or team. This melds well with many of the namespace characteristics we mentioned earlier.

By giving a team a dedicated namespace, you can allow self-management and autonomy by delegating certain responsibilities with RBAC policies. Adding and removing members from the namespace’s RoleBinding objects is a simple way to control access to the team’s resources. It is also often useful to set resource quotas for teams and projects. This way, you can ensure equitable access to resources based on the organization’s business requirements and priorities.

Using Namespaces to Partition Life Cycle Environments

Namespaces are well suited for carving out development, staging, and production environments within a cluster. While it is recommended to deploy production workloads to an entirely separate cluster to ensure maximum isolation, for smaller teams and projects, namespaces can be a workable solution.

As with the previous use case, network policies, RBAC policies, and quotas are big factors in why this can be successful. The ability to isolate the network to control communication to your components is a fundamental requirement when managing environments. Likewise, namespace-scoped RBAC policies allow operators to set strict permissions for production environments. Quotas help you guarantee access to important resources for your most sensitive environments.

The ability to reuse object names is also helpful here. Objects can be promoted to new environments as they are tested and released while retaining their original names. This helps avoid confusion around which objects are analogous across environments and reduces cognitive overhead.

Using Namespaces to Isolate Different Consumers

Another use case that namespaces can help with is segmenting workloads by their intended consumers. For instance, if your cluster provides infrastructure for multiple customers, segmenting by namespace allows you to manage each independently while keeping track of usage for billing purposes.

Once again, namespace features allow you to control network and access policies and define quotas for your consumers. In cases where the offering is fairly generic, namespaces allow you to develop and deploy a different instance of the same templated environment for each of your users. This consistency can make management and troubleshooting significantly easier.

Understanding the Preconfigured Kubernetes Namespaces

Before we take a look at how to create your own namespaces, let’s discuss what Kubernetes sets up automatically. By default, three namespaces are available on new clusters:

  • default: Adding an object to a cluster without providing a namespace will place it within the default namespace. This namespace acts as the main target for new user-added resources until alternative namespaces are established. It cannot be deleted.
  • kube-public: The kube-public namespace is intended to be globally readable to all users with or without authentication. This is useful for exposing any cluster information necessary to bootstrap components. It is primarily managed by Kubernetes itself.
  • kube-system: The kube-system namespace is used for Kubernetes components managed by Kubernetes. As a general rule, avoid adding normal workloads to this namespace. It is intended to be managed directly by the system and as such, it has fairly permissive policies.

While these namespaces effectively segregate user workloads from the system-managed workloads, they do not impose any additional structure to help categorize and manage applications. Thankfully, creating and using additional namespaces is very straightforward.

Working with Namespaces

Managing namespaces and the resources they contain is fairly straightforward with kubectl. In this section we will demonstrate some of the most common namespace operations so you can start effectively segmenting your resources.

Viewing Existing Namespaces

To display all namespaces available on a cluster, use the kubectl get namespaces command:

kubectl get namespaces
NAME            STATUS    AGE
default         Active    41d
kube-public     Active    41d
kube-system     Active    41d

The command will show all available namespaces, whether they are currently active, and the resource’s age.

To get more information about a specific namespace, use the kubectl describe command:

kubectl describe namespace default
Name:         default
Labels:       field.cattle.io/projectId=p-cmn9g
Annotations:  cattle.io/status={"Conditions":[{"Type":"ResourceQuotaInit","Status":"True","Message":"","LastUpdateTime":"2022-11-17T23:17:48Z"},{"Type":"InitialRolesPopulated","Status":"True","Message":"","LastUpda...
              field.cattle.io/projectId=c-7tf7d:p-cmn9g
              lifecycle.cattle.io/create.namespace-auth=true
Status:       Active

No resource quota.

No resource limits.

This command can be used to display the labels and annotations associated with the namespace, as well as any quotas or resource limits that have been applied.

Creating a Namespace

To create a new namespace from the command line, use the kubectl create namespace command. Include the name of the new namespace as the argument for the command:

kubectl create namespace demo-namespace
namespace "demo-namespace" created

You can also create namespaces by applying a manifest from a file. For instance, here is a file that defines the same namespace that we created above:

# demo-namespace.yml
apiVersion: v1
kind: Namespace
metadata:
  name: demo-namespace

Assuming the spec above is saved to a file called demo-namespace.yml, you can apply it by typing:

kubectl apply -f demo-namespace.yml

Regardless of how we created the namespace, if we check our available namespaces again, the new namespace should be listed (we use ns, a shorthand for namespaces, the second time around):

kubectl get ns
NAME             STATUS    AGE
default          Active    41d
demo-namespace   Active    2m
kube-public      Active    41d
kube-system      Active    41d

Our namespace is available and ready to use.

Filtering and Performing Actions by Namespace

If we deploy a workload object to the cluster without specifying a namespace, it will be added to the default namespace:

kubectl create deployment --image nginx demo-nginx
deployment.extensions "demo-nginx" created

We can verify the deployment was created in the default namespace with kubectl describe:

kubectl describe deployment demo-nginx | grep Namespace
Namespace:              default

If we try to create a deployment with the same name again, we will get an error because of the namespace collision:

kubectl create deployment --image nginx demo-nginx
Error from server (AlreadyExists): deployments.extensions "demo-nginx" already exists

To apply an action to a different namespace, we must include the --namespace= option in the command. Let’s create a deployment with the same name in the demo-namespace namespace:

kubectl create deployment --image nginx demo-nginx --namespace=demo-namespace
deployment.extensions "demo-nginx" created

This newest deployment was successful even though we’re still using the same deployment name. The namespace provided a different scope for the resource name, avoiding the naming collision we experienced earlier.

To see details about the new deployment, we need to specify the namespace with the --namespace= option again:

kubectl describe deployment demo-nginx --namespace=demo-namespace | grep Namespace
Namespace:              demo-namespace

This confirms that we have created another deployment called demo-nginx within our demo-namespace namespace.

Selecting Namespace by Setting the Context

If you want to avoid providing the same namespace for each of your commands, you can change the default namespace that commands will apply to by configuring your kubectl context. This will modify the namespace that actions will apply to when that context is active.

To list your context configuration details, type:

kubectl config get-contexts
CURRENT   NAME      CLUSTER   AUTHINFO   NAMESPACE
*         Default   Default   Default

The above indicates that we have a single context called Default that is being used. No namespace is specified by the context, so the default namespace applies.

To change the namespace used by that context to our demo-namespace, we can type:

kubectl config set-context $(kubectl config current-context) --namespace=demo-namespace
Context "Default" modified.

We can verify that the demo-namespace is currently selected by viewing the context configuration again:

kubectl config get-contexts
CURRENT   NAME      CLUSTER   AUTHINFO   NAMESPACE
*         Default   Default   Default    demo-namespace

Validate that our kubectl describe command now uses demo-namespace by default by asking for our demo-nginx deployment without specifying a namespace:

kubectl describe deployment demo-nginx | grep Namespace
Namespace:              demo-namespace

Deleting a Namespace and Cleaning Up

If you no longer require a namespace, you can delete it.

Deleting a namespace is very powerful because it not only removes the namespace, but it also cleans up any resources deployed within it. This can be very convenient, but also incredibly dangerous if you are not careful.

It is always a good idea to list the resources associated with a namespace before deleting to verify the objects that will be removed:

kubectl get all --namespace=demo-namespace
NAME                              READY     STATUS    RESTARTS   AGE
pod/demo-nginx-676fc7d85d-gkdz2   1/1       Running   0          56m

NAME                         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/demo-nginx   1         1         1            1           56m

NAME                                    DESIRED   CURRENT   READY     AGE
replicaset.apps/demo-nginx-676fc7d85d   1         1         1         56m

Once we are comfortable with the scope of the action, we can delete the demo-namespace namespace and all of the resources within it by typing:

kubectl delete namespace demo-namespace

The namespace and its resources will be removed from the cluster:

kubectl get namespaces
NAME            STATUS    AGE
default         Active    41d
kube-public     Active    41d
kube-system     Active    41d

If you previously changed the selected namespace in your kubectl context, you can clear the namespace selection by typing:

kubectl config set-context $(kubectl config current-context) --namespace=
Context "Default" modified.

While cleaning up demo resources, remember to remove the original demo-nginx deployment we initially provisioned to the default namespace:

kubectl delete deployment demo-nginx

Your cluster should now be in the state you began with.

Extending Namespaces with Rancher Projects

If you are using Rancher to manage your Kubernetes clusters, you have access to the extended functionality provided by the projects feature. Rancher projects are an additional organizational layer used to bundle multiple namespaces together.

Rancher projects overlay a control structure on top of namespaces that allows you to group namespaces into logical units and apply policies to them. Projects mirror namespaces in most ways, but they act as a container for namespaces instead of for individual workload resources. Each namespace in Rancher exists in exactly one project, and namespaces inherit all of the policies applied to the project.

By default, Rancher clusters define two projects:

  • Default: This project contains the default namespace.
  • System: This project contains all of the other preconfigured namespaces, including kube-public, kube-system, and any namespaces provisioned by the system.

You can see the projects available within your cluster by visiting the Projects/Namespaces tab after selecting your cluster:

Fig. 1: Rancher projects/namespaces view

From here, you can add projects by clicking on the Create Project button. When creating a project, you can configure the project members and their access rights, as well as security policies and resource quotas.

You can add a namespace to an existing project by clicking the project’s Create Namespace button. To move a namespace to a different project, select the namespace and then click the Move button. Moving a namespace to a new project immediately modifies the permissions and policies applied to the namespace.

Rather than introducing new organizational models, Rancher projects simply apply the same abstractions to namespaces that namespaces apply to workload objects. They fill in some usability gaps if you appreciate namespace functionality but need an additional layer of control.

Conclusion

In this article, we introduced the concept of Kubernetes namespaces and how they can help organize cluster resources. We discussed how namespaces segment and scope resource names within a cluster and how policies applied at the namespace level can influence user permissions and resource allotment.

Afterwards, we covered some common patterns that teams employ to segment their clusters into logical pieces and we described Kubernetes’ preconfigured namespaces and their purpose. Then we took a look at how to create and work with namespaces within a cluster. We ended by taking a look at Rancher projects and how they extend the namespaces concept by grouping namespaces themselves.

Namespaces are an incredibly straightforward concept that help teams organize cluster resources and compartmentalize complexity. Taking a few minutes to get familiar with their benefits and characteristics can help you configure your clusters effectively and avoid trouble down the road.


Kubernetes vs Docker: What’s the difference?

Tuesday, October 9, 2018

Expert Training in Kubernetes and Rancher
Join our free online training sessions to learn more about Kubernetes, containers, and Rancher.

Docker vs Kubernetes: The Journey from Docker to Kubernetes

The need to deploy applications from one computing environment to another quickly, easily, and reliably has become a critical part of enterprises’ business requirements and DevOps teams’ daily workflows.

It’s unsurprising, then, that container technologies, which make application deployment and management easier for teams of all sizes, have risen dramatically in recent years. At the same time, however, virtual machines (VMs) as computing resources have reached their peak use in virtualized data centers. Since VMs existed long before containers, you may wonder what the need is for containers and why they have become so popular.

The Benefits and Limitations of Virtual Machines

Virtual machines allow you to run a full copy of an operating system on top of virtualized hardware as if it were a separate machine. In cloud computing, the physical hardware of a bare metal server is virtualized and shared between virtual machines running on a host machine in a data center, with the help of a hypervisor (i.e., a virtual machine manager).

Even though virtual machines bring a great deal of advantages, such as running different operating systems or versions, they can consume a lot of system resources and take longer to boot. Containers, on the other hand, share the operating system kernel with collocated containers, each running as an isolated process. Containers are a lightweight alternative: they take up less space (MBs) and can be provisioned rapidly (milliseconds), as opposed to a VM’s slow boot time (minutes) and larger storage requirements (GBs). This allows containers to operate at an unprecedented scale and maximizes the number of applications running on a minimum number of servers. For all of these reasons, containerization has taken off dramatically in recent years across enterprise software projects.

Need for Docker Containers and Container Orchestration Tools

Since its initial release in 2013, Docker has become the most popular container technology worldwide, despite a host of other options, including RKT from CoreOS, LXC, LXD from Canonical, OpenVZ, and Windows Containers.

However, Docker technology alone is not enough to reduce the complexity of managing containerized applications, as software projects get more and more complex and require the use of tens of thousands of Docker containers. To address these larger container challenges, a substantial number of container orchestration systems, such as Kubernetes and Docker Swarm, exploded onto the scene shortly after the release of Docker.

There has been some confusion surrounding Docker and Kubernetes for a while: What are they? What are they not? Where are they used? And why are both needed?

This post aims to explain the role of each technology and how each technology helps companies ease their software development tasks. By the end of this article, you’ll understand that the choice is not Docker vs Kubernetes, but Kubernetes vs alternative container orchestrators.

Let’s use a made-up company, NetPly (sounds familiar?), as a case study to highlight the issues we are addressing.

NetPly is an online, on-demand movie streaming company with 30 million members in over 100 countries. NetPly delivers video streams to your favorite devices and provides personalized movie recommendations to its customers based on their previous activities, such as sharing or rating a movie. To run its application globally, at scale, and provide quality of service to its customers, NetPly runs 15,000 production servers worldwide and follows agile methodology to deploy new features and bug fixes to the production environment at a fast clip.

However, NetPly has been struggling with two fundamental issues in their software development lifecycle:

Issue 1: Code that runs perfectly in a development box sometimes fails in test and/or production environments. Therefore, NetPly would like to keep code and configuration consistent across their development, test, and production environments to reduce the issues arising from application hosting environments.

Issue 2: Viewers experience a lot of lag, as well as poor quality and degraded performance of video streams, during weekends, nights, and holidays, when incoming requests spike. To resolve this potentially devastating issue, NetPly would like to use load balancing and autoscaling techniques to automatically adjust resource capacity (e.g., increase or decrease the number of computing resources) to maintain application availability, provide stable application performance, and optimize operational costs as computing demand increases or decreases. These requirements also force NetPly to manage the complexity of computing resources and the connections between the flood of these resources in production.

Docker can be used to resolve Issue 1 by following a container-based approach; in other words, packaging application code along with all of its dependencies, such as libraries, files, and necessary configurations, together in a Docker image.

Docker is an open-source, operating-system-level virtualization platform with a lightweight application engine to run, build, and distribute applications in Docker containers that run nearly anywhere. Docker containers, as part of Docker, are a portable, lightweight alternative to virtual machines that eliminates the wasted resources and longer boot times of the virtual-machine approach. Docker containers are created from Docker images, which consist of the prebuilt application stack required to launch the application inside the container.
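To make that packaging step concrete, here is a minimal sketch of a Dockerfile (the runtime, file names, and image tag are hypothetical); it bakes the application and its dependencies into a single image that runs the same way in development, test, and production:

# Dockerfile -- a minimal image for a hypothetical Node.js service
FROM node:16-alpine            # base layer: OS userland plus the runtime
WORKDIR /app
COPY package*.json ./
RUN npm install --production   # dependencies are frozen into the image
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]      # the process the container runs

# Build the image, then run it anywhere Docker is installed:
#   docker build -t netply/stream-api:1.0 .
#   docker run -d -p 8080:8080 netply/stream-api:1.0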

With that explanation of a Docker container in mind, let’s go back to our successful company that is under duress: NetPly. As more users simultaneously request movies to watch on the site, NetPly needs to scale up more Docker containers at a reasonably fast rate and scale down when traffic subsides. However, Docker alone is not capable of taking care of this job, and writing simple shell scripts to scale the number of Docker containers up or down by monitoring the network traffic or the number of requests that hit the server would not be a viable, practicable solution.

As the number of containers increases from tens to hundreds to thousands, and the NetPly IT team starts managing fleets of containers across multiple heterogeneous host machines, it becomes a nightmare to execute Docker commands like “docker run”, “docker kill”, and “docker network” manually.

Right at the point where the team starts launching containers, wiring them together, ensuring high availability even when a host goes down, and distributing the incoming traffic to the appropriate containers, the team wishes they had something that handled all these manual tasks with no or minimal intervention. Exit human, enter program.

To sum up: Docker by itself is not enough to handle these resource demands at scale. Simple shell commands alone are not sufficient to handle tasks for a tremendous number of containers on a cluster of bare metal or virtual servers. Therefore, another solution is needed to handle all these hurdles for the NetPly team.

This is where the magic starts with Kubernetes. Kubernetes is a container orchestration engine (COE), originally developed by Google, and it can be used to resolve NetPly’s Issue 2. Kubernetes allows you to handle fleets of containers. Kubernetes automatically manages the deployment, scaling, and networking of containers, as well as container failover, launching a new container with ease when one dies.

The following are some of the fundamental features of Kubernetes.

  • Load balancing

  • Configuration management

  • Automatic IP assignment

  • Container scheduling

  • Health checks and self healing

  • Storage management

  • Auto rollback and rollout

  • Auto scaling
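Several of these features are visible even in a minimal manifest. The following sketch (the names and image are hypothetical) declares a replicated, health-checked deployment and a service that load-balances across its pods:

# stream-api.yml -- hypothetical names for illustration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stream-api
spec:
  replicas: 3                        # Kubernetes schedules and maintains 3 pods
  selector:
    matchLabels:
      app: stream-api
  template:
    metadata:
      labels:
        app: stream-api
    spec:
      containers:
        - name: stream-api
          image: netply/stream-api:1.0
          ports:
            - containerPort: 8080
          livenessProbe:             # health check: failing pods are restarted
            httpGet:
              path: /healthz
              port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: stream-api
spec:
  selector:
    app: stream-api
  ports:
    - port: 80
      targetPort: 8080               # a stable IP that load-balances across pods

Scaling is then a single declarative change or one command, such as kubectl scale deployment stream-api --replicas=10.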

Container Orchestration Alternatives

Although Kubernetes seems to solve the challenges our NetPly team faces, there are a good number of alternative container management tools out there.

Docker Swarm, Marathon on Apache Mesos, and Nomad are all container orchestration engines that can also be used for managing your fleet of containers.

Why choose anything other than Kubernetes? Although Kubernetes has a lot of great qualities, it has challenges too. The most pressing issues people face with Kubernetes are:

  1. the steep learning curve to its commands;

  2. setting Kubernetes up for different operating systems.

As opposed to Kubernetes, Docker Swarm uses the Docker CLI to manage all container services. Docker Swarm is easy to set up, has fewer commands to learn to get started rapidly, and is cheaper to train employees on. A drawback of Docker Swarm is that it binds you to the limitations of the Docker API.

Another option is the Marathon framework on Apache Mesos. It’s extremely fault-tolerant and scalable for thousands of servers. However, it may be too complicated to set up and manage small clusters with Marathon, making it impractical for many teams.

Each container management tool comes with its own set of advantages and disadvantages. However, Kubernetes, with its heritage in Google’s Borg system, has been widely adopted and supported by the community as well as industry for many years, and it has become the most popular container management solution among the players. With the combined power of Docker and Kubernetes, the popularity of these technologies seems set to keep rising and to be embraced by even larger communities.

In our next article in this series, we will compare in more depth Kubernetes and Docker Swarm.


SUSE in the News (Japan Edition)

Tuesday, August 28, 2018

Here is a roundup of recent SUSE-related articles. We will update it from time to time, so please bookmark this page.

For press announcements, please see the “Press Releases” section of the SUSE site.

Annual event “SUSECON 2020” held: a delivery platform for developers, AI solution plans, and more announced

SUSE’s new CEO emphasizes the company’s independence and strengthens its partnership with Microsoft

May 25, 2020 - ASCII.jp x TECH

Closing the Leap Gap: SUSE announces a policy of unifying its community and enterprise editions

April 10, 2020 - Gihyo.jp

SUSE CEO Di Donato on her commitment to open source

March 26, 2020 - ZDNET

SUSE’s open source solution strategy for supporting mission-critical SAP systems

September 30, 2019 - EnterpriseZine (via JSUG NET)

“SUSECON 2019”: Germany’s SUSE announces stronger container and multi-cloud strategies

April 9, 2019 - Mynavi News

A strategy to accelerate beyond 15% annual growth, spanning containers and cloud, SAP HANA, acquisitions, and more: newly independent SUSE aims to become an “enterprise OSS company”

April 9, 2019 - ASCII.jp

Why Fujitsu values SUSE: differences from Red Hat, open source, and more

April 5, 2019 - ZDNET Japan

SUSE makes a fresh start as an independent company: the CEO on its growth strategy

April 4, 2019 - ZDNET Japan

Latest OSS trends: will the move to “mission critical” advance? Why Germany holds the key

April 3, 2019 - Business+IT

Stronger support for system integrators, centered on OSS technical support for containers and SDS - SUSE

January 17, 2019 - BCN

SUSE becomes independent for the first time in 16 years: a prime opportunity to take flight as an “open OSS company”

September 3, 2018 - Nikkei xTECH

SUSE’s SDS “SUSE Enterprise Storage 5” stores rapidly growing enterprise data with high efficiency

August 28, 2018 - BCN

The relationship between “cloud-native enterprise IT,” CaaS, and PaaS

July 27, 2018 - TechTarget (members-only site)

Driving open source business through partner collaboration: SUSE holds “SUSE Expert Days 2018”

June 7, 2018 - Weekly BCN

SUSE’s strategy for opening a new future from its “90% share of Linux for SAP HANA”

May 31, 2018 - ZDNET Japan

Cloud Foundry on Kubernetes, and the reasons why

May 30, 2018 - @IT (email interview with Gerald Pfeifer, SUSE Product & Technology VP)

SUSE’s live patching reaches production readiness: why having it makes all the difference

March 27, 2018 - TechTarget (ComputerWeekly)

The “open open source company” that SUSE aims to be

December 8, 2017 - ZDNET Japan

10 Top Tech Trends: Why Open Source Is Center Stage – Part 1

Sunday, July 29, 2018

The image of open source software has changed. The days when it was seen as a niche, somewhat unwieldy substitute for mainstream software are over; today it is trendy, fashionable, and decidedly cool.

It still has its geeky side, though, and that is exactly where its technical innovation comes from. That said, everyone from small businesses to tech giants and global enterprises has grown and matured to the point of placing open source at the core of their strategies.

There is no going back. Open source now offers a practical, powerful alternative to proprietary approaches and solutions. Open source projects play a leading role in every one of the top strategic technology trends reshaping the world around us. Even if you have no plans to use them in the near future, it is critically important to understand and follow these trends.

To deepen that understanding, this two-part blog series walks through the ten top technology trends and the open source projects related to them. Let’s begin with the first five trends you should be following right now.

1. Artificial Intelligence (AI) / Machine Learning (ML)

According to Gartner, the business value derived from AI is expected to reach $1.2 trillion this year. AI is predicted to affect everything from customer support and chatbots to finance, research, machine learning, data center operations, and security automation.

AI and ML require huge datasets and the computing power to analyze them quickly. Many organizations use high-performance computing (HPC) solutions to make this possible, and in the HPC world, Linux is king.

AI, ML, deep learning, predictive analytics, and neural networks are among the hottest areas of technology research today, so it is no surprise that open source projects play an important role. TensorFlow, Caffe, H2O, Mahout, and MLlib are leading examples, but there are many other options; see the lists at Datamation.com and KDnuggets.com.

2. Robotics

Robotics will have a profound impact on our work environments and culture.

We already use robots in manufacturing, agriculture, warehousing, surgery, and automation. In the near future, we can expect to see many more collaborative robots, or cobots, working alongside humans to improve efficiency and productivity.

The robotics world is full of open source projects focused on hardware, software, and robot simulators. One example is ROS (Robot Operating System), an open source platform built on Linux that provides tools and libraries to simplify the design and control of complex robots.

Many more projects are available; the Wikipedia page on open-source robotics details the options in this field.

3. The Internet of Things (IoT) and Edge Computing

Gartner predicts 26 billion IoT devices by 2020, and IDC forecasts that IoT spending will reach roughly $1.4 trillion in 2021. That is an incredible pace of growth.

IoT is heading toward a smarter, more connected world. IoT solutions are already deployed in manufacturing, freight, agriculture, asset management, smart infrastructure (homes, buildings, cities), smart utilities (electricity, gas, water), and even contextual marketing. The list of IoT application areas is long, even leaving aside the odd IoT-equipped gadgets on the market.

Most IoT devices run embedded Linux, which makes perfect sense when you want a simple, lightweight, low-resource, low-cost real-time OS.

For moving processing and compute capability to the edge of an IoT network, OpenStack is widely used as a distributed cloud model.

For other open source IoT options, see Linux.com or Postscapes.com for details.

4. Self-Driving Cars / Drones

Self-driving cars are a very hot topic right now. They look set to be commercialized around 2020, but their major impact on society will probably come after 2025.

As for drones, Gartner reports that roughly 3 million personal and commercial drones were shipped last year.

When these technologies truly take off (pun intended), self-driving cars and drones will rank among the most important advances in modern history.

A self-driving car must run on a robust, secure OS; it is frightening just to imagine a hacker breaking into such a system. Telemetry, mapping, cameras, sensors, distance measurement, machine vision, machine learning, and more are also essential technologies.

Automakers are beginning to recognize that the collaborative, cooperative approach of open source makes many of these challenges easier to solve. Automotive Grade Linux (AGL) and OpenCV are growing in popularity, and even Tesla recently decided to release open source Linux code.

For drones, there is a very large selection of open source projects. See the options in this Caldat.com list.

5. Big Data and Analytics

Big data analytics is another major technology trend that cannot be ignored. IDC predicts this market will reach $210 billion by 2020, driven mainly by investment in banking and manufacturing.

Big data and data analytics are a common theme across many other trending technology areas, including IoT, AI, ML, and cloud computing.

Here too, open source provides many of the state-of-the-art RDBMSs (relational database management systems), analytics engines, and distributed computing solutions the field requires, such as EDB PostgreSQL, MariaDB, MongoDB, Apache Spark, Apache Cassandra, Apache Kafka, and Hadoop.

In addition, a quarter of all OpenStack deployments in the cloud are used to run big data and data mining workloads[1].

And there is one more open source project worth introducing. Managing exponentially growing data is now a major challenge, and big data and analytics force companies to carefully consider storage types, capacity, and speed.

Ceph is an open source software-defined storage solution that helps solve this problem. See this SUSE blog to find out why.

Coming soon…

Part 2 will cover the remaining five of the ten top technology trends and continue to explore why open source is center stage.

Note: This blog is an abridged translation of 10 Top Tech Trends: Why Open Source is Center Stage – Part 1.

The Metrics that Matter: Horizontal Pod Autoscaling with Metrics Server

Tuesday, June 26, 2018

Take a deep dive into Best Practices in Kubernetes Networking
From overlay networking and SSL to ingress controllers and network security policies, we’ve seen many users get hung up on Kubernetes networking challenges. In this video recording, we dive into Kubernetes networking, and discuss best practices for a wide variety of deployment options.

Sometimes I feel that those of us with a bent toward distributed systems engineering like pain. Building distributed systems is hard. Every organization, regardless of industry, is not only looking to solve its business problems, but to do so at potentially massive scale. On top of the challenges that come with scale, they are also concerned with creating new features and avoiding regression. And even if they achieve all of those objectives with excellence, there are still concerns about information security, regulatory compliance, and building value from all of the business’s investment.

If that picture sounds like your team and your system is now in production – congratulations! You’ve survived round 1.

Regardless of your best attempts to build a great system, sometimes life happens. There are lots of examples of this. A great product, or viral adoption, may bring unprecedented success, and with it an end to how you thought your system would handle scale.

[Figure: Pokémon GO Cloud Datastore transactions per second, expected vs. actual. Source: Bringing Pokémon GO to life on Google Cloud, pulled 30 May 2018]

You know this may happen, and you should be prepared. That’s what this series of posts is about. Over the course of this series, we’re going to cover things you should be tracking, why you should track them, and possible mitigations for the root causes.

We’ll walk through each metric, methods for tracking it, and things you can do about it. We’ll be using different tools for gathering and analyzing this data. We won’t be diving into too many details, but we’ll include links so you can learn more. Without further ado, let’s get started.

Metrics are for Monitoring, and More

These posts are focused on monitoring and running Kubernetes clusters. Logs are great, but at scale they are more useful for post-mortem analysis than for alerting operators that there’s a growing problem. Metrics Server allows for the monitoring of container CPU and memory usage, as well as that of the nodes the containers run on.

This allows operators to set and monitor KPIs (Key Performance Indicators). These operator-defined levels give operations teams a way to determine when an application or node is unhealthy. This gives them all the data they need to see problems as they manifest.

In addition, Metrics Server allows Kubernetes to enable Horizontal Pod Autoscaling. This capability allows Kubernetes autoscaling to scale the pod instance count for a number of API objects based on metrics reported through the Kubernetes Metrics API by Metrics Server.
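For example, once Metrics Server is reporting resource usage, wiring up an autoscaler for a workload can be as simple as the following sketch (the deployment name and thresholds are illustrative):

# Create a Horizontal Pod Autoscaler for a (hypothetical) "web" deployment:
# keep between 2 and 10 replicas, targeting 80% average CPU utilization.
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80

# Watch the autoscaler compare observed CPU (from the Metrics API) to its target.
kubectl get hpa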

If you’re just getting underway with Kubernetes, read the Introduction to Kubernetes Monitoring, which will help you get the most out of the rest of this article.

Setting up Metrics Server in Rancher-Managed Kubernetes Clusters

Metrics Server became the standard for pulling container metrics starting with Kubernetes 1.8 by plugging into the Kubernetes Monitoring Architecture. Prior to this standardization, the default was Heapster, which has been deprecated in favor of Metrics Server.

Today, under normal circumstances, Metrics Server won’t run on a Kubernetes cluster provisioned by Rancher 2.0.2. This will be fixed in a later version of Rancher 2.0. Check our GitHub repo for the latest version of Rancher.

In order to make this work, you’ll have to modify the cluster definition via the Rancher Server API. Doing so will allow the Rancher Server to modify the Kubelet and KubeAPI arguments to include the flags required for Metrics Server to function properly.

Instructions for doing this on a Rancher-provisioned cluster, as well as instructions for modifying other hyperkube-based clusters, are available on GitHub here.


Automate DNS Configuration with ExternalDNS

Monday, June 18, 2018

Take a deep dive into Best Practices in Kubernetes Networking
From overlay networking and SSL to ingress controllers and network security policies, we’ve seen many users get hung up on Kubernetes networking challenges. In this video recording, we dive into Kubernetes networking, and discuss best practices for a wide variety of deployment options.

One of the awesome things about being in the Kubernetes community is the constant evolution of technologies in the space. There’s so much purposeful technical innovation that it’s nearly impossible to keep an eye on every useful project. One such project that recently escaped my notice is the ExternalDNS subproject. During a recent POC, a member of the organization to whom we were speaking asked about it. I promised to give the subproject a go, and I was really impressed.

The ExternalDNS subproject

This subproject (the incubator process has been deprecated), sponsored by sig-network and championed by Tim Hockin, is designed to automatically configure cloud DNS providers. This is important because it further enables infrastructure automation, allowing DNS configuration to be accomplished directly alongside application deployment.

Unlike a traditional enterprise deployment model where multiple siloed business units handle different parts of the deployment process, Kubernetes with ExternalDNS automates this part of the process. This removes the potentially aggravating process of having a piece of software ready to go while waiting for another business unit to hand-configure DNS. The collaboration via automation and shared responsibility that can happen with this technology prevents manual configuration errors and enables all parties to more efficiently get their products to market.
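As a quick sketch of what this looks like in practice (the hostname and labels are illustrative), ExternalDNS watches Services and Ingresses and creates records in the configured DNS provider based on annotations like this one:

# nginx-service.yml -- illustrative; assumes ExternalDNS is already deployed
# and configured with credentials for your DNS provider (Azure DNS, Route 53, etc.)
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    external-dns.alpha.kubernetes.io/hostname: nginx.example.com
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80

When the cloud provider assigns the load balancer its external address, ExternalDNS creates the nginx.example.com record pointing at it, with no ticket to another team required.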

ExternalDNS Configuration and Deployment on AKS

Those of you who know me know that I spent many years as a software developer in the .NET space. I have a special place in my heart for the Microsoft developer community, and as such I have spent much of the last couple of years sharing Kubernetes on Azure via Azure Container Service and Azure Kubernetes Service with the user groups and meetups in the Philadelphia region. It just so happens the persons asking me about ExternalDNS are leveraging Azure as an IaaS offering. So, I decided to spin up ExternalDNS on an AKS cluster. For step-by-step instructions and helper code, check out this repository. If you’re using a different provider, you may still find these instructions useful. Check out the ExternalDNS repository for more information.


2017 Container Technology Retrospective – The Year of Kubernetes

Wednesday, December 27, 2017

It is not an overstatement to say that, when it comes to container technologies, 2017 was the year of Kubernetes. While Kubernetes has been steadily gaining momentum ever since it was announced in 2014, it reached escape velocity in 2017. Just this year, more than 10,000 people participated in our free online Kubernetes Training classes. A few other key data points:

  1. Our company, Rancher Labs, built a product that supported multiple container orchestrators, including Swarm, Mesos, and Kubernetes. Responding to overwhelming market and customer demand, we decided to build Rancher 2.0 to focus 100% on Kubernetes. We are not alone. Even vendors who developed competing frameworks, like Docker Inc. and Mesosphere, announced support for Kubernetes this year.
  2. It has become significantly easier to install and operate Kubernetes. In fact, in most cases, you no longer need to install and operate Kubernetes at all. All major cloud providers, including Google, Microsoft Azure, AWS, and leading Chinese cloud providers such as Huawei, Alibaba, and Tencent, launched Kubernetes as a Service. Not only is it easier to set up and use cloud Kubernetes services like Google GKE, cloud Kubernetes services are cheaper. They often do not charge for the resources required to run the Kubernetes master. Because it takes at least 3 nodes to run Kubernetes API servers and the etcd database, cloud Kubernetes-as-a-Service can lead to significant savings. For users who still want to stand up Kubernetes in their own data center, VMware announced Pivotal Container Service (PKS). Indeed, with more than 40 vendors shipping CNCF-certified Kubernetes distributions, standing up and operating Kubernetes is easier than ever.
  3. The most important sign of the growth of Kubernetes is the significant number of users who started to run their mission-critical production workloads on Kubernetes. At Rancher, because we supported multiple orchestration engines from day one, we have a unique perspective on the growth of Kubernetes relative to other technologies. One Fortune 50 Rancher customer, for example, runs applications handling billions of dollars of transactions every day on Kubernetes clusters.

A significant trend we observed this year was an increased focus on security among customers who run Kubernetes in production. Back in 2016, the most common questions we heard from our customers centered around CI/CD. That was when Kubernetes was primarily used in development and testing environments. Nowadays, the most common feature requests from customers are single sign-on, centralized access control, strong isolation between applications and services, infrastructure hardening, and secret and credentials management. We believe, in fact, that offering a layer to define and enforce security policies will be one of the strongest selling points of Kubernetes. There’s no doubt security will continue to be one of the hottest areas of development in 2018.

With cloud providers and VMware all supporting Kubernetes services, Kubernetes has become a new infrastructure standard. This has huge implications for the IT industry. As we all know, compute workloads are moving to public IaaS clouds, and IaaS is built on virtual machines. There is no standard virtual machine image format or standard virtual machine cluster manager. As a result, an application built for one cloud cannot easily be deployed on other clouds. Kubernetes is a game changer: an application built for Kubernetes can be deployed on any compliant Kubernetes service, regardless of the underlying infrastructure. Among Rancher customers, we already see widespread adoption of multi-cloud deployments. With Kubernetes, multi-cloud is easy. DevOps teams get the benefits of increased flexibility, increased reliability, and reduced cost, without having to complicate their operational practices. I am really excited about how Kubernetes will continue to grow in 2018. Here are some specific areas we should pay attention to:

  1. Service mesh gaining mainstream adoption. At the recent KubeCon show, the hottest topic was service mesh. Linkerd, Envoy, Istio, etc. all gained traction in 2017. Even though the adoption of these technologies is still at an early stage, the potential is huge. People often think of a service mesh as a microservices framework. I believe, however, service mesh will bring benefits far beyond a microservices framework. A service mesh can become a common underpinning for all distributed applications. It offers application developers a great deal of support in communication, monitoring, and management of the various components that make up an application. These components may or may not be microservices. They don’t even have to be built from containers. Even though not many people use service mesh today, we believe it will become popular in 2018. We, like most people in the container industry, want to play a part. We are busy integrating service mesh technologies into Rancher 2.0 now!
  2. From cloud-native to Kubernetes-native. The term “cloud-native application” has been popular for a few years. It means applications developed to run on a cloud like AWS, instead of static environments like vSphere or bare metal clusters. Applications developed for Kubernetes are by definition cloud-native because Kubernetes is now available on all clouds. I believe, however, the world is ready to move from cloud-native to, using a term I first heard from Joe Beda, “Kubernetes-native”. I know of many organizations developing applications specifically to run on Kubernetes. These applications don’t just use Kubernetes as a deployment platform. They persist data in Kubernetes’s own etcd database. They use Kubernetes custom resource definitions (CRDs) as data access objects (see the sketch after this list). They encode business logic in Kubernetes controllers. They use kubelets to manage distributed clusters. They build their own API layer on the Kubernetes API server. They use `kubectl` as their own CLI. Kubernetes-native applications are easy to build, run anywhere, and are massively scalable. In 2018, we will surely see more Kubernetes-native applications!
  3. A massive number of ready-to-run applications for Kubernetes. Most people use Kubernetes today to deploy their own applications. Not many organizations ship their application packages as YAML files or Helm charts yet. I believe this is about to change. Already most modern software (such as AI frameworks like TensorFlow) is available as Docker containers. It is easy to deploy these containers in Kubernetes clusters. A few weeks ago, the Apache Spark project added support for using Kubernetes as a scheduler, in addition to Mesos and YARN. Kubernetes is now a great big-data platform. We believe, from this point onward, all server-side software packages will be distributed as containers and will be able to leverage Kubernetes as a cluster manager. Watch out for vast growth in the availability of ready-to-run YAML files and Helm charts in 2018.
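To make the “Kubernetes-native” idea in point 2 concrete, here is a sketch of a custom resource definition (the group, kind, and fields are hypothetical). Once registered, objects of this kind are persisted by the API server in etcd, and a custom controller encodes the business logic around them:

# backup-crd.yml -- hypothetical CRD for illustration
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com        # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:
                  type: string     # e.g. a cron expression
                retention:
                  type: integer    # number of backups to keep

After applying the definition, the hypothetical resource behaves like a built-in one: kubectl get backups lists the stored objects, just as it would for pods or deployments.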

Looking back, growth of Kubernetes in 2017 far exceeded what all of us thought at the end of 2016. While we expected AWS to support Kubernetes, we did not expect the interest in service mesh and Kubernetes-native apps to grow so quickly. 2018 could very well bring us many unexpected technological developments. I can’t wait to find out!
