When considering an infrastructure solution, think of a joint solution from Fujitsu and SUSE

Tuesday, 10 April, 2018

When I first came to SUSE, our relationship with Fujitsu was on a more regional basis. We have a very close relationship with Fujitsu in Central Europe and I like to think it is because our companies hold the same values: both companies are committed to innovating products that are insanely reliable, both companies are committed to being the best possible partner to each other (I see this day-in and day-out), and both companies care about making technology solutions easier to use. When meeting with colleagues from both companies in Central Europe, you would have a hard time knowing which company a person works for because the partnership is so strong.

I think one of the things that stands out to me about Fujitsu products and solutions is reliability. Fujitsu reliability is outstanding, built on the company’s vast mainframe experience. You’ll find multiple redundant, hot-plug components that improve performance and increase system availability; multiple processor-level and system-level mechanisms and redundancies used for error detection and recovery; and multiple, electrically isolated partitions in their high-end servers so they can be serviced separately without taking the entire system offline.

So, it was not surprising when SUSE and Fujitsu expanded the partnership outside of Europe and entered into a global agreement in late 2016. Since that announcement, we have collaborated on additional solutions like PRIMEFLEX for OpenStack Cloud and SUSE Business Critical Linux, a solution offered exclusively through Fujitsu.

Alan Clark interviewed Yoshiya Eto of Fujitsu, covering a variety of topics. The last video in this series talks about the future of our relationship. I urge you to watch it to learn more about the directions we will take together as customers look for reliable and easy-to-use infrastructure solutions.

A sincere “thank you” goes to Yoshiya Eto for his willingness to participate in the interviews and to the team at the Linux Development Division, Platform Software business unit at Fujitsu Limited, for making this happen. ありがとう

Meet SUSE at AWS Summit San Francisco!

Monday, 2 April, 2018

This Wednesday, April 4th, 2018, at the Moscone Center in San Francisco, we invite you to network with our team of cloud solution architects at AWS Summit San Francisco at Booth 1122. We look forward to engaging with you there, and wanted to share a peek at some of the great topics we expect will come up in case you can’t make it:

Cloud-Native Linux Tools and Features

Did you know that SUSE Linux Enterprise Server has a number of tools to enhance your experience building and delivering on AWS? New additions such as SUSE ‘IPA’ facilitate and simplify automated testing in the public cloud, and can even be integrated with openQA and AWS Lambda.

Transitioning to SAP HANA and S/4HANA on AWS

Moving to SAP HANA on AWS helps you quickly modernize business operations by providing you with improved efficiency and real time analytics – but are you unlocking AWS innovation beyond the infrastructure?

Some of the greatest customer stories we’ve heard this year are about fast-moving enterprise companies migrating mission-critical apps to cloud combined with the guidance of trusted experts that understand the depth of the AWS service portfolio. Luckily for you, right next to us you’ll find Lemongrass, an SAP on AWS APN Competency and key SUSE partner, at booth 1118. Learn from our wide variety of shared customer experiences and talk to us about your current SAP modernization efforts.

Optimizing IT Operations on AWS

Rapid technology advancements such as those announced every year at AWS events create endless new opportunities – but also bring many new management challenges. What can you do to reduce the complexity?

Ask us about SUSE Manager 3.0, and how you can quickly deploy a solution that will help you optimize operations on public cloud. Many of our customers and partners are using this solution to maintain secure, highly-available environments that are cross-platform and cross-cloud.

Containerized Applications on AWS

SUSE has decades of experience helping customers move to the next phase of IT. In the world of containers for example we’ve been enabling you to deploy containerized workloads on AWS leveraging AWS EC2 Container Service since 2015. Now, you’ll find new SUSE solutions like SUSE CaaS Platform leveraging Kubernetes technology to help you automate deployment and scaling for your container-based applications on AWS.

If you want to schedule a meeting in advance – reach out to us today!

 

A New Face at SUSE

Wednesday, 28 March, 2018

As a relative newcomer to SUSE, it’s a pleasure to be writing my first blog to introduce myself, and to look at the recent OpenStack Queens release.

Although I’ve only been with SUSE for just over a month now, I’ve been working in the as-a-Service and Cloud world for over twenty years (yes, I don’t look anywhere near old enough to have been doing that, thank you for saying!). My most recent roles were at Rackspace, where I spent 11 years, and a shorter stint more recently at CenturyLink, which gave me more insight into the world of telcos and software-defined networking. If you’ve peeked at my Twitter feed, you’ll have noticed a few tweets about OpenStack appearing recently in between posts about beer, Movember (a charity that I have supported for more than 12 years now), cloud computing and cyber security. You can expect to see plenty more tweets about all of these topics (and more) in the coming months.

I received my introduction to OpenStack at Rackspace when they announced the project with NASA in 2010. While the concept of open source wasn’t new by any stretch of the imagination, the release was still met with a degree of scepticism from some quarters. Who would have known that this amalgamation of the code behind NASA’s cloud server product and Rackspace’s cloud storage offering would become one of the most successful open source communities? I loved the concept of an open source cloud operating system that had developers and companies all over the world collaborating on and improving the code together, and am very excited to be immersing myself back into the world of OpenStack, particularly working for a well-established and fun-loving company like SUSE.

OpenStack Queens

The Queens OpenStack release is named after the Queens Park suburb in Sydney, Australia. Why Queens Park, I hear you ask? Allow me to explain: a release is always named after a geographic location near the site of the summit associated with that release. There is quite a complex set of rules governing this, but you can read more about it on the OpenStack site. Queens is the 17th iteration of the OpenStack code, and brings lots of new functionality that will help businesses looking to invest in technologies like containers, network function virtualization (more catchily known as NFV), edge computing and machine learning.

SUSE OpenStack Cloud v8

I’m currently working with the rest of the team at SUSE on version 8 of the SUSE OpenStack Cloud, which we will be announcing before the Vancouver OpenStack Summit in May. Watch this space and my Twitter feed for more on that coming soon. I’ll also be at the OpenStack Summit in Vancouver, so if you’re there, please come along to the SUSE booth and introduce yourself. I’m looking forward to meeting lots of interesting people there, and renewing old acquaintances from the OpenStack world.

SUSE Expert Days

Before then, we have the SUSE Expert Days taking place around the world – the London date is the 24th of April. These are a great opportunity to meet some of the local SUSE team as well as your peers. There are also opportunities to watch demonstrations and presentations, take part in technical discussions and meet others working in the open source community in your region. I’ll be attending the London event, and hope to see some of you there.

Kubernetes System Security – Protecting Against Kubelet Exploits

Thursday, 22 March, 2018

By Andson Tung

As critical as it is to protect application containers deployed by Kubernetes, it is just as critical to protect the Kubernetes system containers from attacks or from being used in an attack. In this post I’ll focus on one important Kubernetes security area – protecting the Kubelet, which manages the pods on a worker node. The recent Kubernetes exploit at Tesla, where crypto mining software was installed on servers by exploiting an open Kubernetes console, makes it critical to examine potential attack surfaces for all Kubernetes system containers.

The kubelet hack has been documented and discussed for some time. It is surprising that any unprivileged pod can be allowed such access – to execute commands through the kubelet and control the entire cluster.

As a general rule of thumb, all access to Kubernetes system containers such as the API server and kubelet should be restricted with role-based access control (RBAC). This applies to external access as well as internal access. But it’s not always possible to catch every potential attack vector when using new technologies. While the Tesla exploit came from an external source, the kubelet exploit can occur over internal communications.
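As a sketch of what locking down the kubelet itself can look like, the configuration below disables anonymous access and delegates authorization to the API server. This is a minimal, illustrative fragment (field names from the kubelet.config.k8s.io/v1beta1 KubeletConfiguration type – verify them against your Kubernetes version; the equivalent command-line flags are --anonymous-auth=false and --authorization-mode=Webhook):

```yaml
# Minimal kubelet hardening sketch (illustrative):
# refuse anonymous requests on port 10250 and delegate
# authorization decisions to the API server.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false      # no unauthenticated access to the kubelet API
  webhook:
    enabled: true       # validate bearer tokens with the API server
authorization:
  mode: Webhook         # SubjectAccessReview check for every request
```

With this in place, the anonymous curl exploit against /run is rejected before RBAC is even consulted; network-level protection such as the rules described below remains a useful second layer.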

For every container deployment, it is useful to have network visibility and protection to catch possible misconfigurations and exploits. For business critical applications, it’s a must-have. Below, I’ll show how NeuVector can be configured with simple rules to protect the kubelet.

First, create a Group ‘allhost’ in NeuVector that includes all hosts in the Kubernetes cluster, by defining the subnet for the cluster. Then create another Group ‘allnamespace’ that includes all namespaces. All these actions can be done through the console, API, or CLI.

I can then create a new Rule that restricts access from any namespace to any host, on the kubelet port 10250. This will make sure no compromised application pod can exploit the kubelet.

To test this, try connecting to the kubelet from within any application pod and running a command. If successful, this command could be any damaging exploit.

$ kubectl exec -ti httpserver-pod-00-787d79b8f4-b6gcg -- curl -k -XPOST "https://10.1.5.83:10250/run/neuvector/httpserver-pod-00-787d79b8f4-xxxxx/httpserver-pod-00" -d "cmd=ls -la /"

You’ll see a response showing that NeuVector has blocked this connection.

curl: (7) Failed to connect to 10.1.5.83 port 10250: Operation timed out

You can see in the console how NeuVector is able to detect and block such activity.

If it is preferred not to block such connections, the containers could be in Monitor mode instead of Protect, which will allow the connection through but generate a security alert.

The picture above shows the violations, one which was blocked, and one which was allowed but generated an alert.

Although this is a simple example of a default configuration that can easily be protected by proper access controls, it demonstrates the simple point that any new technology or deployment can expose new attack surfaces through misconfiguration or compromise. Having a Kubernetes security solution like NeuVector helps to detect and prevent these occurrences.

Top 5 Reasons to Check out SUSE at IBM Think

Friday, 16 March, 2018

PARTY with SUSE at IBM Think

SUSE, the Open Open Source Company is pleased to be sponsoring IBM Think in the fabulous city of Las Vegas. Hopefully we’ll get to meet many of you and have a chance to talk about your priorities: how you can adapt to win in your respective markets, down with downtime and ways to drive your digital transformation.

If the chance to be among the first to learn about what’s going to drive your success in the next 5 to 10 years wasn’t enough, here are 5 more reasons why you would want to come check out SUSE at IBM Think, March 19-22nd.

  1. Hot Session – Spotlight on Software Defined Storage

On Thursday, March 22nd, Mike Friesenegger, IBM Technologist at SUSE, will host a session, “Make Storage Smarter: How Software Defined Storage Differs from Legacy Solutions and When to Use It.” In his session, which begins at 9:30 AM, Mike will be joined by Camilla Sharpe of IBM TSS, and together they will detail how businesses feeling the pressure to manage increasing amounts of storage can jump-start their journey with SUSE Enterprise Storage. Don’t miss the opportunity to learn about the newest innovations in storage and how SUSE and IBM TSS can help you tame your data explosion. 9:30 AM – 10:10 AM | Session ID: 9325A – Mandalay Bay South, Level 2 | Surf A

  2. Demos of the latest and greatest solutions for IBM Power, Z and LinuxONE!

During the week-long event, attendees will have a chance to get experience with the latest solutions from SUSE, made possible by the deep collaboration between IBM and SUSE. Don’t miss the chance to take an up-close look at solutions like SUSE Linux Enterprise Server for z Systems and LinuxONE; Container Support, running Docker on Z and LinuxONE; OpenStack Cloud; Software Defined Storage; and SUSE Linux Enterprise Server for SAP.

  3. Meet the experts

SUSE experts in every area of the datacenter and IBM Infrastructure will be on hand at IBM Think to answer questions and share insights about game-changing infrastructure solutions that will drive your digital transformation. Stop by our booth (221) for a chance to ask your toughest questions. Ask us about Live Patching the Linux Kernel, ask us about Zero Downtime for SAP Applications, ask us about end-to-end encryption on Z!

  4. Giveaways and Cool Gear

We love cool giveaways. We are giving away Bluetooth headsets, radio-controlled cars and trucks, and giant chameleon plushies – the size of a large dog, I’d say – I know it’s a little bigger than the German Shepherd/Lab mix I’ve got at home. 😛 So stop by and claim a cool prize!

  5. Got SAP? SUSE and IBM Got You Covered

A lot of you are facing the daunting 2025 deadline for moving all your SAP applications to S/4HANA. The good news is, you are not alone: IBM and SUSE have already helped over 1,000 customers get started on that journey. SUSE has long been the Gold Standard for SAP on IBM Power, and in just the last 6 months, we’ve raised the bar again. How? Check out these blog posts:

  1. 1st on Power and Still the Best
  2. Thousands of Customers can’t be Wrong
  3. Live Patching – Woohoo!
  4. How do you like Zero Downtime
  5. SUSE Manager on Power
  6. Only Linux Solution to be HA Certified by SAP
  7. Superior, I mean Far Superior Support

 

In summary – come see us at IBM Think! 😀

SUSE shows highest functional score of Ceph distributions in Gartner’s Critical Capabilities for Object Storage Research*

Thursday, 15 February, 2018

* Critical Capabilities for Object Storage – Published: 25 January 2018 ID: G00304492

It is not easy to get onto a Gartner research note: you have to have the right – and robust – product capabilities, be able to demonstrate them in rigorous questionnaires, and you have to wait for an outcome you can’t control. While you wait for the verdict, you know the analysts don’t take your word for it and that they will have been talking to your actual customers: ‘‘OK, so you’ve deployed SUSE Enterprise Storage – what’s your experience been like, what should other enterprises look out for?”

It’s this combination which makes Gartner so compelling for IT leaders – the mix of detailed capability comparisons based on verifiable data, skilled practitioner opinion, and real customer feedback. As you can imagine, we at SUSE are pleased to have made it on to the Gartner Critical Capabilities for Object Storage report for the very first time. This is a milestone for us, clearly demonstrating the distance we’ve travelled from ‘outsider,’ to a place where we are in the same competitive set as Dell EMC, NetApp, IBM and Huawei.

There are three clear takeaways:

Key takeaway #1 – who’s got the best Ceph?

In Gartner’s assessment of object storage capabilities against specific use cases, SUSE Enterprise Storage ranked ahead of Red Hat Ceph for analytics, archiving, backup, content distribution, and cloud storage. Are you wondering if Red Hat ranked ahead of SUSE on other use cases in the report? Nope: SUSE ranked ahead in every use case covered by Gartner. SUSE is positioned 9th out of 13 overall, against a backdrop where Gartner says there’s a range of pricing, with ‘open source and new entrants pricing at the low end’ and ‘the established vendors … at the high end’. Given that SUSE Enterprise Storage is about value, we’ve got every reason to be patting ourselves on the back.

Key takeaway #2 – ‘the driving force behind the adoption of object storage is cost reduction’

This is the first sentence in the report, and for storage buyers and businesses trying to deal with huge volumes of data without breaking the bank, it’s a critical one. This is where SUSE has staked out its ground: we believe we offer the best value available in the market today, and that is measurable in dollars per terabyte.

Key takeaway #3. SUSE Enterprise Storage isn’t limited to Object.

In this note, Gartner has assessed SUSE Enterprise Storage on the basis of our object storage capabilities alone. That’s one third of the picture: SUSE Enterprise Storage also supports file and block, making it one of the few truly unified solutions.

So, in summary, this report highlights that SUSE Enterprise Storage scores very well across all use cases, with a clear strength in value and return on investment. Our flexible business practices and simple pricing model for SUSE Enterprise Storage cement this advantage.

 

It is also testament to the Ceph project and community efforts that two Ceph distributions are listed amongst the 13 products.

 

On a more technical, speeds-and-feeds note, this report is based on the capabilities of SUSE Enterprise Storage 4. SUSE Enterprise Storage 5 shipped on October 31st, 2017 and now delivers:

 

    • The ability to service environments that require higher levels of performance through the enablement of the new “BlueStore” native object storage backend for Ceph. SUSE Enterprise Storage 5 offers up to double the write performance of previous releases, coupled with significant reductions in I/O latency.
    • The ability to free up capacity and reduce data footprint via BlueStore enabled Data Compression.
    • Increased disk space efficiency of a fault-tolerant solution through enablement of erasure coding for replicated block devices and CephFS data.
    • Lowered operational cost with an expanded advanced graphical user interface for simplified management and improved cost efficiency, using the next generation openATTIC open source storage management system.
    • Simplified cluster management and orchestration through enhanced Salt integration.

 

You can read the full Gartner research note, with our compliments, here.

 

Learn more about SUSE Enterprise Storage

 

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

 

 

How to Enforce DNS-based Egress Container Security Policies in Kubernetes and Openshift

Wednesday, 14 February, 2018

By Gary Duan

While more and more applications are moving to a microservices and container-based architecture, there are legacy applications that cannot be containerized. Access to these applications needs to be secured with egress container security policies when containers are deployed with Kubernetes or Red Hat OpenShift. These legacy applications include database servers and applications developed with .NET frameworks. The cost and risk of migrating these applications to a microservices architecture is so high that many enterprises have a mixed environment where new containerized applications and legacy applications run in parallel.

Application segmentation is a technique to apply access control policies between different services to reduce their attack surface. It is a well-accepted practice for applications running in virtualized environments. In a mixed environment, containerized applications need access to the legacy servers. DevOps and security teams want to define access policies to limit the exposure of legacy applications to a group of containers. In Kubernetes, this can be achieved by Egress Rules of the Network Policy feature.

This article discusses several implementations of egress policy in Kubernetes and Red Hat OpenShift and introduces the NeuVector approach for overcoming limitations with basic Network Policy configurations.

Egress Container Security with Network Plugins

In Kubernetes 1.8+, an Egress policy can be defined like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: app
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.10.0.0/16
    ports:
    - protocol: TCP 
      port: 5432

This example defines a network policy that allows containers with the label ‘role=app’ to access TCP port 5432 on all servers in subnet 10.10.0.0/16. All other egress connections will be denied.

This achieves what we want to some extent, but not all network plugins support this feature. As of February 2018, according to this link, Calico and Weave Net are the only two network plugins that support it commercially.

However, using a subnet address and port to define external services is not ideal. If these services are PostgreSQL database clusters, you really want to be able to specify only those database instances by their DNS names, instead of whitelisting a whole range of IP addresses. Using port definitions is also obsolete in a dynamic, cloud-native world. A better approach is to use application protocols, such as ‘postgresql,’ so that only network connections speaking the proper PostgreSQL protocol can reach these servers.
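To make the limitation concrete, the CIDR/port rule above boils down to a membership check like the following (plain Python with the standard ipaddress module; an illustration of the semantics, not tied to any particular network plugin). Every host in the /16 passes, not just the intended database instances:

```python
import ipaddress

# The egress rule effectively reduces to this check: is the destination
# inside 10.10.0.0/16 and on TCP port 5432?
ALLOWED_CIDR = ipaddress.ip_network("10.10.0.0/16")
ALLOWED_PORT = 5432

def egress_allowed(dst_ip: str, dst_port: int) -> bool:
    """Mimic the subnet+port match performed by the NetworkPolicy above."""
    return ipaddress.ip_address(dst_ip) in ALLOWED_CIDR and dst_port == ALLOWED_PORT

# The intended database node is allowed...
print(egress_allowed("10.10.3.7", 5432))    # True
# ...but so is *any* other host in the /16 listening on 5432,
print(egress_allowed("10.10.250.9", 5432))  # True
# while a legitimate replica outside the range is blocked.
print(egress_allowed("10.11.0.2", 5432))    # False
```

The check knows nothing about DNS names or the protocol actually spoken on port 5432, which is exactly the gap discussed here.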

Egress Container Security with Istio

Istio is an open-source project that creates a service mesh among microservices and layers onto Kubernetes or OpenShift deployments. It does this by “deploying a sidecar proxy throughout your environment”. In the online documentation, they give this example of an Egress policy.

apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: googleapis
  namespace: default
spec:
  destination:
      service: "*.googleapis.com"
  ports:
      - port: 443
        protocol: https

This rule allows containers in the ‘default’ namespace to access subdomains of googleapis.com with https protocol on port 443.

The article cites two limitations:

  1. Because HTTPS connections are encrypted, the application has to be modified to use HTTP protocol so that Istio can scan the payload.
  2. This is “not a security feature“, because the Host header can be faked.

To understand these limitations, we should first examine what an HTTP request looks like.

GET /examples/apis.html HTTP/1.1
Host: www.googleapis.com
User-Agent: curl/7.35.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate

In the above example, you can see that when you type,

curl http://www.googleapis.com/examples/apis.html

the first portion of the URL, ‘www.googleapis.com’, does not appear in the first line of the HTTP request, which contains only the path. It shows up as the Host header instead. This header is where Istio looks to match the domain defined in its network policy. If HTTPS is used, everything here is transmitted encrypted, so Istio cannot find it. A compromised container can replace the header and trick Istio into allowing traffic to a malicious IP.
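A toy version of this header-matching logic makes the weakness obvious (plain Python for illustration – this is not Istio code, and the pattern and requests are made up). The policy check keys off the Host header, which the client fully controls, so a request headed for an attacker’s server passes as long as it carries a whitelisted Host:

```python
import fnmatch

# Toy Host-header policy check, mimicking a "*.googleapis.com" egress rule.
ALLOWED_HOST_PATTERN = "*.googleapis.com"

def host_header(raw_request: str) -> str:
    """Extract the Host header from a raw HTTP/1.1 request."""
    for line in raw_request.split("\r\n")[1:]:
        if line.lower().startswith("host:"):
            return line.split(":", 1)[1].strip()
    return ""

def egress_allowed(raw_request: str) -> bool:
    """Allow the request if its Host header matches the whitelist pattern."""
    return fnmatch.fnmatch(host_header(raw_request), ALLOWED_HOST_PATTERN)

legit = "GET /examples/apis.html HTTP/1.1\r\nHost: www.googleapis.com\r\n\r\n"
# A compromised container connecting to a malicious IP simply fakes the header:
spoofed = "GET /exfiltrate HTTP/1.1\r\nHost: www.googleapis.com\r\n\r\n"

print(egress_allowed(legit))    # True
print(egress_allowed(spoofed))  # True - the check passes even though the
                                # TCP connection goes to an attacker's server
```

Nothing in the check ties the Host string to the actual destination IP, which is why header matching alone is “not a security feature”.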

In fact, a third limitation, which the documentation doesn’t mention, is that this approach only works with HTTP. We cannot define a policy to limit access to our PostgreSQL database cluster.

DNS-based Egress Container Security with NeuVector

Once the NeuVector Enforcer container is deployed on a container host, it automatically starts monitoring the container network connections on that host using deep packet inspection (DPI) technology. DPI enables NeuVector to discover attacks in the network. It can also be used to identify applications and enforce a network policy at the application level when deployed as a complete Kubernetes security solution.

Once a network policy with the whitelisted DNS name(s) is pushed to the Enforcer, the Enforcer’s data path process starts to silently inspect the DNS requests issued by the containers. The Enforcer does not actively send any DNS requests, so no additional network overhead is introduced by NeuVector. It parses DNS requests and responses, and correlates the resolved IPs with the network connections made by the specified containers. Based on the policy defined, the Enforcer can allow, alert on, or deny the network connections.
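Conceptually, that correlation can be sketched like this (plain Python, purely illustrative – not NeuVector’s implementation, and the names and IPs are hypothetical): record the IPs that whitelisted names resolve to as DNS answers are passively observed, then judge each outbound connection against that set.

```python
# Conceptual sketch of DNS-correlated egress filtering (illustrative only).
WHITELISTED_NAMES = {"db.internal.example.com"}

# name -> set of IPs, populated by passively parsing observed DNS responses
resolved = {}

def observe_dns_answer(name, ip):
    """Called for each A record seen in a sniffed DNS response."""
    resolved.setdefault(name, set()).add(ip)

def connection_allowed(dst_ip):
    """Allow a connection only if a whitelisted name resolved to dst_ip."""
    return any(dst_ip in resolved.get(name, set()) for name in WHITELISTED_NAMES)

# The pod looks up the database; the passive inspector records the answer...
observe_dns_answer("db.internal.example.com", "10.1.5.83")
observe_dns_answer("evil.example.net", "203.0.113.9")

print(connection_allowed("10.1.5.83"))    # True: resolved from a whitelisted name
print(connection_allowed("203.0.113.9"))  # False: name not on the whitelist
```

Because the decision is keyed to the IP a trusted name actually resolved to, it is not fooled by a spoofed Host header and works regardless of whether the payload is encrypted.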

This approach works on both unencrypted and encrypted connections, and it will not be tricked by any faked headers. It can be used to enforce policies on any protocol, not just HTTP or HTTPS. More importantly, because of the DPI technology, the Enforcer can inspect the connections and make sure only the proper PostgreSQL protocol is used to access the PostgreSQL databases.

Kubernetes and OpenShift run-time container security should include control of ingress and egress connections to legacy and API-based services. While there are basic protections built into Kubernetes, OpenShift, and Istio, business-critical applications need the enhanced security provided by Layer 7 DPI filtering to enforce egress policies based on application protocols and DNS names.

 

About the Author: Gary Duan

Gary is the Co-Founder and CTO of NeuVector. He has over 15 years of experience in networking, security, cloud, and data center software. He was the architect of Fortinet’s award winning DPI product and has managed development teams at Fortinet, Cisco and Altigen. His technology expertise includes IDS/IPS, OpenStack, NSX and orchestration systems. He holds several patents in security and data center technology.

Next time: more customers (and) in Spring time!

Wednesday, 17 January, 2018

So, after SUSECon 2017, what are my intentions for the next one? Listening to the responses of all the Dutch people who joined the conference, we will try as hard as we can to bring more customers. They now know that it’s worthwhile to visit the event, because customers as well as partners were pleasantly surprised by the breadth of our portfolio. They see our products expand and our offering getting more and more complete. We will invest more in knowledge and skills to support our partners in bringing the best solutions to our customers. We already took this path a year ago, when we hired specialists on the topic of storage. We will keep investing in complementary skills that fit our expanding offerings. Our partners will experience that we are a trustworthy and specialized vendor to work with. We will continue to train our team and – of course – our partners.

Hot topics are cloud computing, DevOps and containerisation. Our partners can expect that we will bring them use cases based on real-life experience. I know: lots to do. But I am backed by a talented team and company.

Finally, let’s look forward because I know all of you are wondering where and when we will hold the next SUSECon. We will continue to alternate geographies, so the next SUSECon will be held in North America. Also, we will hold SUSECon in the Spring instead of the Fall—but naturally that does not mean Spring 2018, which would be right around the corner! Instead, the next SUSECon will be in Spring 2019, which means for this one time we will have a longer than usual span between events.

Right now, we are conducting the venue search for the Spring 2019 event and hope to announce the city and venue sometime next month.

Want to know more about my SUSECon-experiences? Read my other blogs.

2017 Container Technology Retrospective – The Year of Kubernetes

Wednesday, 27 December, 2017

It is not an overstatement to say that, when it comes to container technologies, 2017 was the year of Kubernetes. While Kubernetes has been steadily gaining momentum ever since it was announced in 2014, it reached escape velocity in 2017. Just this year, more than 10,000 people participated in our free online Kubernetes training classes. A few other key data points:

  1. Our company, Rancher Labs, built a product that supported multiple container orchestrators, including Swarm, Mesos, and Kubernetes. Responding to overwhelming market and customer demand, we decided to build Rancher 2.0 to focus 100% on Kubernetes. We are not alone. Even vendors who developed competing frameworks, like Docker Inc. and Mesosphere, announced support for Kubernetes this year.
  2. It has become significantly easier to install and operate Kubernetes. In fact, in most cases, you no longer need to install and operate Kubernetes at all. All major cloud providers, including Google, Microsoft Azure, AWS, and leading Chinese cloud providers such as Huawei, Alibaba, and Tencent, launched Kubernetes as a Service. Not only is it easier to set up and use cloud Kubernetes services like Google GKE; cloud Kubernetes services are also cheaper, because they often do not charge for the resources required to run the Kubernetes master. Because it takes at least 3 nodes to run the Kubernetes API servers and the etcd database, cloud Kubernetes-as-a-Service can lead to significant savings. For users who still want to stand up Kubernetes in their own data center, VMware announced Pivotal Container Service (PKS). Indeed, with more than 40 vendors shipping CNCF-certified Kubernetes distributions, standing up and operating Kubernetes is easier than ever.
  3. The most important sign of the growth of Kubernetes is the significant number of users who started to run their mission-critical production workloads on Kubernetes. At Rancher, because we supported multiple orchestration engines from day one, we have a unique perspective on the growth of Kubernetes relative to other technologies. One Fortune 50 Rancher customer, for example, runs applications handling billions of dollars of transactions every day on Kubernetes clusters.

A significant trend we observed this year was an increased focus on security among customers who run Kubernetes in production. Back in 2016, the most common questions we heard from our customers centered around CI/CD. That was when Kubernetes was primarily used in development and testing environments. Nowadays, the most common feature requests from customers are single sign-on, centralized access control, strong isolation between applications and services, infrastructure hardening, and secret and credentials management. We believe, in fact, that offering a layer to define and enforce security policies will be one of the strongest selling points of Kubernetes. There’s no doubt security will continue to be one of the hottest areas of development in 2018.

With cloud providers and VMware all supporting Kubernetes services, Kubernetes has become a new infrastructure standard. This has huge implications for the IT industry. As we all know, compute workload is moving to public IaaS clouds, and IaaS is built on virtual machines. There is no standard virtual machine image format or standard virtual machine cluster manager, so an application built for one cloud cannot easily be deployed on other clouds. Kubernetes is a game changer: an application built for Kubernetes can be deployed on any compliant Kubernetes service, regardless of the underlying infrastructure. Among Rancher customers, we already see widespread adoption of multi-cloud deployments. With Kubernetes, multi-cloud is easy. DevOps teams get the benefit of increased flexibility, increased reliability, and reduced cost, without having to complicate their operational practices.

I am really excited about how Kubernetes will continue to grow in 2018. Here are some specific areas we should pay attention to:

  1. Service mesh gaining mainstream adoption. At the recent KubeCon
    show, the hottest topic was service mesh. Linkerd, Envoy, Istio,
    and others all gained traction in 2017. Even though the adoption of
    these technologies is still at an early stage, the potential is
    huge. People often think of a service mesh as a microservices
    framework. I believe, however, that a service mesh will bring
    benefits far beyond microservices: it can become a common
    underpinning for all distributed applications. It offers application
    developers a great deal of support in the communication, monitoring,
    and management of the various components that make up an
    application. These components may or may not be microservices; they
    don’t even have to be built from containers. Even though not many
    people use a service mesh today, we believe it will become popular
    in 2018. We, like most people in the container industry, want to
    play a part, and we are busy integrating service mesh technologies
    into Rancher 2.0 now!
  2. From cloud-native to Kubernetes-native. The term “cloud-native
    application” has been popular for a few years. It means applications
    developed to run on a cloud like AWS, rather than on static
    environments like vSphere or bare-metal clusters. Applications
    developed for Kubernetes are by definition cloud-native, because
    Kubernetes is now available on all clouds. I believe, however, the
    world is ready to move from cloud-native to, using a term I first
    heard from Joe Beda, “Kubernetes-native”. I know of many
    organizations developing applications specifically to run on
    Kubernetes. These applications don’t just use Kubernetes as a
    deployment platform. They persist data in Kubernetes’s own etcd
    database. They use Kubernetes custom resource definitions (CRDs) as
    data access objects. They encode business logic in Kubernetes
    controllers. They use kubelets to manage distributed clusters. They
    build their own API layers on the Kubernetes API server. They use
    `kubectl` as their own CLI. Kubernetes-native applications are easy
    to build, run anywhere, and are massively scalable. In 2018, we will
    surely see more Kubernetes-native applications!
  3. Massive number of ready-to-run applications for Kubernetes. Most
    people use Kubernetes today to deploy their own applications. Not
    many organizations ship their application packages as YAML files or
    Helm charts yet. I believe this is about to change. Most modern
    software (such as AI frameworks like TensorFlow) is already
    available as Docker containers, and it is easy to deploy these
    containers on Kubernetes clusters. A few weeks ago, the Apache
    Spark project added support for using Kubernetes as a scheduler, in
    addition to Mesos and YARN. Kubernetes is now a great big-data
    platform. We believe, from this point onward, all server-side
    software packages will be distributed as containers and will be
    able to leverage Kubernetes as a cluster manager. Watch for vast
    growth in the availability of ready-to-run YAML files and Helm
    charts in 2018.
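
The Kubernetes-native pattern in point 2 centers on CRDs: you teach the
API server a new resource kind, and a custom controller watches
instances of that kind to enforce your business logic. Below is a
hedged sketch, where the `example.com` group, the `Backup` kind, and
every field name are purely illustrative (no real project is implied),
using the apiextensions API as it stood in early 2018:

```yaml
# Hypothetical CRD: extends the Kubernetes API with a new "Backup" kind.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
---
# An instance of the new kind. A custom controller would watch Backup
# objects and encode the business logic for actually running backups.
apiVersion: example.com/v1
kind: Backup
metadata:
  name: nightly-db-backup
spec:
  schedule: "0 2 * * *"
  target: postgres-main
```

Once the CRD is registered, `kubectl get backups` works like any
built-in resource, and the API server persists the objects in etcd, so
the application gets storage, an API, and a CLI without building any of
them itself.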

Looking back, the growth of Kubernetes in 2017 far exceeded what all of
us expected at the end of 2016. While we expected AWS to support
Kubernetes, we did not expect interest in service mesh and
Kubernetes-native apps to grow so quickly. 2018 could very well bring us
many unexpected technological developments. I can’t wait to find out!
