Automate DNS Configuration with ExternalDNS

Monday, 18 June, 2018

Take a deep dive into Best Practices in Kubernetes Networking
From overlay networking and SSL to ingress controllers and network security policies, we’ve seen many users get hung up on Kubernetes networking challenges. In this video recording, we dive into Kubernetes networking, and discuss best practices for a wide variety of deployment options.

One of the awesome things about being in the Kubernetes community is the
constant evolution of technologies in the space. There’s so much
purposeful technical innovation that it’s nearly impossible to keep an
eye on every useful project. One such project that had recently escaped my
notice is the ExternalDNS subproject. During a recent POC, a member of
the organization we were speaking with asked about it. I promised to
give the subproject a go, and I was really impressed.

The ExternalDNS subproject

This subproject (the incubator process has been deprecated), sponsored
by sig-network and championed by Tim Hockin, is designed to automatically
configure cloud DNS providers. This is important because it further
enables infrastructure automation, allowing DNS configuration to be
accomplished directly alongside application deployment.

Unlike a traditional enterprise deployment model where multiple siloed
business units handle different parts of the deployment process,
Kubernetes with ExternalDNS automates this part of the process. This
removes the potentially aggravating process of having a piece of
software ready to go while waiting for another business unit to
hand-configure DNS. The collaboration via automation and shared
responsibility that can happen with this technology prevents manual
configuration errors and enables all parties to more efficiently get
their products to market.

ExternalDNS Configuration and Deployment on AKS

Those of you who know me, know that I spent many years as a software
developer in the .NET space. I have a special place in my heart for the
Microsoft developer community and as such I have spent much of the last
couple of years sharing Kubernetes on Azure via Azure Container Service
and Azure Kubernetes Service with the user groups and meetups in the
Philadelphia region. It just so happens the people asking me about
ExternalDNS are leveraging Azure as an IaaS offering. So, I decided to
spin up ExternalDNS on an AKS cluster. For step-by-step instructions and
helper code, check out this repository. If you’re using a different
provider, you may still find these instructions useful. Check out the
ExternalDNS repository for more information.
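To give a feel for how little is involved once ExternalDNS is deployed, here is a minimal sketch of the pattern. Note that the service name, hostname and ports below are placeholders for illustration, not values from the walkthrough above; the annotation itself is the standard one ExternalDNS watches for:

```yaml
# A LoadBalancer Service annotated for ExternalDNS.
# ExternalDNS watches Services like this one and creates a matching
# record in the configured DNS provider (e.g. Azure DNS) pointing at
# the load balancer's public IP.
apiVersion: v1
kind: Service
metadata:
  name: demo-app        # hypothetical name
  annotations:
    # Replace with a hostname in a zone your ExternalDNS
    # deployment is permitted to manage.
    external-dns.alpha.kubernetes.io/hostname: demo.example.com
spec:
  type: LoadBalancer
  selector:
    app: demo-app
  ports:
    - port: 80
      targetPort: 8080
```

With ExternalDNS running against your DNS zone (for Azure, started with `--provider=azure` and credentials for the zone), applying a manifest like this is enough for the record to appear — no ticket to another team required.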



Congratulations! You’re Moving to a Private Cloud Infrastructure!

Wednesday, 30 May, 2018

You’ve done it! After doing your homework, you’ve decided to move your business to a private cloud and you’ve made the smart choice by choosing SUSE OpenStack Cloud. You’re now on your way to “soaring with private and hybrid cloud infrastructure.” And the recent release of SUSE OpenStack Cloud 8 gets you closer than ever to delivering the production-ready private cloud you need.

Clear Skies Ahead?

So everything’s great and you are going to be the superhero of IT!  Or are you?

451 Research states that “There is tremendous demand for people with software-defined infrastructure expertise and experience.” Even worse, a study by McKinsey states that “50% of IT projects fail to deliver the expected value.”  Yikes!

So with all this data showing that the skill shortage to implement OpenStack is real, you need a plan to navigate to clear skies: one that will not take up your team’s current time and resources, but will ensure that your private cloud infrastructure becomes a reality.  Who can you trust as you start to transform your infrastructure?

You’ve made the right choice with SUSE OpenStack Cloud, but did you also know that the SUSE Services team has been trusted for over 25 years by companies just like yours? Whether you are looking for a short-term implementation to jumpstart your software-defined infrastructure or our newly released 12-month SUSE Select Services offering, your SUSE Services team will help get your SUSE OpenStack Cloud environment up and running in record time, so you can reap the benefits of the private cloud infrastructure your business needs.

It’s All About the People You Trust!

Your business success depends on people; people with the right talent at the right time. But, most importantly, your business depends on the people you trust. After all, leveraging the wrong people can set you back. SUSE Services works with hundreds of companies just like yours to help guide them through their IT transformation journey.

Companies in every industry trust SUSE Services teams to facilitate all aspects of IT transformation.  Our best-in-class teams will:

  • Evaluate your existing processes and infrastructure to develop a plan to help you reach your desired outcomes, while maintaining security, minimizing downtime and avoiding business disruption
  • Leverage our broad knowledge of best practices to get the best solution for your business and solve your complex business challenges
  • Work hand-in-hand with your team, facilitating relationships and ensuring knowledge transfer to address any skills gaps

Our teams focus on your success and strive to develop long and trusted relationships with you. After all, as Steve Jobs once said, “Great things in business are never done by one person. They’re done by a team of people.”

Add SUSE Services to your team and let’s do great things together!

Enabling the point of service environment of a modern retailer

Tuesday, 29 May, 2018

The plethora of challenges confronting retailers in this post-Amazon era has made it incumbent upon them to transform their business models in order to generate sustainable growth. Customer preferences and shopping behavior have been evolving at a frantic pace. This new channel-agnostic customer expects the shopping experience to be personalized and seamless across the different points of engagement the retailer has to offer. Leading retailers are embracing this change. They are adapting their business processes and IT infrastructure to serve the needs of this evolving customer.

We at SUSE have been leveraging open source technologies to deliver products that support this transformation in the retail sector, across different areas of their IT infrastructure. One of those areas, and the focus of this post, is the point of service.

Gone are the days when the point of service was a simplistic cash register. The point of service systems of today offer a platform for comprehensive customer engagement, inventory management, reporting and so forth. These systems could be a standard point of service terminal, a self-checkout system, a kiosk or a mobile point of service device carried across the store by a sales executive. The reliability and stability of these systems is more critical today than it has ever been. The loss of a system today does not merely mean a cash counter ‘going down’, which in itself is nothing short of a tragedy in terms of the business loss that could ensue. Rather, it implies that a comprehensive engagement point in the shopper’s journey has been lost.

With SUSE Linux Enterprise Point of Service, we offer a stable, secure and reliable operating system platform for these point of service systems. At the same time, we also recognize that one constant in this industry is the importance of long-term stability of the entire point of service stack. Frequent re-deployments of newer versions of an operating system on point of service devices, because the older version has gone out of support, is not an ideal scenario.

In some cases, this need for stability stems from the need to maintain the entire stack at the point of service for the long term. This is required in order to minimize the operating expenses that come from re-deploying and setting up the point of service environment over and over again. Possible incompatibilities with the software application stack and the peripheral devices, and the effort needed to deliver that compatibility, add to the cost pressures. In other cases, the need for stability is more a consequence of the lack of a solution that can manage the entire life-cycle of the point of service asset: from provisioning, to package management, to patching, to configuration, to monitoring and finally re-deployment.

We are delivering products that address the concerns of both these sets of customers. For the former, we are committed to supporting SUSE Linux Enterprise Point of Service on point of service client devices for extended periods. The aim is to allow retailers to exploit the full life-cycle of their deployed hardware, thereby maximizing the return on their investments at the point of service. The latest release to be supported for an extended period on client devices is SUSE Linux Enterprise Point of Service 12 Service Pack 3. We will support this version, on certified point of service hardware, until March 31, 2025. The product life-cycle details can be found here. These 64-bit client images can be built using the SUSE Linux Enterprise Point of Service Image Server 12 component.

At the same time, with the introduction of SUSE Manager into our retail offering, we are enabling the retailers to manage their dispersed store assets, throughout the life-cycle of those assets. This product offering is called SUSE Manager for Retail, something I introduced in one of my earlier blog posts.

SUSE Manager for Retail reference architecture

So whether it is about a stable operating system that is part of the point of service stack, or a Linux systems management solution to deploy, manage and monitor that stack, SUSE has a solution to solve the point of service related problems of the modern retailer.

For more information, please visit the SUSE Linux Enterprise Point of Service and SUSE Manager for Retail product pages.

The Rise of Multimodal IT and What It Means To You

Friday, 25 May, 2018

Multimodal IT is a consequence of organizations around the world transforming their enterprise systems to embrace modern and agile technologies. In order to ensure that traditional IT environments smoothly adapt to this new technology mix, multiple infrastructures for different workloads and applications are necessary. Often, this means integrating cloud-based platforms into your enterprise systems, or merging containerized development with traditional development, or combining legacy applications with microservices.

Multimodal IT = A co-existence of traditional infrastructure, software-defined infrastructure and application-oriented architectures.

Multimodal IT Infographic

The dictionary defines mul·ti·mod·al as “characterized by several different modes of activity or occurrence”. In order to understand multimodal IT, let’s first define “mode” in the context of IT.

Mode simply means a type of IT infrastructure and its associated set of processes. Gartner uses the Bimodal concept to illustrate the existence of two types of IT – Mode 1 and Mode 2.

  • Mode 1 IT typically implies traditional IT infrastructure, waterfall or ITIL processes, and long-cycle times (order of months or years).
  • Mode 2 IT on the other hand implies software-defined infrastructure, agile technology, agile methods such as Scrum, DevOps methods, and short cycle times (order of days or weeks).

 

Multimodal IT suggests that along with Traditional Infrastructure (Mode 1) and Software-Defined Infrastructure (Mode 2) there are variations and combinations of mode 1 and mode 2. So an IT organization could have a traditional infrastructure using ITIL processes, a software-defined infrastructure using DevOps, or a mix of infrastructures that is undergoing digital transformation, where some aspects are traditional and other aspects are software-defined. Other variations may involve moving agile workloads across a traditional on-premise infrastructure and a public cloud.

Many organizations find themselves undergoing a journey of IT transformation. They have a traditional IT infrastructure with physical servers or virtualized servers, running monolithic or N-tier applications and use waterfall development processes. As they transform, some of the on-premise workloads and servers get moved to the cloud. The legacy apps are containerized directly or get converted to microservices. As a result, the organization finds itself using a mix of traditional infrastructure and software-defined infrastructure, which is essentially a multimodal IT scenario.

Let’s take a look at a few Multimodal IT scenarios.

Multimodal scenario 1 – Mixture of IT infrastructure:

Servers reside within traditional infrastructure and applications run on software-defined infrastructure

If you are running databases (SQL, Oracle, SAP, etc.), it is likely that these are running on a traditional infrastructure. However, you may have started to change the front-end applications to be designed using microservices. The base business logic could be in containers. The analytics performed on underlying data could also be in containers. So the containers running the microservices applications are on the software-defined infrastructure while accessing the back-end databases over traditional infrastructure. The IT team, in this case, gets the reliability and security of the traditional infrastructure while leveraging the benefits of containers for customer-facing value-add applications. In essence, the business applications run on software-defined infrastructure and access the back-end databases housed in the traditional infrastructure.

Multimodal Scenario 1

 

Multimodal scenario 2 – Mobility of application workloads across mixed IT infrastructure:

Move workloads across traditional and software-defined infrastructure

A development team starts with developing container workloads on a traditional on-premise enterprise server that runs a container engine on a physical or virtual server. As the project progresses the scope of the containers grows. The number of containers starts running into the thousands. In order to support the scale and required orchestration of these containers, the workloads may move to Kubernetes and a software-defined infrastructure where the compute, network and storage can be easily provisioned and deployed. The team may move the workloads transparently from on-premise servers to the cloud and vice-versa for testing and production. Consequently, the development team could end up using both traditional servers as well as a software-defined infrastructure to maximize its development efficiency.

 

Multimodal scenario 3 – Mixture of Processes:

Processes of traditional infrastructure used for technology of software-defined infrastructure

A high-tech IT company starts using containers for designing new applications and transforming parts of current monolithic apps. The data center at this organization mostly uses a traditional infrastructure with long-term support cycles. Therefore the data center administrator requires the container engine to be supported for several years. In general, the container engine and container apps follow a continuous integration/continuous deployment model, with update cycles on the order of days, weeks or a few months. However, in this case, the support cycle for a software-defined technology is expected to be in line with a traditional infrastructure. In essence, the company applies the upgrade processes of the traditional IT data center to the container engine, which is typically used with the agile methods of software-defined infrastructure, creating a mixed set of processes.

 

Multimodal scenario 4 – Mix of deployment scenarios:

A variety of deployment scenarios co-exist spanning traditional and software-defined infrastructure

An IT team uses a traditional IT infrastructure for security and reliability. The servers provide uninterrupted service for many years with very few major upgrades. The applications here are not required to be moved around. Another team is supporting a customer facing e-commerce application that is continuously updated with features and bug fixes. The workloads are deployed on a cloud to support scalability and flexibility. Also, the team requires extra security for some of its applications and uses a virtualized infrastructure to run containers. Yet another team is responsible for analytics and houses large amounts of data, using an OpenStack cloud to manage the compute, storage and network resources across the IT organization.

 

The above scenarios illustrate a few common use cases of Multimodal IT. This brings us to the next step.

How do we address the needs of Multimodal IT?

The starting point is to recognize that the organization has Multimodal needs. Different types of IT infrastructures may exist with unique requirements both in terms of technologies and processes involved.

The mixed IT infrastructure as a whole delivers much value to the business. Recognizing the different types of IT can lead to maximizing the value for each of the types (modes) of IT. Silos of IT domains can sometimes prevent you from fully maximizing the benefits of Multimodal IT. The goal is to break the silos using software-defined infrastructures and processes and, consequently, drive maximum benefits out of the overall Multimodal IT environment for the business. The benefits can vary depending upon the business goals, e.g., increase efficiency, optimize costs, improve development, improve maintenance, etc.

The challenges of Multimodal IT

It is relatively easy to show how different tools and setups help with specific business needs, and to validate an investment by the return on investment (ROI) of adopting the right technology.

However, the challenge for IT managers is how to implement it with the available staff and their existing skills. New headcount might not get approved, and training time might interrupt operations or delay the deployment of new tools. It is important to look for a platform and partner that support multimodal IT without the need for extensive new training or too many additional skills.

Build bridges across Multimodal IT

One of the approaches to derive benefits of a Multimodal IT environment is by building bridges across the different modes of IT in an organization. You can modernize traditional infrastructure and, at the same time, protect your IT investment by avoiding the disruptive approach of rip-and-replace. By bridging traditional and software-defined approaches, you can protect your current investment in the traditional infrastructure and incrementally transform or adapt new technology.

Use a platform that supports multimodal IT

Multimodal IT creates a new set of requirements from the underlying operating system platform. An operating system designed for Multimodal IT is called a Multimodal Operating System (OS). The multimodal OS provides the foundation so that traditional applications keep running, software-defined components are built seamlessly and application-oriented architectures are supported. The multimodal OS bridges the traditional and software-defined infrastructure and helps break the silos.

In the coming days and weeks we will explore, through a series of blogs, the various aspects of Multimodal IT and a Multimodal OS.

Stay tuned @RajMeel7


SUSE & SAP S/4HANA @ SAPPHIRE NOW 2018: “Opening” June 5

Monday, 21 May, 2018

The perennial venue for SAP’s SAPPHIRE NOW Conference – the Orange County Convention Center in Orlando

That SAP is friendly to the Open Source software community isn’t really news, right? Going back more than a decade and a half, there are countless examples that show SAP exploring, testing and certifying their solutions on Linux variants. SAP developers are indeed important contributors to a number of open source projects. SAP has adopted OpenStack and Ceph as foundational elements for SAP Cloud Platform and has joined Cloud Foundry Foundation in order to keep pace with and contribute to the evolution of open source software-defined infrastructure technologies such as containers and orchestration.

SAP S/4HANA, not to mention Leonardo, provide further examples of SAP’s core adoption of open source to take on the role of providing the underpinnings for their licensed software business. This greatly simplifies the support matrix that SAP and its partners have to consider. Supporting only Linux for HANA and, by extension, for S/4HANA eliminates many pre-release testing, qualification and ongoing support concerns for SAP.  We at SUSE appreciate this important evolution. By appreciate, I mean that we understand the business decision that SAP has made *and* we recognize the value this brings to our business.

As the provider of perhaps the most widely-used set of business applications on the planet, SAP has built a vast and impressive ecosystem in which other information technology companies (like SUSE) participate. Certainly all the big server, storage and network equipment makers play here, as do hundreds of service providers and thousands of ISVs and smaller integrators. But quite possibly no other technology company in recent history has shown the depth of commitment to SAP and SAP’s customers’ mission that SUSE has.

SUSE’s commitment to SAP customers was in evidence in our release of SUSE Linux Enterprise for SAP Applications in 2012, and in our close collaboration with hardware vendors such as HPE, IBM, Lenovo, Cisco, Dell EMC, Fujitsu and others to deliver SAP HANA appliances and to help smooth the path to HANA adoption early on. And more recently, in our important developments in the areas of open source systems management and zero downtime for SAP landscapes, including our integrated high availability solutions. We are fully committed to bringing the benefits of open source – and SUSE’s unique approach to the SAP market – to every SAP customer.  This started with R/3 & NetWeaver/ECC on Oracle and DB2 and continues today with HANA, S/4HANA, BW/4HANA and Leonardo and all that entails.

If you are able to attend SAPPHIRE in Orlando on June 5-7, please make it a point to come see us in the S/4HANA kiosk in the huge SUSE booth near the dining hall at location #859.  We’ll be ready, willing and able to go into as much detail as you like there.  And/or check out our comprehensive set of materials on SUSE and SAP on https://www.suse.com/programs/transitioningtosap  – and thanks.

Greetings from “The Big Easy” and Nutanix .NEXT

Friday, 11 May, 2018

We’re just finishing up here at the Nutanix .NEXT event in New Orleans, and I have to say – what a city, and what an event!

Here in “The Big Easy”, as New Orleans is affectionately called, Nutanix has been showing partners and customers how easy it is to deploy and gain benefit from “hyper-converged, software-defined and platform-ready business infrastructure – that revolutionizes computing as we know it”, as described by Inder Sidhu, EVP global customer success and business operations at Nutanix.

Inder explains that the key to Nutanix’s success thus far is in large part an understanding of how convergence provides better value. The example he points to is Apple. He recalls comedian Jerry Seinfeld once jokingly commenting that the iPhone is wrongly named: “Why do they call it an iPhone? No one uses it to call. We just text and email and browse and take pictures.” Exactly! The iPhone is so much more than a phone. It’s a camera, video recorder, music player, Internet browser, email device, GPS navigator, and more. It’s a workout companion, alarm clock and weather forecaster in the palm of your hand.

Convergence is a big reason why, and Nutanix is really doing the same thing. Nutanix combines infrastructure components that customers typically buy from several companies, such as servers, storage devices and virtualization software, and makes it all work together in a much better, seamlessly integrated way. What could be easier?

So, what is SUSE bringing to the mix?

Well, let’s say you want to gain the benefits of private cloud infrastructure, such as available with OpenStack, combined with the hyper-convergence benefits of Nutanix. SUSE and Nutanix have got you covered! SUSE OpenStack Cloud simplifies OpenStack complexity and Nutanix removes data center infrastructure complexity.

Let’s say you want to run your Linux applications and workloads on Nutanix. Well, you’d undoubtedly want to deploy on THE enterprise Linux certified and supported on AHV – and that would be SUSE Linux Enterprise Server.

Now that’s a Big Easy, if you ask me.

For existing Nutanix customers, SUSE Linux Enterprise and SUSE OpenStack Cloud provide the enterprise-grade open source alternatives that include open APIs to increase flexibility, portability and a guard against vendor lock-in. For existing SUSE customers, Nutanix provides the ideal option for infrastructure deployment where customers want the simplicity that hyper-convergence provides, and the value represented by a single, flexible, unified software plane – a single unified stack. With Nutanix and SUSE, customers are freed to use any combination of technologies they choose. They no longer have to spend their time and energy thinking about infrastructure; instead they can focus on the business applications that take their organizations forward.

Learn more about Nutanix and SUSE. Get details on Nutanix and SUSE OpenStack Cloud.

Open. Redefined. The 2018 London SUSE Expert Day

Wednesday, 25 April, 2018

Yesterday was the SUSE Expert Day in London, held at the historic Churchill War Rooms, part of the Imperial War Museum and a mere stone’s throw from Downing Street and the Houses of Parliament. This underground nerve centre served not just as the base from which Winston Churchill and his inner circle directed the Second World War, but also sheltered them from the bombing raids.

It was easy to lose your way in the labyrinthine tunnels, or to get distracted by the many exhibits surrounding you, but once we found our way to the auditorium, the day began, as all good days do, with great coffee, bacon rolls and pastries.

Such is the popularity of the SUSE Expert Days that event registration had to be closed early, and the room was filled with attendees from enterprises around the country – technology companies, solutions providers, cloud service providers and even education establishments had gathered to learn more about the open, open source company.

Technology trends shaping our world

Jeff Kirkpatrick, our UK Alliances Manager, took the stage for the opening keynote, talking about the technology trends shaping our world, the explosion of open source projects available that support these technology trends, and how best to begin when looking to use open source technologies. Whether you are interested in AI/machine learning, Internet of Things (IoT), Big Data/Analytics, Blockchain or Software-Defined anything (SDx), there is inevitably an open source project that can make it easier, and more cost-effective, to get started. The IT skills gap remains an issue for most businesses though, which is why it is important to find a trusted partner with proven expertise in open source.

Software-Defined Infrastructure – the future of the data center

Throughout the day, one theme remained constant – Software-Defined Infrastructure (SDI) is the key to innovation and remaining relevant in this ever-evolving world. While many businesses turn to DevOps to enable constant innovation, this needs the agility, flexibility and cost-efficient route to market that SDI provides. Whether you’re experimenting with microservices and containers, building a private cloud, or looking into ways to store the vast amounts of data that enterprises today are generating and needing to analyse, SDI is the way to do it. The SUSE 2017 Global Cloud Research backs this up, with 95% of IT decision makers believing that SDI is the future of the data center.

OpenStack – the heart of Software-Defined Infrastructure

The most important thing when looking at SDI is to ensure that you have an efficient orchestration layer that can interoperate with and manage all of the component parts – storage, compute, containers, networking, etc. That’s where OpenStack comes in. As a keen advocate for OpenStack, I would say this, but having one, simple-to-use interface with a common API to control your Software-Defined Infrastructure makes sense.

The latest version of SUSE OpenStack Cloud is due for release soon, and we’re looking forward to sharing more about it in due course, but in the meantime, the ever popular Sam the IT Admin has been learning more about SUSE OpenStack Cloud. Check out her latest adventures, and if you’re coming to the Vancouver OpenStack Summit next month, come to booth B10 and say hello. While Sam the IT Admin sadly won’t be there, we’ll have a team of friendly open source experts to chat to and quite possibly some stuffed chameleons to give away. No chameleons were harmed in the making of the SUSE booth or giveaways and any resemblance to actual chameleons (other than Geeko) living or dead is entirely coincidental.

If you’re near Dublin, then the Expert Day team will be putting on another great show there on the 26th of April, and in Valencia on the 10th of May. You deserve victory in this ever-changing digital world – let SUSE help you achieve your goals.

SUSE Sponsors IBM Systems Technical University in Orlando

Wednesday, 25 April, 2018

For the nearly three years that I’ve been at SUSE, I’ve had the privilege of participating in five different IBM Systems Technical University events in Europe and the US. It’s always been a great opportunity not only to highlight the latest SUSE innovations available for the IBM Power platform but also to talk to IBMers, customers and Business Partners about their interests and challenges.  Well, SUSE’s back as a sponsor for the IBM Systems Technical University in Orlando April 30 – May 4 and I can’t wait!

As always we’ll have a kiosk in the Solution Center. It’s number 46 on the floor plan. Here we can tell you why SUSE Linux Enterprise Server for SAP Applications is still the leading OS for running your POWER8- and POWER9-based SAP HANA systems today, and as you look to the future with SAP S/4HANA. We’ll have our usual assortment of fun giveaways (yes, there will be chameleon plushies!) and prize drawings every day.

But the really good stuff is in our three breakout sessions, where you get to hear from experts about the latest SUSE solutions for IBM Power Systems. Be sure to add these to your conference agenda:

l102568: Configuration and patch management for SAP HANA with SUSE Manager and Live Patching

Monday, April 30th at 1:45 in Lake Down B

In this session, Mike Friesenegger and I will talk about the newest add-on features for your SAP HANA on Power systems, SUSE Manager and SUSE Linux Enterprise Live Patching. This presentation gives you information about the capabilities and operation of these products on Power servers and includes live demonstrations.

l101264: Critical tips to set up SAP HANA high availability

Tuesday, May 1st at 4:30 in Orange Ballroom B and Thursday, May 3rd at 4:30 in Florida Ballroom 1

This session is presented by Mike Friesenegger, SUSE Technical Strategist and all-around expert on IBM systems and SAP solutions. He’ll take you through some technical details that you need to know when setting up various options for SAP HANA system high availability, including the automated recovery capabilities we pioneered for Systems Replication.

l101201: Never reboot SAP HANA on IBM Power Systems? Not quite, but we are getting closer

Wednesday, May 2nd at 1:45 in Orange Ballroom E

Jay Kruemcke is SUSE’s IBM Champion and OpenPower Board Member, and in his spare time, he’s also the Product Manager for SUSE Linux Enterprise Server for POWER. He’ll be giving an overview of what SUSE is delivering to support high performance, high availability and ease of management for SAP operations on IBM Power.

We’ll see you in Orlando!

Follow me on Twitter: @MichaelDTabron

The Ultimate Guide to Kubernetes Security

Wednesday, 18 April, 2018

By Fei Huang and Gary Duan

Containers and tools like Kubernetes enable enterprises to automate many aspects of application deployment, providing tremendous business benefits. But these new deployments are just as vulnerable to attacks and exploits from hackers and insiders as traditional environments, making Kubernetes security a critical component for all deployments. Attacks involving ransomware, crypto mining, data stealing, and service disruption will continue to be launched against new container-based virtualized environments in both private and public clouds.

To make matters worse, new tools and technologies like Docker and Kubernetes will themselves be under attack as hackers look for ways into an enterprise’s prized assets. The recent Kubernetes exploit at Tesla is just the first of many container-technology-based exploits we will see in the coming months and years.

The hyper-dynamic nature of containers creates the following Kubernetes security challenges:

  • Explosion of East-West Traffic. Containers can be dynamically deployed across hosts or even clouds, dramatically increasing the east-west, or internal, traffic that must be monitored for attacks.
  • Increased Attack Surface. Each container may have a different attack surface and vulnerabilities which can be exploited. The attack surface introduced by container orchestration tools such as Kubernetes and Docker must also be considered.
  • Automating Security to Keep Pace. Old models and tools for security will not be able to keep up in a constantly changing container environment.

In order to assess the security of your containers during run-time, here are a few security related questions to ask your Kubernetes team:

  • Do you have visibility into the Kubernetes pods being deployed? For example, how are application pods or clusters communicating with each other?
  • Do you have a way to detect bad behavior in east/west traffic between containers?
  • Are you able to monitor what’s going on inside a pod or container to determine if there is a potential exploit?
  • Have you reviewed access rights to the Kubernetes cluster(s) to understand potential insider attack vectors?

 

For security teams, it’s critical to automate the security process so it doesn’t slow down the DevOps and application development teams. Kubernetes security teams should be able to answer these questions for containerized deployments:

  • How can you shorten the security approval process for your developers to get new code into production?
  • How do you simplify security alerts and operations team monitoring to pinpoint the most important attacks requiring attention?
  • How do you segment particular containers or network connections in a Kubernetes environment?

Before we talk about Kubernetes security, let’s review the basics of what Kubernetes is.

Kubernetes 101

Kubernetes is a container orchestration tool which automates the deployment, update, and monitoring of containers. Kubernetes is supported by all major container management and cloud platforms such as Red Hat OpenShift, Docker EE, Rancher, IBM Cloud, AWS EKS, Azure, SUSE CaaS, and Google Cloud. Here are some of the key things to know about Kubernetes:

  • Master Node. The server which manages the Kubernetes worker node cluster and the deployment of pods on nodes.
  • Worker Node. Also known as slaves or minions, these servers typically run the application containers and other Kubernetes components such as agents and proxies.
  • Pods. The unit of deployment and addressability in Kubernetes. A pod has its own IP address and can contain one or more containers (typically one).
  • Services. A service functions as a proxy to its underlying pods and requests can be load balanced across replicated pods.
  • System Components. Key components which are used to manage a Kubernetes cluster include the API Server, Kubelet, and etcd. Any of these components are potential targets for attacks. In fact, the recent Tesla exploit used unprotected Kubernetes console access to install crypto mining software.
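To make these building blocks concrete, here is a minimal sketch of a pod and the service that fronts it. All names, labels, images, and ports below are hypothetical placeholders, not from any particular deployment:

```yaml
# A single-container pod; Kubernetes assigns it its own routable IP.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod           # hypothetical name
  labels:
    app: web
spec:
  containers:
    - name: web
      image: nginx:1.14   # placeholder application image
      ports:
        - containerPort: 80
---
# A service acting as a proxy; requests are load balanced
# across all pods carrying the label app=web.
apiVersion: v1
kind: Service
metadata:
  name: web-svc           # hypothetical name
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
```

Applying both with `kubectl apply -f` gives the service a stable address even as the pods behind it come and go.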

Kubernetes Networking Basics

The main concept in Kubernetes networking is that every pod has its own routable IP address. Kubernetes (actually, its network plug-in) takes care of routing all requests internally between hosts to the appropriate pod. External access to Kubernetes pods can be provided through a service, load balancer, or ingress controller, which Kubernetes routes to the appropriate pod.

Pods communicate with each other over the network overlay, and load balancing and DNAT take place to get connections to the appropriate pod. Packets may be encapsulated with appropriate headers to get them to the appropriate destination, where the encapsulation is removed.

With all of this overlay networking being handled dynamically by Kubernetes, it is extremely difficult to monitor network traffic, much less secure it.

 

What to Be Aware Of: Kubernetes Vulnerabilities and Attack Vectors

Attacks on Kubernetes containers running in pods can originate externally through the network or internally by insiders, including victims of phishing attacks whose systems become conduits for insider attacks. Here are a few examples:

  1. Container compromise. An application misconfiguration or vulnerability enables the attacker to get into a container to start probing for weaknesses in the network, process controls, or file system.
  2. Unauthorized connections between pods. Compromised containers can attempt to connect with other running pods on the same or other hosts to probe or launch an attack. Although Layer 3 network controls whitelisting pod IP addresses can offer some protection, attacks over trusted IP addresses can only be detected with Layer 7 network filtering.
  3. Data exfiltration from a pod. Data stealing is often done using a combination of techniques, which can include a reverse shell in a pod connecting to a command/control server and network tunneling to hide confidential data.

The Most Damaging Attacks Have a ‘Kill Chain’

The most damaging attacks often involve a kill chain, or series of malicious activities, which together achieve the attacker’s goal. These events can occur rapidly, within a span of seconds, or can be spread out over days, weeks or even months.

Detecting events in a kill chain requires multiple layers of security monitoring, because different resources are used. The most critical vectors to monitor in order to have the best chances of detection in a production environment include: 

  • Network inspection. Attackers typically enter through a network connection and expand the attack via the network. The network offers the first opportunity to detect an attack, subsequent opportunities to detect lateral movement, and the last opportunity to catch data-stealing activity.
  • Container monitoring. An application or system exploit can be detected by monitoring the process, syscall, and file system activity in each container to determine if a suspicious process has started or attempts are being made to escalate privileges and break out of the container.
  • Host security. Here is where traditional host (endpoint) security can be useful to detect exploits against the kernel or system resources. However, host security tools must also be Kubernetes and container aware to ensure adequate coverage.

In addition to the vectors above, attackers can also attempt to compromise deployment tools such as the Kubernetes API Server or console to gain access to secrets or be able to take control of running pods.

Attacks on the Kubernetes Infrastructure Itself

In order to disable or disrupt applications or gain access to secrets, resources, or containers, hackers can also attempt to compromise Kubernetes resources such as the API Server or Kubelets. The recent Tesla hack exploited an unprotected console to gain access to the underlying infrastructure and run crypto mining software.

For example, if an API Server token is stolen or an identity is compromised, an attacker can impersonate an authorized user to access the cluster, deploy malicious containers, or stop critical applications from running.

By attacking the orchestration tools themselves, hackers can disrupt running applications and even gain control of the underlying resources used to run containers. In Kubernetes, there have been published privilege escalation mechanisms (via the Kubelet, access to etcd, or service account tokens) which can enable an attacker to gain cluster admin privileges from a compromised container.

Preparing Kubernetes Worker Nodes for Production

Before deploying any application containers, the host systems for the Kubernetes worker nodes should be locked down. Here are the most effective ways to lock down the hosts.

Recommended Pre-Deployment Security Steps

  • Use namespaces
  • Restrict Linux capabilities
  • Enable SELinux
  • Utilize Seccomp
  • Configure Cgroups
  • Use R/O Mounts
  • Use a minimal Host OS
  • Update system patches
  • Run CIS Benchmark security tests
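Several of the steps above can be expressed directly in a pod spec. The following is a minimal sketch, assuming a recent Kubernetes version and a hypothetical application container (the names and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app        # hypothetical name
spec:
  containers:
    - name: app
      image: myapp:1.0      # placeholder image
      securityContext:
        readOnlyRootFilesystem: true    # use R/O mounts
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]                 # restrict Linux capabilities
          add: ["NET_BIND_SERVICE"]     # re-add only what the app needs
        seccompProfile:
          type: RuntimeDefault          # utilize seccomp
      resources:
        limits:                         # cgroup resource limits
          memory: "256Mi"
          cpu: "500m"
```

Namespaces, a minimal host OS, patching, and CIS Benchmark testing still have to be handled at the cluster and host level; the spec above only covers the per-container controls.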

Real-Time, Run-Time Kubernetes Security

Once containers are running in production, the three critical security vectors for protecting them are network filtering, container inspection, and host security.

Inspect and Secure the Network

A container firewall is a new type of network security product which applies traditional network security techniques to the new cloud-native Kubernetes environment. There are different approaches to securing a container network with a firewall, including:

  • Layer 3/4 filtering, based on IP addresses and ports. This approach includes Kubernetes network policy to update rules in a dynamic manner, protecting deployments as they change and scale. Simple network segmentation rules are not designed to provide the robust monitoring, logging, and threat detection required for business critical container deployments, but can provide some protection against unauthorized connections.
  • Web application firewall (WAF) attack detection can protect web-facing containers (typically HTTP-based applications) using methods that detect common attacks, similar to the functionality of traditional WAFs. However, the protection is limited to external attacks over HTTP, and lacks the multi-protocol filtering often needed for internal traffic.
  • Layer-7 container firewall. A container firewall with Layer 7 filtering and deep packet inspection of inter-pod traffic secures containers using network application protocols. Protection is based on application protocol whitelists as well as built-in detection of common network-based application attacks such as DDoS, DNS attacks, and SQL injection. Container firewalls are also in a unique position to incorporate container process monitoring and host security into the threat vectors monitored.
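The first approach, Layer 3/4 filtering, is what the built-in Kubernetes NetworkPolicy resource provides. As a sketch, the policy below allows only pods labeled app=frontend to reach pods labeled app=db, and only on the database port (the labels and port number are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-frontend   # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: db               # policy applies to database pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 5432        # hypothetical database port
```

Note that such a policy says nothing about what flows over the allowed connection; detecting an attack carried over that trusted path still requires Layer 7 inspection.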

Deep packet inspection (DPI) techniques are essential for in-depth network security in a container firewall. Exploits typically use predictable attack vectors: a malicious HTTP request with a malformed header, or an executable shell command embedded in an XML object. Layer 7 DPI-based inspection can look for and recognize these methods. Container firewalls using these techniques can determine whether each pod connection should be allowed through, or blocked as a possible attack.

Given the dynamic nature of containers and the Kubernetes networking model, traditional tools for network visibility, forensics, and analysis can’t be used. Simple tasks such as packet captures for debugging applications or investigating security events are not simple any more. New Kubernetes and container aware tools are needed to perform network security, inspection and forensic tasks.

Container Inspection

Attacks frequently utilize privilege escalations and malicious processes to carry out an attack or spread it. Exploits of vulnerabilities in the Linux kernel (such as Dirty Cow), packages, libraries or applications themselves can result in suspicious activity within a container.

Inspecting container process and file system activity and detecting suspicious behavior is a critical element of container security. Suspicious activity such as port scanning, reverse shells, or privilege escalations should all be detected. There should be a combination of built-in detection as well as a baseline behavioral learning process which can identify unusual processes based on previous activity.

If containerized applications are designed with microservices principles in mind, where each application in a container has a limited set of functions and the container is built with only the required packages and libraries, detecting suspicious process and file system activity is much easier and more accurate.

Host Security

If the host (e.g. Kubernetes worker node) on which containers run is compromised, all kinds of bad things can happen. These include:

  • Privilege escalations to root
  • Stealing of secrets used for secure application or infrastructure access
  • Changing of cluster admin privileges
  • Host resource damage or hijacking (e.g. crypto mining software)
  • Stopping of critical orchestration tool infrastructure such as the API Server or the Docker daemon
  • Starting of suspicious processes mentioned in the Container Inspection section above

Like containers, the host system needs to be monitored for these suspicious activities. Because containers can run operating systems and applications just like the host, monitoring container processes and file systems activity requires the same security functions as monitoring hosts. Together, the combination of network inspection, container inspection, and host security offer the best way to detect a kill chain from multiple vectors.

Securing the Kubernetes System and Resources

Orchestration tools such as Kubernetes and the management platforms built on top of it can be vulnerable to attacks if not protected. They expose new attack surfaces for container deployments which previously did not exist, and hackers will attempt to exploit them. The recent Tesla hack and Kubelet exploit are just the start of the continuing cycle of exploit and patch that can be expected for new technologies.

In order to protect Kubernetes and the management platforms themselves from attacks, it’s critical to properly configure RBAC for system resources. Here are the areas to review and configure for proper access controls.

  1. Protect the API Server. Configure RBAC for the API Server or manually create firewall rules to prevent unauthorized access.
  2. Restrict Kubelet Permissions. Configure RBAC for Kubelets and manage certificate rotation to secure the Kubelet.
  3. Require Authentication for All External Ports. Review all ports externally accessible and remove unnecessary ports. Require authentication for those external ports needed. For non-authenticated services, restrict access to a whitelist source.
  4. Limit or Remove Console Access. Don’t allow console/proxy access unless properly configured for user login with strong passwords or two-factor authentication.
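Steps 1 and 2 above rely on Kubernetes RBAC. As a minimal sketch, the manifests below grant a single user read-only access to pods in one namespace; the namespace, user, and role names are all hypothetical:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production     # hypothetical namespace
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]   # read-only access, no create/delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: production
  name: read-pods
subjects:
  - kind: User
    name: jane              # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Granting narrowly scoped roles like this, instead of cluster-admin, limits what an attacker can do with a stolen token or compromised identity.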

When combined with the robust host security discussed earlier for locking down the worker nodes, the Kubernetes deployment infrastructure can be protected from attacks. However, monitoring tools should also be used to track access to infrastructure services in order to detect unauthorized connection attempts and potential attacks.

For example, in the Tesla Kubernetes console exploit, once access to worker nodes was compromised, hackers created an external connection to China to control crypto mining software. Real-time, policy based monitoring of the containers, hosts, network and system resources would have detected suspicious processes as well as unauthorized external connections.

Auditing and Compliance for Kubernetes Environments

With the rapid evolution of container technology and tools such as Kubernetes, enterprises will be constantly updating, upgrading, and migrating the container environment. Running a set of security tests designed for Kubernetes environments will ensure that security does not regress with each change. As more enterprises migrate to containers, the changes in the infrastructure, tools, and topology may also require re-certification for compliance standards like PCI.

Fortunately, there is already a comprehensive set of Kubernetes security and Docker security checks in the CIS Benchmark for Kubernetes and the Docker Bench tests. Regularly running these tests and confirming expected results should be automated.

These tests focus on the following areas:

  • Host security
  • Kubernetes security
  • Docker daemon security
  • Container security
  • Properly configured RBACs
  • Securing data at rest and in transit

Vulnerability scanning of images and containers in registries and in production is also a core component for preventing known exploits and achieving compliance. But, vulnerability scanning is not enough to provide the multiple vectors of security needed to protect container runtime deployments.

To learn how to automate security into your Build, Ship, Run processes, see this post on Continuous Container Security.

Run-Time Kubernetes Security – The NeuVector Multi-Vector Container Firewall

Orchestration and container management tools are not designed to be security tools, even though they provide basic RBAC and infrastructure security features. For business critical deployments, specialized Kubernetes security tools are needed for run-time protection. Specifically, a security solution should address security concerns across the three primary security vectors: network, container, and host.

NeuVector is a highly integrated, automated security solution for Kubernetes, with the following features:

  • Multi-vector container security addressing the network, container, and host.
  • Layer 7 container firewall to protect east-west and ingress/egress traffic.
  • Container inspection for suspicious activity.
  • Host security for detecting system exploits.
  • Automated policy and adaptive enforcement to auto-protect and auto-scale.
  • Run-time vulnerability scanning for any container or host in the Kubernetes cluster.
  • Compliance and auditing through CIS security benchmarks.

The NeuVector solution is a container itself which is deployed and updated with Kubernetes or any orchestration system you use such as OpenShift, Rancher, Docker EE, IBM Cloud, SUSE CaaS, EKS etc. To learn more, please request a demo of NeuVector.

Open Source Kubernetes Security Tools

While commercial tools like the NeuVector container firewall offer multi-vector protection and visibility, there are open source projects which continue to evolve and add security features. Here are some to consider for projects which are less business critical in production.

  • Network Policy. Kubernetes Network Policy provides automated segmentation by IP address.
  • Istio. Istio creates a service mesh for managing service to service communication, including routing, authentication, and encryption, but is not designed to be a security tool to detect attacks and threats.
  • Grafeas. Grafeas provides a tool to define a uniform way for auditing and governing the modern software supply chain.
  • Clair. Clair is a simple tool for vulnerability scanning of images, but lacks registry integration and workflow support.
  • Kubernetes CIS Benchmark. The compliance and auditing checks from the CIS Benchmark for Kubernetes Security are available to use. The NeuVector implementation of these 100+ tests is available here.

Don’t Wait Until You’re In Production – Deploy Kubernetes with Confidence, Securely

The rapid pace of application deployment and the highly automated run-time environment enabled by tools like Kubernetes makes it critical to consider run-time Kubernetes security automation for all business critical applications. It’s not enough to scan images in registries and harden containers and hosts for run-time. Staying out of the latest data breach, ransomware, and Kubernetes exploit headlines requires a layered security strategy which covers as many threat vectors as possible.