5 lessons from the Lighthouse Roadshow in 2019

Thursday, 5 December, 2019

Having completed a series of twelve Lighthouse Roadshow events across Europe and North America over the past six months, I’ve had time to reflect on what I’ve learnt about the rapid growth of the Kubernetes ecosystem, the importance of community and my personal development.

For those of you who haven’t heard of the Lighthouse series before, Rancher Labs first ran this roadshow in 2018 with Google, GitLab and Aqua Security. The theme was ‘Building an Enterprise DevOps Strategy with Kubernetes’. After selling out six venues across North America, I felt that its success could be repeated in Europe. We tested this theory in May by running the first 2019 Lighthouse in Amsterdam with Microsoft, GitHub and Aqua Security. The event sold out in just two weeks, and we had to move to a larger venue downtown to accommodate a growing waiting list.

Bas Peters from GitHub at Lighthouse Amsterdam – 16th May 2019

After the summer vacation period, the European leg of the Lighthouse restarted in earnest with events in Munich and Paris on consecutive days. The Paris event turned out to be the largest of the roadshow: held at Microsoft’s magnificent Paris HQ, it saw us pack the main auditorium with almost 300 delegates. In the weeks that followed, the Lighthouse team also visited Copenhagen, London, Oslo, Helsinki, Stockholm and, finally, Dublin. Not to be outdone, Rancher’s US team organised a further three Lighthouse events with partners Amazon, GitLab and Portworx during November.

Now home, sitting at my desk and reflecting on the lessons learnt, I’ve distilled them down to the following:

Focus on context, not product pitches

Organizing the content for so many consecutive events with many different speakers was a significant challenge. We had a mix of sales guys, tech evangelists, consultants and field sales engineers presenting. The speakers who received the best response (and exchanged the most business cards during the coffee breaks) were always those who delivered insight into the context in which their products exist. I share this lesson because I want to encourage those running similar events in this space to understand the value of insight. This is particularly true if you work for a company that doesn’t charge anything for its technology. In a market where there are no barriers to adopting software, the only way you can genuinely differentiate is through the quality of the story you tell and the expert insight that you deliver.

Alain Helaili from GitHub at Lighthouse Paris – 11th Oct 2019

Interest in Kubernetes is exploding

Of the almost 3000 IT professionals who registered for the roadshow globally, more than half are already using Kubernetes in production. So, what makes the excitement around Kubernetes different from previous hype-cycles? I would contend there are two principal differences:

  1. Low barrier to entry – Kubernetes takes minutes to install on-prem or in the cloud. I regularly see enthusiastic sales and marketing people launching their first cluster in the public cloud. Compare that to something like OpenStack, which, despite the existence of a variety of installers on the market, is hellish to get up and running. Unless you have access to skilled consultants from the beginning, the technical bar is set so high that only the most sophisticated teams can be successful.
  2. Mature and proven – Kubernetes has, in one form or another, been around for over ten years orchestrating containers in the world’s largest IT infrastructures. Google introduced Borg around 2004 as a large-scale internal cluster management system that ran many thousands of different applications, across many clusters, each with up to tens of thousands of machines. In 2014 the company released Kubernetes, an open-source system that draws directly on its experience with Borg. Since then, hundreds of thousands of enterprises have deployed Kubernetes into production, with all the public clouds now offering managed varieties of their own. Google rightly concluded that a rising tide would float all ships (and use more cloud compute!). Today Kubernetes is mature, proven and used everywhere. Sadly, you can’t say the same about OpenStack.

Tom Callway from Rancher presenting

Yours truly opening proceedings at Lighthouse Munich – 10th Oct 2019

Enterprises are still asking the same questions

While the adoption of Kubernetes is undeniably the most significant phenomenon in IT operations since virtualization, those enterprises that are considering it are asking the same questions as before:
1. Who should be responsible for it?
2. How does it fit into our cloud strategy?
3. How do we tie it into our existing services?
4. How do we address security?
5. How do we encourage broader adoption?

In what is still a relatively nascent market, it’s challenging questions like these that need to be answered by Kubernetes advocates transparently and in person if they are to be taken seriously. The stakes are high for early adopters, and they need assurance that the advice you offer is real, tangible and trusted by others. That’s why we created the Lighthouse Roadshow.

Olivier Maes from Rancher Labs at Lighthouse Copenhagen – 31st Oct 2019

Community matters

Unless the ecosystem around a new technology is open and well-governed, it will die. Companies or individuals that reject community members as freeloaders are consigning themselves to irrelevance. You can always find some people who are willing to jump through the hoops of licensing management or lock themselves into a single vendor. Still, most of today’s B2B tech consumers are looking to make their choices based on third-party validation. Community members may not pay for your software, but they contribute to your growth by endorsing your brand and sharing their own success stories.

The Lighthouse Roadshow is 100% community driven. We’re not interested in making a profit from ticket sales, preferring instead to see how well our stories resonate with delegates. The more insight delivered, the more successful the event. The feedback from each of the Lighthouse venues has been hugely rewarding, and the opportunities for growth have been incalculable. We couldn’t have achieved this if we had just measured our success by tracking the conversion rate of delegate numbers to MQLs and closed-won opportunities.

Steve Giguere from Aqua Security at Lighthouse London – 8th Nov 2019

Surrounding yourself with talent makes you better

It’s widely known that one of the best ways to improve a skill is to practice it with someone better than you. During the Lighthouse Roadshow I had the unique privilege of attending every European event and listening to every talk, sometimes multiple times. The skill and knowledge of the speakers, and the professionalism of the events staff who helped us, were simply amazing.

I’m particularly grateful to my fantastic colleagues at Rancher Labs – Lujan Fernandez, Abbie Lightowlers, Olivier Maes, Tolga Fatih Erdem, Jeroen Overmaat, Elimane Prud’ hom, Nick Somasundram, Simon Robinson, Chris Urwin, Sheldon Lo-A-Njoe, Jason Van Brackel, Kyle Rome and Peter Smails. I’ve also been fortunate to work alongside rockstars from partner companies like Steve Giguere, Grace Cheung and Jeff Thorne at Aqua Security; Bas Peters, Richard Erwin and Anne-Christa Strik at GitHub; and Bozena Crnomarkovic Verovic, Dennis Gassen, Shirin Mohammadi, Maxim Salnikov, Sherry List, Drazen Dodik, Tugce Coskun, Anna-Victoria Fear, Juarez Junior and many others from Microsoft; Alex Diaz and Patrick Brennan from Portworx; Carmen Puccio from Amazon; and Dan Gordon from GitLab. I can’t help but feel inspired by all these fantastic people.

By the time we finished in Dublin, I felt invigorated and filled with new ideas. Looking back, I know that listening and sharing with these brilliant folks has encouraged me to step up my own game.

More Resources

Want to know more about how to build an enterprise Kubernetes strategy? Download our eBook.


Virtualization Management with SUSE Manager

Friday, 15 November, 2019

SUSE® Manager 4 is a best-in-class open source infrastructure management solution that lowers costs, enhances availability and reduces complexity for life-cycle management of Linux systems in large, complex and dynamic IT landscapes. You can use SUSE Manager to configure, deploy and administer thousands of Linux systems running on hypervisors, as containers, on bare metal systems, IoT devices and third-party cloud platforms. SUSE Manager also allows you to manage virtual machines (VMs).

Virtualization is the means by which IT administrators create virtual resources, such as hardware platforms, storage devices, network resources and more. There are quite a few tools that enable the creation of virtual resources (such as Xen and KVM), but what about the management of those tools? That’s where SUSE Manager comes in.

The current iteration of SUSE Manager enables an admin to work with VMs via the following feature set:

● Creating VMs
● Updating and editing the hardware settings for the hosting platform
● Stopping, pausing, and deleting VMs
● Displaying the Virtual Network Computing (VNC) or Spice console via a web-based interface

Why Use VMs?

Before we dive too deeply into SUSE Manager’s Virtualization Management, let’s first pose the question, “Why would an admin choose to create virtual resources, as opposed to the real thing?”

Resource Optimization

Imagine having to deploy a single server for every service you offer. Your data center could wind up with hardware dedicated to various websites, databases, user authentication, network optimization, security and much more. By deploying services on dedicated hardware, chances are those services won’t be making the best use of those resources. And with the low cost of storage, RAM and CPUs today, your business could do a much better job of optimizing that hardware.

VMs allow you to easily deploy multiple services (even multiple platforms) on a single server, thereby making the most out of the hardware you’ve purchased.

Better Uptime
So you’ve chosen SUSE Linux Enterprise as your operating system of choice. Why? Because it is one of the most rock-solid and reliable platforms on the market. But that software doesn’t ensure your hardware will always be up to the task. By employing VMs, failover and redundancy are made exponentially easier. Should something go wrong with a VM, you can quickly spin up a clone such that no one would know the difference. And with the right tools in place, you can even make this happen automatically. This redundancy makes the quest for 100 percent uptime almost achievable.

Retain Legacy Systems
Your business may still depend upon legacy systems. If that’s the case, what do you do when one of those legacy servers fails and it’s no longer on the market? Deploy a VM, with considerably more power than the original hardware. By employing VMs, those legacy systems can live on long enough so that your business will have time to eventually migrate them to more modern iterations.

Security
When a VM is deployed, it is typically done in such a way as to be isolated from the hosting hardware. And with the help of state recording, transience and mobility, those VMs offer your business added layers of security. Because of this, you are able to isolate the fingerprint of your host OS. Even VM users with admin privileges are generally unable to breach the layer of isolation between the VM and the host. VMs also allow you to easily create isolated networks to make it more challenging for malicious users to access your company data.

With an understanding of what a VM is, where does SUSE Manager fit into this?

Multiple Location Management
Imagine you have multiple locations, each of which will be deploying numerous VMs to cover various areas of business. You could have Locations A, B, C and D that all require a combination of LAN- and WAN-facing websites, database servers, document and print servers, authentication servers and more. How would you manage those, with any level of consistency, if each location’s IT department were tasked with the job?

You wouldn’t.

Currently, SUSE Manager offers an add-on subscription that enables VM management (so long as the server hosting SUSE Manager supports either KVM or Xen). For retail infrastructure, the current iteration of SUSE Manager offers a separate component that augments the retail branch server with monitoring and VM management. This retail-specific component can work with both the branch server and POS devices.

Or what if you’re a customer with hundreds of locations? In each location, there’s a server that has been deployed to run the exact same virtual machines. With the help of SUSE Manager (and Salt States), you can easily manage that entire estate – adding new VMs, updating existing VMs or deleting old ones.
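
To give a sense of how Salt makes that kind of estate-wide consistency practical, here is a rough sketch of a Salt top file. The minion ID pattern and state names are hypothetical; they simply illustrate the idea of mapping every branch server to the same set of states:

    # top.sls – hypothetical sketch; the minion ID pattern and state names are illustrative
    base:
      'branch-*':             # match every branch server by minion ID
        - branch.webserver    # apply the same web server state everywhere
        - branch.database     # apply the same database state everywhere

SUSE Manager layers its own management and UI on top of Salt, but the underlying idea is the same: define the desired configuration once and apply it consistently across every location.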

 

How SUSE Manager Manages VMs

SUSE Manager enables the administrator to manage virtualized clients. In this type of installation, a virtual host is installed on the SUSE Manager Server to manage any number of virtual guests. Both Xen and KVM hosts (and their guests) can be managed directly in SUSE Manager, whereas VMware hosts (including VMware vSphere) first require the setup of a virtual host manager (VHM, Figure 1), which will use the VMware Gatherer module.

Figure 1: Creating a VHM for a VMware host.

Once your virtual hosts are ready, you can then auto-install hosts and guests using AutoYaST or Kickstart and manage guests in the web UI.

For VMware, including VMware vSphere, SUSE Manager requires you to set up a VHM to control the VMs. This gives you control over the hosts and guests, but in a more limited way than available with Xen and KVM. Unlike creating a VHM for VMware hosts, Xen and KVM virtualization management happens within the system’s Virtualization tab (Figure 2).

Figure 2: The Virtualization tab with SUSE Manager

The Virtualization tab only shows up on bare metal machines, after adding the “Virtualization Host” entitlement to the system. This task can be handled either on the activation key level or in the system’s Properties page. Once Virtualization is available, you can (with the click of a button) create a new guest and start, resume, stop, restart, suspend and delete VMs. One other feature, found within the Virtualization tab, is a graphical display of the VM.

It is important to note that the auto-installation of VM guests works only if they are configured as Traditional clients. Salt clients can be created using a template disk image or using Kiwi (an application for making a wide variety of image sets for Linux), but not by using AutoYaST or Kickstart. These image building actions can also be chained with VM creation, within SUSE Manager, using the Action Chains feature.

Customizing Your VMs

Once you have your host and guest up and running, you can then customize the guest to perfectly fit your needs. Say, for instance, you need to set up a MySQL database server. From the System page, click on the Software tab and click Install New Packages. Run a search for mysql, and you’ll see all of the related packages that can be installed (Figure 3).

Figure 3: Installing MySQL packages to a host

With SUSE Manager you can deploy custom VMs that can take on just about any number of tasks. And once those machines have been deployed, you can control and manage them from a single point of entry. The SUSE Manager web UI allows you to configure, provision, audit, upgrade and patch your VMs. From within that same tool, you can create Salt States/Formulas, view events and much more.

Creating with Salt States
Users can also leverage the scripting possibilities that SUSE Manager offers through Salt States. Using these tools, it is possible to create very specific VMs. A Salt State is a configuration template that allows you to describe what each of your systems should look like, including the applications and services that are installed and running. By simply creating or editing a Salt State file, you can even automate the creation of your virtual guests. This, of course, is a feature that requires a full understanding of Salt State files, but once you have a grasp of how they work, there’s no limit to what you can do with VMs and SUSE Manager.
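
As a rough illustration (the state IDs, package name and service name below are hypothetical and will vary by distribution), a minimal Salt state for a database guest might look like this:

    # mysql.sls – minimal illustrative sketch; package and service names may differ on your distribution
    mysql-server:
      pkg.installed:
        - name: mysql          # make sure the MySQL server package is present

    mysql-service:
      service.running:
        - name: mysql          # keep the database service running
        - enable: True         # start it automatically at boot
        - require:
          - pkg: mysql-server  # only once the package is installed

Applied through SUSE Manager, a state like this keeps every guest it targets converged on the same package and service configuration.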

VM Management

The management of your VMs is handled from System List | Virtual Systems. From that tab (Figure 4), click on the host in question to begin working.

Figure 4: Virtual systems within SUSE Manager

Some of the more crucial tasks you can undertake on your VMs are:

● Software management – add/remove software, upgrade packages, compare packages, and manage package states
● Customize power management settings
● Join system groups
● Send remote commands (Figure 5)
● Run OpenSCAP audit scans
● Manage States
● Select Salt Formulas to be saved as High States (which can then be applied)
● Manage pending events

Figure 5: Running a command on a VM

The Current Model

Virtualization Management within SUSE Manager should not be confused with a full-blown virtualization solution. Instead, SUSE Manager should be thought of as “just enough virtualization.”

You may have forward-deployed/edge IT or SAP clusters where you want to roll out virtualization for the sole purpose of backing up and restoring workloads (independent of the hardware). In these cases, SUSE Manager makes administering those VMs, via a centralized web-based interface, exponentially easier. This can be key to a successful deployment, where speed and accuracy of control are paramount to keeping those systems communicating with one another.

Or maybe your company is about to embrace Kubernetes as its container orchestration platform. For that, you might need to manage the Kubernetes cluster nodes’ VMs, where the ability to stop, start, pause and delete the hosting VMs with a high level of efficiency is key to managing that ecosystem.

Either way, SUSE Manager has your VM needs covered.

Find your cloud strategy in London

Wednesday, 6 November, 2019

In a few short weeks’ time, team SUSE will be packing our bags and heading to London’s Docklands for the annual Gartner IT Infrastructure, Operations and Cloud Strategy Summit. If you’ve not heard of or been to this event before, it’s full of infrastructure and operations professionals and executives all looking to hear about the latest ways of accelerating innovation and business agility, taking advantage of hybrid cloud technologies, and evolving security, as well as fitting in some quality networking. It’s a great event, with lots of interesting companies (like SUSE, for example) sponsoring, exhibiting and presenting, as well as many of the brightest and best from Gartner sharing their thoughts and latest research.

What’s your strategy?

One of the topics regularly discussed at Gartner events is creating and maintaining a cloud strategy. A cloud strategy should be a living document that sets out why you’re using the cloud and what applications you’re using it for – not just which vendor you’ll be using. Just as your business continues to evolve, so should your cloud strategy.

A cloud strategy should also be a collaborative document – IT should work with the line-of-business teams to ensure their needs are part of it, to prevent the resurgence (or continued use) of Shadow IT. While Shadow IT has enabled many line-of-business teams to react quickly to market changes and build revenue pipelines, it comes with a lack of control and governance that can be both dangerous and expensive. IT can and should be a service provider and broker to the whole business, delivering the services that are required while ensuring they meet and maintain the relevant security and compliance standards.

I will be presenting on this topic alongside my colleague Imogen on Tuesday 26th at 12:05, where Imogen will share some of her expert thoughts on the evolving role of the IT team. We look forward to seeing many of you there.

If you’re at the summit, come along to the SUSE booth to hear more about this, to meet the team and to find out more about how open source solutions from SUSE can help your IT department to be that service provider and broker that your business needs.

Webinar: Boost Developer Productivity with SUSE Cloud Application Platform

Friday, 25 October, 2019

Last week, Troy Topnik and I presented a webinar on how to boost developer productivity with SUSE Cloud Application Platform — our modern application delivery platform that brings an advanced cloud native developer experience to Kubernetes and enables fast and efficient delivery of cloud native applications at scale. Developers can serve themselves and get apps to the cloud in minutes instead of weeks. SUSE Cloud Application Platform eliminates manual IT configuration and helps accelerate innovation by getting applications to market faster. Streamlining application delivery opens a clear path to increased business agility, led by enterprise development, operations, and DevOps teams.

Boost Developer Productivity with SUSE Cloud Application Platform

We discussed and demonstrated how the platform builds on Kubernetes to add easy, one-step deployment of cloud native applications from the CLI, UI, or Helm chart repository, using a variety of languages, frameworks, and services most appropriate for the task. Other topics included increasing operational efficiency, installation options on a variety of Kubernetes services, application autoscaling, logging, metrics, and much more.

You can watch a recording of the webinar here. I hope you enjoy it!

How SUSE Certification can help You and Your Organization

Tuesday, 8 October, 2019

In a world of digitization, open source technologies are the driving force. Over the years, Linux has been the preferred operating system for most enterprises, irrespective of the size of the business, due to its proven security and stability. You may not know it, but your favorite coffee shop or your everyday supermarket may be running a Linux machine at the back end. Adoption has grown immensely during the past few years, and hiring open source talent is a priority for 83% of hiring managers. Undoubtedly, one of the most in-demand skill categories is Linux. We tend to hear words like Docker, Kubernetes and Ceph, and they are all built on top of Linux. For these reasons, hiring managers are opting to train their existing employees on Linux and on various open source technologies.

 

SUSE has been in the technology industry for more than 25 years, and Linux is at the core of everything we do. We build enterprise-grade Linux for different hardware architectures, virtualization platforms and even public clouds. To meet more demanding technologies and adoption models, our core has evolved from monolithic to modular, and our portfolio is evolving from OpenStack-based cloud solutions to Kubernetes-based container platforms. SUSE has always adapted to industry needs and technological innovations. This means organizations need a way to learn and adopt these technologies to meet their business demands and to fulfill their IT digital transformation goals. The best way to achieve this is through the SUSE Training and Certification program.

Delivered by a SUSE Certified Instructor (SCI), the program can maximize the IT team’s skills and knowledge and build the confidence to face evolving technology challenges. The SUSE Certification program covers the entire technology portfolio using real-world content and keeps it up to date so that organizations can drive their businesses with the latest and greatest in technology. From Linux system administration to emerging technologies like cloud, containers and storage, SUSE certifications can boost the bottom line and help reduce unplanned downtime.

Organizations see more return on investment with the SUSE Training and Certification program because their staff can act faster, with better helpdesk responses and outage resolutions. SUSE Certified Professionals have been shown to increase the overall productivity of their workplace, and certification adds value to their individual career growth.

For more information, reach out to our SUSE Training Partners or your local SUSE Partner or simply contact SUSE. We adapt and you succeed.

Windows Containers and Rancher 2.3

Tuesday, 8 October, 2019

Container technology is transforming the face of business and application development. 70% of on-premises workloads today are running on the Windows Server operating system and enterprise customers are looking to modernize these workloads and make use of containers.

We have introduced support for Windows Containers in Windows Server 2016 and graduated support for Windows Server worker nodes in Kubernetes 1.14 clusters. With Windows Server 2019 we have expanded support in Kubernetes 1.16.

For our customers, one of the best ways to increase the adoption of containers and Kubernetes is to make it easier for operators to deploy and for developers to use.

Towards that end, Microsoft has invested in AKS and Windows container support while working with partners such as Rancher Labs, which has built its organization on the principle of “Run Kubernetes Everywhere”.

With the release of Rancher 2.3, Rancher is the first Kubernetes management platform to graduate Windows support to GA, and it can now deploy Kubernetes clusters with Windows support from within the user experience.

Using Rancher 2.3, users can deploy Windows Kubernetes clusters in AKS, Azure Cloud, any other cloud computing provider, or on-premises, using the supported and proven networking components in Windows Server as well as Kubernetes.

Rancher 2.3 will support Flannel as the CNI plugin and Overlay Networking with VxLAN to enable communication between Windows and Linux containers, services, and applications.
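
To give a sense of what that choice looks like in a cluster definition, the fragment below is only a sketch – the field names follow common RKE cluster.yml conventions and should be checked against the Rancher 2.3 documentation rather than copied verbatim:

    # cluster.yml fragment – illustrative sketch only
    network:
      plugin: flannel                 # Flannel as the CNI plugin
      options:
        flannel_backend_type: vxlan   # VxLAN overlay networking so Windows and Linux workloads can communicate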

Learn more about Rancher 2.3 and its functionality.


Introducing Rancher 2.3: The Best Gets Better

Tuesday, 8 October, 2019

Today we are excited to announce the general availability of Rancher 2.3,
the latest version of our flagship product. Rancher, already the
industry’s most widely adopted Kubernetes management platform, adds
major new features with v2.3, including:

  • Industry’s first generally available support for Windows containers, bringing the benefits of Kubernetes to Windows Server applications.
  • Introduction of cluster templates for secure, consistent cluster deployment at scale
  • Simplified installation and configuration of Istio service mesh

These new capabilities strengthen our Run Kubernetes Everywhere strategy
by enabling an even broader range of enterprises to leverage the
transformative power of Kubernetes.

Bringing the Benefits of Kubernetes to Windows Server Applications

Today, 70% of on-premises workloads are running on the Windows Server
operating system, and in March of this year, Windows Server Container
support was built into the release of Kubernetes v1.14.

Not surprisingly, Windows containers have been one of the most desired technologies within the Kubernetes ecosystem in recent years. We are proud to be partnering with Microsoft on this launch and are excited to be the first Kubernetes management platform to deliver GA support for Windows Containers and Kubernetes with Windows worker nodes! To get Microsoft’s perspective on Rancher 2.3, check out this blog from Mike Kostersitz, Principal Program Manager at Microsoft.

By bringing all the benefits of Kubernetes to Windows, Rancher 2.3 eases
complexity and provides a fast and straightforward path for modernizing
legacy Windows-based applications, regardless of whether they will run
on-premises or in a multi-cloud environment. Alternatively, Rancher 2.3
can eliminate the need to go through the process of rewriting
applications by containerizing and transforming them into efficient,
secure and portable multi-cloud applications.

Windows Workloads

Secure, Consistent Deployment of Kubernetes Clusters with Cluster Templates

With most businesses managing multiple clusters at any one time,
security is a key priority for all organizations. Cluster templates help
organizations reduce risk by enabling them to enforce consistent cluster
configurations across their entire infrastructure. Specifically, with
cluster templates:

  • Operators can create, save, and confidently reuse well-tested Kubernetes configurations across all their cluster deployments.
  • Administrators can enable configuration enforcement, thereby eliminating configuration drift or misconfigurations which, left unchecked, can introduce security risks as more clusters are created.
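
For illustration, the kind of settings a well-tested template might lock down could look like the fragment below. The field names follow RKE cluster configuration conventions and are a sketch, not the exact Rancher cluster template schema:

    # Illustrative sketch of configuration a cluster template could enforce
    kubernetes_version: "v1.15.x"     # pin a tested Kubernetes release
    network:
      plugin: canal                   # standardize on a single CNI plugin
    services:
      kube-api:
        pod_security_policy: true     # require pod security policies on every cluster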

Cluster Templates

Additionally, admins can scan existing Kubernetes clusters against industry benchmarks such as CIS and NIST to identify and report on insecure cluster settings and facilitate a plan for remediation.

Tighter Integration with the Leading Service Mesh Solution

A big part of Rancher’s value is its rich ecosystem catalogue of
Kubernetes services, including service mesh. Istio, the leading service
mesh, eliminates the need for developers to write specific code to enable
key Kubernetes capabilities like fault tolerance, canary rollouts,
A/B testing, monitoring and metrics, tracing and observability, and
authentication and authorization.

Rancher 2.3 delivers simplified installation and configuration of
Istio including:

  • Kiali dashboards for traffic and telemetry visualization
  • Jaeger for tracing
  • Prometheus and Grafana for observability

Istio

Rancher 2.3 also introduces support for Kubernetes v1.15.x and Docker
19.03. Getting started with Rancher v2.3 is easy. See our documentation for instructions on how to be up and running in a flash.

Our Momentum Continues

Rancher 2.3 is just the latest proof point of our momentum in 2019.
Other highlights include:

  • 161 percent year-on-year revenue growth, community growth to more than 30,000 active users, and software downloads surpassing 100 million
  • Rancher was named a leader in the Forrester New Wave™: Enterprise Container Platform Software Suites
  • Rancher is included in five Gartner Hype Cycles in 2019
  • Rancher was recognized by 451 Research as a Firestarter in Q3’19

And, maybe the best part of the story is that we have more exciting news coming very soon! Stay tuned to our blog to learn more.

We also look forward to seeing everyone at KubeCon 2019 in San Diego, California. Come to booth P19 to talk with us or get a personalized demo.


Engaging with SUSE Support: Severity Levels, Response Times, and After Hours

Friday, 4 October, 2019

SUSE Delivers Exceptional Customer Support

When you purchase a SUSE solution, you know that solution is backed by SUSE Support.  A significant part of “backed by SUSE Support” means that your solution has a defined lifecycle, is hardened and secured for your business-critical systems, and receives maintenance in the way of patches and security updates.  The other part of “backed by SUSE Support” is that your solutions are supported by an experienced team who treats you like family.

But just like a family has rules of engagement, SUSE Support has service level agreements (SLAs).  In this blog post, I’ll talk about some of them, including how to determine your severity level, what our response times are (and what a response means), and after-hours support.

Severity Levels

Every system problem feels like the most important system problem.  But did you know that SUSE Support has specific definitions for Severity 1 issues versus Severity 4 issues?  Use the table below to categorize your issues.

Severity 1 (Critical): The operation is in production and is mission critical to the business. The product is inoperable and the situation is resulting in a total disruption of work. There is no workaround available.
Severity 2 (High): Operations are severely restricted. Important features are unavailable, although work can continue in a limited fashion. A workaround is available.
Severity 3 (Medium): The product does not work as designed resulting in a minor loss of usage.
Severity 4 (Low): There is no loss of service. This may be a request for documentation, general information, product enhancement request, etc.

 

 

Response Times

Now that we know the severity level of our concern, let’s take a look at the SLA we have for response time.  The SLA depends on the type of subscription you purchased – Standard or Priority.  As a reminder, a Standard subscription provides 12×5 coverage, while a Priority subscription gives you 24×7 coverage. SUSE defines a response time as the time between creation of the incident and the initial communication between the assigned engineer and your company.

Response times for Standard Subscriptions are:

Severity | Hours of Coverage | Target Response Time
4 | 12×5* | Next Business Day
3 | 12×5 | Next Business Day
2 | 12×5 | 4 Hours
1 | 12×5 | 2 Hours

 

Response times for Priority Subscriptions are:

Severity | Hours of Coverage | Target Response Time
4 | 24×7 | Next Business Day
3 | 24×7 | 4 Hours
2 | 24×7 | 2 Hours
1 | 24×7 | 1 Hour

 

*The target response time applies to the period when support is available. For example, a Standard subscription Severity 1 incident logged at 6 p.m. will have a target response time of before 10 a.m. the following business day.

Note:  For Severity 1 Issues: open an Incident through the Customer Center as a Severity 2, then call your local support center to escalate to a Severity 1.

For even faster response times and direct access to a named engineer, we invite you to learn about Premium Support Services.

After Hours Support

This is all fine when an issue happens during the day, right?  But what happens when the problem occurs outside of business hours?

If you have a Priority Subscription, you are covered! SUSE technical support is available 24×7 no matter the severity level.  You can rest assured that SUSE will send you a technical acknowledgement of the case within the specified target response time for that severity.

IMPORTANT:  If you are experiencing a Severity 1 (Critical) issue outside business hours and have access to 24-hour coverage, please contact Technical Support via phone so that we can resolve the issue for you.

As a side note, if you have a Premium Support Services offering, you might also be entitled to Scheduled Standby.  Prearrange Scheduled Standby when you know you will need support outside business hours. This service gives you direct access to a SUSE engineer on a standby basis. When you have a scheduled product upgrade or network maintenance task for which you’d like some added insurance, you can have an experienced support engineer standing by. If you have the Gold or Platinum levels of Premium Support Services, you may arrange Scheduled Standby directly with your Service Delivery Manager. Make sure you request Scheduled Standby at least two weeks before you need it.

All this and more is covered in detail in the SUSE Technical Support Handbook and in the Support FAQs.  I also published a post earlier this month on my favorite tech support resources.

As part of your family, we want to make sure we are giving you the very best support we can; the support that you depend on. We encourage you to reach out if you have suggestions or concerns.

Lunar Vacation Planning

Monday, 16 September, 2019

The moon’s surface is not exactly a vacation spot – with no atmosphere, a gloomy gray landscape, nighttime temperatures around -300 degrees Fahrenheit, and a lengthy 5-day, 250,000-mile commute from Earth.  Yet being able to use our Moon as a stepping-stone towards Mars and beyond is essential, and the research we can accomplish on the lunar surface can be invaluable.  In 3.5 billion years, our sun will shine almost 40% brighter, which will boil the Earth’s oceans, melt the ice caps, and strip all of the moisture from our atmosphere. Long before that happens, though, climate change will likely choke our planet, or we could get pummeled by asteroids, or even swallowed by a black hole.

Meanwhile, whatever we can do to advance science through space and lunar exploration is crucial to our understanding and eventual survival.  The NASA Artemis lunar exploration program looks to establish sustainable missions by 2028.  Not only will Artemis demonstrate new technologies, capabilities and business approaches needed for future exploration, but it will also inspire a new generation and encourage careers in STEM (Science, Technology, Engineering and Mathematics).

HPE, one of SUSE’s most important partners in High-Performance Computing and the advancement of science and technology, is now building NASA’s new supercomputer named “Aitken” to support Artemis and future human missions to the moon.  HPE’s “Aitken” supercomputer will be built at NASA’s Ames Research Center and will run SUSE Linux Enterprise HPC (co-located where the Pleiades supercomputer – also SUSE-based – has been advancing research for several years).   Aitken will run extremely complex simulations for entry, descent and landing on the moon as part of the Artemis program.  The missions include landing the next humans on the lunar south polar region by 2024 (on the rim of the Shackleton crater, which experiences constant indirect sunlight for a toasty -300 degrees Fahrenheit).

Following HPE’s success with its Spaceborne Computer on the International Space Station, HPE Computing Solutions brings it back down to Earth and makes supercomputing more accessible and affordable for organizations and industries of all sizes.  SUSE is excited to work alongside HPE in helping to solve the world’s most complex problems.  SUSE HPC solutions can provide the software platform needed to run new-wave workloads that include complex simulations, machine learning, advanced analytics and more.  SUSE HPC solutions include the operating system along with many popular HPC management and monitoring tools, libraries, Ceph-based storage for both primary and second-tier software-defined storage, and cloud images for HPC bursting and building HPC in the cloud.

To learn more, start with our HPC information page at https://www.suse.com/programs/high-performance-computing/ and check out our SUSE HPC product page at https://www.suse.com/products/server/hpc/ .

Thanks for reading!

Jeff Reser  @JeffReser