Webinar – Multimodal OS: SUSE Linux Enterprise 15 Service Pack 1

Wednesday, 10 July, 2019

Hear from the experts themselves.

If you haven’t already, tune in to our global webinar tomorrow at 7 am PST | 4 pm CET to hear from SUSE Linux Enterprise product experts Kai Dupke and Frederic Crozat. You’ll have the opportunity to get answers to your questions and learn more about SUSE Linux Enterprise 15 Service Pack 1, the newest release of our multimodal OS.

Webinar Details

 

About Service Pack 1

SUSE Linux Enterprise 15 Service Pack 1 (SP1) launched a couple of weeks ago. Service Pack 1 is the second release based on the SUSE Linux Enterprise 15 platform: a full-featured operating system that allows an enterprise to run a variety of workloads on the multimodal platform of SUSE Linux Enterprise 15.

SUSE’s modern and modular OS helps simplify multimodal IT, makes traditional IT infrastructure efficient, and provides an engaging platform for developers. As a result, you can easily deploy and transition business-critical workloads across on-premise and public cloud environments.

Register Now!

Hooray for HA!

Wednesday, 10 July, 2019

Are you an SAP architect? Want to save 80 minutes and ensure more than six 9s of availability on your SAP HANA system? Then wait no more: watch this 5-minute video and walk away happy. 🙂

For the rest of you who need more convincing, let me do a little rewind.

In 2010, SAP raised the bar on enterprise-class data management and analytics and introduced SAP HANA — the world’s first data platform to deliver real-time analytics on live transactions. Now, 9 years later, SAP HANA is being used by more than 28,000 customers around the globe. And, SAP continues to position HANA as the core pillar of all of its future transaction, analytic and machine learning platforms. As Gerrit Kazmaier, senior vice president of SAP HANA and Analytics at SAP says – “SAP HANA is the heart and soul of SAP”.

If you are a customer, SAP is also the heart and soul of your business. And what do you not want to see happen to your heart, any heart? You don’t want it to go down. The SAP HANA database is the foundation for your most mission-critical applications, so it is vitally important that these systems are always running and available to users. That requires fast recovery after a system component failure (high availability) or after a disaster (disaster recovery), without any data loss and with very short recovery times. As the #1 Linux OS for SAP and SAP HANA, SUSE Linux Enterprise Server for SAP Applications does just that, with built-in capabilities for system replication (constantly replicating all data to a secondary SAP HANA system) and high availability.

But setting it up manually might take an experienced SAP administrator 60-90 minutes. Or, you could use the YaST module in SUSE Linux Enterprise Server for SAP Applications to automate the process and be done in 10 minutes or less! YaST stands for Yet another Setup Tool (yes, I know, those darn engineers sure have clever names for everything! :P), and it is the installation and configuration tool for SUSE’s Linux distributions. It is both an extremely flexible installer and a powerful control center; you can think of it as an all-purpose tool for single-system Linux administration. In this case, it’ll be a perfect time saver for you, Mr. Busy IT Admin. So what are you waiting for? Go ahead and roll the video tutorial!
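For the command-line curious, the flow the video automates can be sketched roughly as follows. This is only an illustrative sketch: hostnames, site names and instance numbers are invented, and exact package and module names may vary by release.

```shell
# On the primary HANA node: enable system replication
# (run as the HANA administration user; site name is an example)
hdbnsutil -sr_enable --name=SITE_A

# On the secondary node: register it against the primary
hdbnsutil -sr_register --name=SITE_B \
  --remoteHost=hana01 --remoteInstance=00 \
  --replicationMode=sync --operationMode=logreplay

# Then run the guided HA setup from YaST, which configures
# the pacemaker cluster around the replicated pair
zypper install -y yast2-sap-ha
yast2 sap_ha
```

The YaST module walks you through the same questions interactively, which is where the 10-minutes-or-less claim comes from.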

Thanks to Dell EMC for lending us the PowerEdge R940 servers that we used for creating this video. With Dell and SUSE, you have a proven combination for SAP that offers the perfect balance of performance, reliability, flexibility, scalability and great value for your enterprise dollars.

Cloud Application Platform vs Container as a Service vs VM hosted application

Tuesday, 9 July, 2019

As a solution architect in SUSE Global Services, I often hear the following question:

What is the main difference among the following?

  • Hosting an application on a cloud platform
  • Hosting an application on a container as a service platform
  • Building a simple virtual machine and hosting an application on it

(And, yes, most people describe virtual machines as simple, coming from the fact that they are the original and traditional replacement of having physical servers host applications.)

In this blog post, I will attempt to dispel the confusion among these approaches and provide a roadmap for using them for application transformation.  But before we do that, let’s delve into some history.

In the “old days,” applications were always hosted in a traditional way on a physical server or a group of physical servers. However, physical servers are expensive, hard to maintain, and hard to grow and scale.  That’s when virtual machines (VMs) grew in popularity.  VMs provided a better way to maintain, grow and scale: they were easier to back up, restore and migrate from one region to another, and easier to replicate across multiple domains/zones/regions.

This table is a simple high-level comparison between physical servers and virtual machines or servers:

Scalability
  • Physical servers: Hard to scale, especially vertically. Hardware gets old and has limited memory, storage and CPU. For horizontal scaling, we need to buy hardware, ship it to the hosting datacentre and ensure it complies with the datacentre’s support regulations.
  • Virtual machines: Easy to scale horizontally; we can add new virtual machines as necessary. Vertical scaling is simpler too, though it is ultimately bounded by the capacity of the underlying physical hosts.

Maintenance
  • Physical servers: Hardware gets old, and maintenance gets expensive and hard. At a certain point it must be replaced, and when it is, all hosted data and applications must be migrated to and tested on the new hardware.
  • Virtual machines: Maintenance is better, especially from a hardware perspective. It is the responsibility of the virtual machine provider (a cloud provider such as AWS or Azure, or a hosted virtualized environment such as VMware).

Cost
  • Physical servers: Very expensive. It is not only the hardware but also the facilities needed to operate it, such as electricity and cooling.
  • Virtual machines: Much less expensive.

Performance
  • Physical servers: Better performance, as the full power of the physical hardware is dedicated to the application.
  • Virtual machines: Not as good as a dedicated physical server; network bandwidth and all underlying hardware are shared between all running VMs.

Footprint
  • Physical servers: Large footprint.
  • Virtual machines: Smaller footprint. The hardware is not owned by the application but rented to it.

Security
  • Physical servers: You are in control of the hardware and network, so advanced security policies can be implemented, for example physical isolation of network packets.
  • Virtual machines: Security in virtualized environments is less of a challenge than many people think. It is limited in that you cannot physically isolate the network, data communication and storage, but you can still implement security on the data and its routing. One important point: each VM has its own isolated OS kernel, so the runtime is not shared, which keeps it secure.

 

So with the pros outweighing the cons, the trend became to use VMs.  You simply set up a group of VMs and host the application on them.  Simple, right? Not so fast!  Developers and testers don’t use the same setup as production, because VM licenses are not free, so in the end the configurations and setup of the environments differ. Additionally, the time and effort needed to configure and install software on a VM is not small.

 The World of Containers

Enter containers.  Containers came into the picture and started fixing those issues: the developer defines the image the application will run on and hands it over to the operator.  The same image is used to test the application and the same image is used in production. So containers solve our problem: they enable consistency.  But what about cost, effort, security and the other aspects? Let’s do a quick comparison.
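That consistency is the whole point: the image a developer builds is, byte for byte, the image that runs in test and production. As a rough sketch (image and registry names here are invented for illustration):

```shell
# Developer builds and tags one immutable image
docker build -t registry.example.com/myapp:1.0.0 .

# The same image is pushed once to a shared registry...
docker push registry.example.com/myapp:1.0.0

# ...and pulled unchanged into test and production
docker run -d --name myapp-test registry.example.com/myapp:1.0.0
docker run -d --name myapp-prod registry.example.com/myapp:1.0.0
```

Because the tag pins one artifact, “it worked on my machine” stops being an argument.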

Here’s our comparison table between VMs and containers:

Scalability
  • Virtual machine: Easy to scale horizontally, but with some limitations on vertical scaling. Scaling is expensive: in the public cloud we pay for the cost of each VM, while on-premises we need to purchase hardware.
  • Container: Easy to scale horizontally, with little need to scale vertically. Scaling is inexpensive: the footprint is small and containers share the OS runtime.

Maintenance
  • Virtual machine: Harder, as the IT team must maintain the software running on the machine and the operating system, install patches, and ensure compatibility.
  • Container: Very simple. The owner of the image is responsible for maintaining it, and because an image is a lightweight component (not a full operating system), maintenance is much easier and more efficient.

Cost
  • Virtual machine: Much more expensive than a container. You pay for the underlying operating system, its support and patches, as well as the rental cost of the VM itself.
  • Container: Almost zero cost, depending on the software used by the image. There is no operating system cost, as the image is very light; you only pay for licenses and support for the software installed on top of the operating system. Licensing is also much cheaper than with VMs, because most software licensing models are based on vCores/cores, allowing you to host a number of containers on a single licensed host.

Performance
  • Virtual machine: Better, as no kernel is shared; each VM has its own operating system and its own kernel.
  • Container: Very good, even though the same kernel is shared within the hosting environment (whether a VM or a bare-metal machine). The main architectural principle of containerization is to build a container for the smallest unbreakable component/module in your application. It does not have to be microservices (MSA); the aim is to make deployments consistent, simpler, repeatable, and easy to scale cost-efficiently when needed.

Footprint
  • Virtual machine: Larger footprint.
  • Container: Extremely small footprint. It hosts only what the MSA or the application needs and nothing else.

Security
  • Virtual machine: Better, because no sharing occurs in the kernel; each VM has its own operating system and its own isolated kernel.
  • Container: Security can be a challenge, given that containers share the same kernel. You cannot physically isolate the network, data communication and storage; however, you can still implement security on the data and its routing.

 

To address this, the community has started building lightweight VMs with a container-like footprint but an isolated runtime and kernel; one example is Kata Containers.

 

VMs vs Containers

CaasP

Comparing VMs to containers makes containers seem pretty good, right?  So now most customers are building small containers rather than VMs.  Containers are more cost-efficient and highly flexible, and they increase the quality of in-house applications.

But do containers, with a runtime engine such as Docker or CRI-O in conjunction with an orchestrator such as Kubernetes (K8s), really solve it all?  Well, like most things, the answer is: it depends on the needs and the requirements.
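To make the “container plus orchestrator” combination concrete, here is what the day-to-day workflow looks like with K8s (a minimal sketch using a hypothetical nginx-based workload):

```shell
# Create a deployment from a container image
kubectl create deployment web --image=nginx:1.17

# Expose the pods behind a single service address
kubectl expose deployment web --port=80

# Horizontal scaling is a one-liner; the orchestrator
# schedules the extra replicas across the cluster
kubectl scale deployment web --replicas=10
```

The orchestrator, not the operator, then keeps those ten replicas running and reschedules them if a node fails.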

And Along Came Cloud Application Platform

First, let’s talk about the cloud application platform and see a comparison between it and a container as a service.

What is a cloud application platform? The simple answer is that it is a highly advanced PaaS (platform as a service).  Ok, you’re thinking, a container as a service is a PaaS.  So, how is it different and what is meant by an advanced PaaS?

A cloud application platform offers a runtime that hosts an application in an environment where clients don’t need to care about the runtime hosting the application, as long as it complies with a set of rules and regulations.

This might seem a bit confusing, so let’s look at an example.  Assume we have a Java web application running Spring Boot.  What are the solutions offered by both platforms, and what is the RACI of hosting it?

What is needed to host the application

  Cloud Application Platform:
  • Create a manifest or build file that declares all the dependencies and all the rules and regulations the platform needs to prepare for the application. In our case, that means the required Java runtime version and all the services linked to the application, for example the database used.
  • Push the application for deployment using a CI/CD pipeline or a simple command.

  Container as a Service:
  • Create the image containing the Java runtime and all the dependencies.
  • Create a pod (if it is running on K8s) describing the deployment.
  • Define the exposed ports and network setup.
  • After building all the runtime images/containers/pods, use the orchestrator engine to push them to the container-as-a-service platform using CI/CD or a simple CLI command.

What the platform does

  Cloud Application Platform:
  • Creates a version for the application.
  • Builds the application runtime (in this case, a WAR) and prepares the required runtime based on the configuration in the manifest file.
  • Defines all the environment variables required by the application, including the backend services.
  • Hosts the application and allows others to communicate with it, either through routes or an API gateway.
  • Maintains the application instances and scales out or in when needed.

  Container as a Service:
  • Creates the runtime instances from the provided files describing the target runtime, networking, dependencies, storage, and the runtime containers and pods.
  • Maintains the running instances and scales out or in when needed.

Responsibilities

  • The application owner is responsible only for the application code, and can focus on the business.
  • The platform owner is responsible for maintaining the runtime and the services offered by the platform, such as upgrading a Java runtime, and also takes care of the licenses required for the application runtime and consumed services.

 

An important added value is that the application owner doesn’t really care about the hosting runtime as long as it complies with the application’s requirements.  For example, for a web application, neither the application code nor its owner needs to know which web server is running.
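For the Spring Boot example, the cloud-application-platform side of that workflow can be sketched with a Cloud Foundry-style manifest. Everything here (application name, memory, service name) is invented for illustration:

```shell
# manifest.yml declares what the platform must provide;
# the platform itself picks the Java buildpack and web server
cat > manifest.yml <<'EOF'
applications:
- name: orders-api
  memory: 1G
  instances: 2
  buildpacks:
  - java_buildpack
  services:
  - orders-db        # a database service bound to the app
EOF

# With the cf CLI installed and a target set, deployment is one command:
# cf push
```

Note what is absent: no Dockerfile, no pod spec, no port wiring; the platform derives all of that from the manifest.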

With a container as a service, the application owner is fully responsible for:

  • The images and their maintenance, including the installed software and its licenses
  • All deployment scripts for the platform

 

The platform owner is responsible for maintaining the container and orchestrator runtime and the services offered by the platform — such as logging and monitoring.

CAP

CaasP vs CAP

Simply put, a cloud application platform is a more complete PaaS that hosts and supports the application runtime. A container as a service is a container runtime plus an orchestrator that helps clients push and run their images.

You Have Choices

So when do you choose a cloud application platform vs a container as a service?  Here are some tips:

  • Go toward a cloud application platform if you are building a cloud-native application, as it will leverage cloud awareness and cloud-native design principles.
  • Go toward a cloud application platform if you don’t care about the underlying runtime. That is, you don’t care whether the application is running on Apache HTTP Server or the NGINX web server.
  • Go toward a cloud application platform if you want to enforce 12-factor principles and cloud application development principles.
  • Go toward a container as a service if you want to define the software running your application and its installation setup.
  • Go toward a container as a service if you want better control over the network setup.

Luckily for you, SUSE offers both!

  • A container-as-a-service solution (SUSE CaaS Platform), based on the market-leading K8s and supporting both the Docker and CRI-O container runtimes.
  • A cloud application platform solution (SUSE CAP), based on Cloud Foundry, one of the most powerful open source cloud application development platforms, supporting multiple modes of deployment and integration with private clouds, public clouds and on-premises runtimes.

SUSE also offers a variety of professional services offerings to help you on your journey.  From confirmation and validation to full blown design, implementation and premium support services, our consultants are ready to be your trusted partners.  Learn more at suse.com/services.

 

Announcing Preview Support for Istio

Thursday, 20 June, 2019

 

Today we are announcing support for Istio with Rancher 2.3 in Preview mode.

Why Istio?

Istio, and service mesh generally, has generated a huge amount of excitement
in the Kubernetes ecosystem. Istio promises to add fault tolerance, canary rollouts, A/B testing, monitoring
and metrics, tracing and observability, and authentication and authorization, eliminating the need for
developers to instrument or write specific code to enable these capabilities. In effect, developers can just
focus on their business logic and leave the rest to Kubernetes and Istio.

The claims above aren’t new. About 10 years ago, PaaS vendors made exactly the same claim and even delivered
on it to an extent. The problem was that their offerings required specific languages, frameworks, and, for
the most part, only worked with very simple applications. The workloads were also tied to the vendor’s
unique implementation, which meant that if you wanted your applications to use the PaaS services, you were
potentially locked-in for a very long time.

With containers and Kubernetes, these limitations are virtually nonexistent. As long as you can containerize
your application, Kubernetes can run it for you.

How Istio Works in Rancher 2.3 Preview 2

Our users count on us to make managing and operating Kubernetes and related tools and technologies easy,
without locking them in to a specific cloud vendor. With Istio, we take the same approach.

In this Preview mode, we provide users with a simple UI to enable Istio under the Tools menu. Reasonable
default configurations are provided but can be changed as required:

Announcing Istio

In order to monitor your traffic, Istio needs to inject an Envoy sidecar. In Rancher 2.3 Preview, users can
enable automatic sidecar injection for each namespace. Once this option is selected, Rancher will inject the
sidecar container into each workload:
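Outside of Rancher’s UI, plain Istio enables the same per-namespace automatic injection with a namespace label (the standard Istio mechanism; the namespace name here is just an example):

```shell
# Label a namespace so Istio injects the Envoy sidecar
# into every new pod created in it
kubectl label namespace demo istio-injection=enabled

# Verify which namespaces have injection turned on
kubectl get namespace -L istio-injection
```

Rancher’s checkbox is effectively a friendlier front end for this label.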

Announcing Istio

Rancher’s simplified installation and configuration of Istio comes with a built-in, supported Kiali dashboard for traffic and telemetry visualization, Jaeger for tracing, and even its own Prometheus and Grafana (separate
instances from the ones used for Advanced Monitoring).

After you deploy workloads in the namespaces with automatic sidecar injection enabled, head over to the Istio
menu entry and observe the traffic as it flows across your microservice applications:

Announcing Istio

Clicking on Kiali, Jaeger, Prometheus, or Grafana will take you to the respective UI of each tool, where you
can find more details and options:

Announcing Istio

As mentioned earlier, the power of Istio is its ability to bring features like fault tolerance, circuit
breaking, canary deployment, and more to your services. To enable these, you will need to develop and apply
the appropriate YAML files. Istio is not supported for Windows workloads yet, so it should not be enabled in
Windows clusters.
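As an illustration of “the appropriate YAML files,” a minimal Istio circuit breaker for a hypothetical service named `reviews` might look like this (the thresholds are arbitrary examples, not recommendations):

```shell
# DestinationRule that caps connections to the service and
# ejects hosts that keep returning errors
cat > circuit-breaker.yaml <<'EOF'
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-circuit-breaker
spec:
  host: reviews
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
    outlierDetection:
      consecutiveErrors: 5
      interval: 30s
      baseEjectionTime: 60s
EOF

# Apply it to a cluster with Istio enabled:
# kubectl apply -f circuit-breaker.yaml
```

No application code changes are needed; the Envoy sidecars enforce the policy.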

Conclusion

Istio is one of the most talked about and requested features in the Rancher and Kubernetes communities today.
However, there are also a lot of questions around the best way to deploy and manage it. With Rancher 2.3.0
Preview 2, our goal is to make this journey quick and easy.

For release notes and installation steps, please visit
https://github.com/rancher/rancher/releases/tag/v2.3.0-alpha5

Developers Need More Than Just Kubernetes

Tuesday, 28 May, 2019

Kubernetes is not enough

 

IT has been transforming amazingly quickly for the past few years, particularly with the rise of Docker and containers in general. As businesses begin to modernize their IT infrastructure and re-architect existing (or create new) applications using microservices, they are turning to containers, which are much smaller and more portable than virtual machines. As container usage grows, organizations need a way to manage them, and Kubernetes is by far the most popular software for orchestrating containers.

It’s truly incredible to witness the rise of Kubernetes and to see the Kubecon conference continue to grow in popularity. SUSE is committed to Kubernetes and we even have a distribution of our own called SUSE CaaS Platform. But, while Kubernetes is the dominant container management platform for operators, it could be seen as a constraint on developers. Kubernetes simply doesn’t:

  • Offer tooling to help with the building, packaging, and deployment of containerized applications as application sets
  • Automatically bind applications to required services
  • Manage the lifecycle of an application by assigning appropriate resources, managing routing, load balancing, scaling up and down, etc.
  • Boost developers’ productivity
  • Actively facilitate choice of language, framework, or services most appropriate for a particular application

 

Aside from the technical details above, both developers and IT face cultural and environment challenges as well:

  • Manual and bespoke processes with little automation or standardization
  • Siloed responsibilities that inhibit productive workflow between developers and operations
  • Inconsistencies between environments across the delivery pipeline (dev/test/stage/prod), that must be reconciled or otherwise addressed, often at great cost
  • Rigid, monolithic application architectures that inhibit rapid release cycles and experimentation and limit re-usability of common code
  • Limited flexibility in terms of choice of languages and frameworks that developers can use

 

The open source Cloud Foundry project does offer all of those things for developers. It is both a technical solution to the first set of problems and, because of its nature as a prescriptive platform, can help to solve the second set of cultural/environmental challenges too. For those who don’t know, the Cloud Foundry Application Runtime (CFAR) is a code-centric platform that simplifies the life of developers. It takes your code, written in any language or framework, and runs it on any cloud. CFAR:

  • Includes a one step command (cf push) to containerize, deploy, and manage an application
  • Automatically identifies and pulls in language libraries and frameworks via buildpacks
  • Includes open source service brokers that automatically create and bind services to applications
  • Automates application lifecycle management by assigning appropriate resources, managing routing, load balancing, scaling, and more

 

SUSE is a longtime supporter of Cloud Foundry and contributes to the project and its Foundation in many ways. We’re a platinum member of the Cloud Foundry Foundation and offer a certified distribution called SUSE Cloud Application Platform.

Some organizations may be reluctant to adopt yet another platform to run alongside Kubernetes, particularly when Cloud Foundry is historically VM-based and entirely separate from Kubernetes. That’s why SUSE developed Project Quarks (formerly known as CF Containerization). Project Quarks:

  • Packages CFAR as containers instead of virtual machines, allowing CFAR to be deployed to Kubernetes
  • Originated at SUSE; SUSE is project lead
  • Eliminates the traditional CFAR requirement for BOSH
  • Allows your organization to standardize on Kubernetes as your hosting platform
  • Has a much smaller CFAR footprint than VM-based distributions (minimum 32 GB vs 128 GB)
  • Is easier to start small and scale up without a huge up-front commitment

 

SUSE continues to help move Cloud Foundry forward and bring the best of it to a native Kubernetes infrastructure. In addition to Project Quarks, we are heavily involved in other related Cloud Foundry projects, including Project Eirini and Stratos.

Project Eirini:

  • Enables pluggable scheduling for CFAR (allows operators to choose whether CFAR should use Diego or Kubernetes to orchestrate application container instances)
  • Runs Cloud Foundry applications natively in Kubernetes
  • Provides the same Cloud Foundry developer experience as other CFAR distributions
  • Provides a familiar Kubernetes operator experience

 

SUSE announced forthcoming support for Eirini at CF Summit EU in October 2018 and, with SUSE Cloud Application Platform 1.4 in April, shipped the first Cloud Foundry software distribution to support Eirini.

Last, but not least, is Stratos, a web-based UI for managing Cloud Foundry. Stratos:

  • Allows users and administrators to manage applications running in the Cloud Foundry cluster and perform cluster management tasks
  • Originated at SUSE; SUSE is project lead
  • Provides a single pane of glass dashboard and metrics
  • Aggregates multiple CF instances: SUSE Cloud Application Platform, PCF, IBM CF Enterprise, open source CF, or any other
  • Provides a view and metrics of the underlying Kubernetes infrastructure
  • Allows for customized branding and styling
  • Is extensible for adding additional integrations

 

With Project Quarks, Project Eirini, and Stratos, SUSE is helping to remake Cloud Foundry into a developer productivity layer that runs inside your existing Kubernetes infrastructure. We hope this helps make Cloud Foundry less intimidating to get started with and easier to use and manage. It’s all about providing the proven Cloud Foundry developer productivity enhancements on top of the de facto container management platform standard of Kubernetes.

SUSE Support Treats You “Like Family”

Wednesday, 15 May, 2019

If you’ve spent any amount of time watching television in the US in the past few years, no doubt you’ve seen the advertisements for The Olive Garden, boasting “When you’re here, you’re family!” But when was the last time you felt like that when calling on technical support?

The Truth About Support

We’ve all been there, right? You have an issue and you need to call customer service.  It could be a damaged package or an issue with your IT infrastructure.  Honestly, did that experience make you feel like a valued family member?

Customer service is in a sorry state; just look at these stats:

  • Nearly 90 percent of Americans have dealt with customer service for one reason or another during the past year, according to a recent survey by Consumer Reports National Research Center, and the experience is often frustrating.
  • A recent study by McKinsey shows that among B2B decision makers, lack of speed in customer service is their number one pain point.
  • Forrester reports that the majority of adults (73%) feel that valuing their time is the most important thing a company can do.

 

Beyond just trying to reach someone who can help resolve the problem, another documented complaint is the attitude of IT support personnel. Who wants to feel like their rep is rolling their eyes at them? Whatever happened to the old stand-by, “there is no such thing as a stupid question”?

Today’s digital world requires white glove service where the customer is king.  Our digital world provides lots of options for our customers. At SUSE, we know that our customer support is just one way we can not only keep our customers, but turn them into advocates.

SUSE Support:  We Treat You Like Family

Now consider these facts taken from the survey at SUSECON ’19:

  • 65% bought a SUSE solution because of the support
  • 80% said that having SUSE support increased their confidence in using open source solutions
  • 57% said that SUSE support consistently exceeds their expectations

 

We pride ourselves on treating every customer like family. At SUSE, our support engineers care about your success and are with you every step of the way, from logging a new incident to problem resolution – just like family. We are transparent, proactive, and will communicate with you openly and honestly until you are satisfied with your resolution. And in our recent survey, transparency ranked as the second most important factor – just behind maintenance and security patches.

“Backed by SUSE Support” means your business is secure knowing that it will always have a relationship with a SUSE team that provides business value and customer satisfaction. Choose the right open source solution for your business—choose a solution backed by SUSE support.

Learn more here.

Cloudy with a chance of chameleons

Wednesday, 8 May, 2019

Last week, Denver hosted the very first Open Infrastructure Summit. Over 2,000 attendees from all around the world visited the Colorado Convention Center with an express aim to collaborate, network, share and learn (and maybe pick up some swag from the sponsors!). The famously unstable Colorado weather lived up to its reputation, welcoming us at the weekend with warm sunshine and blue skies, then rapidly dropping temperatures with snow, proceeding to rain and finally returning to sun at the end of the week.

Collaboration without boundaries

The theme of the conference was introduced by Jonathan Bryce in his keynote: collaboration without boundaries. This collaboration is one of the things that has always drawn me to open source, and it was in clear evidence throughout the event, both in the exhibition hall and in the sessions. Companies that ostensibly compete co-presented sessions, a great example being Alexandra Settle from SUSE and Stephen Finucane from Red Hat presenting Working with Documentation, the OpenStack Way. In the exhibition hall, competing companies companionably chatted with each other, while the PTG saw individuals from all around the open source world collaborating on code, documentation, special interest groups and more.

The SUSE Spa

This event saw the SUSE Spa pay Denver a visit. Our message was simple but powerful: software-defined infrastructure, and open infrastructure in particular, doesn’t have to be stressful. This is something SUSE has been doing for over 25 years, starting with making Linux easier for enterprises and since extending into Ceph, OpenStack, Kubernetes, Cloud Foundry and more. All backed by SUSE Support, and with the knowledge that everything we offer is fully open source (as you’d expect from the open, open source company).

John, Andrew and the team were demonstrating how easy it is to deploy SUSE CaaS Platform (Kubernetes) on top of SUSE OpenStack Cloud on a bare metal environment, and how you could then use SUSE Cloud Application Platform for application delivery across not just this platform, but others including public cloud, too.

In addition to this, the SUSE Spa staff were giving away goodies to help relieve stress at your desk, ranging from USB massagers, to foot massagers, shiatsu back and shoulder massagers and even a massaging chair cover. We also had a massage therapist at our booth to give chair massages to attendees – this proved to be very popular, giving attendees the chance to get off their feet and to have a delightful back, shoulder and neck massage. The plush Geeko chameleons were also one of the prizes on the SUSE Spa, with attendees queueing to get hold of one to take home to their desk/child/pet!

Cloud Nine

SUSE OpenStack Cloud 9 was pre-announced on April 2nd at SUSECON in Nashville, and was made available to customers on the first day of the Open Infrastructure Summit, April 29th. This is a very exciting time for us as it sees the HPE Helion OpenStack technology that we purchased in 2017 fully incorporated into SUSE OpenStack Cloud with just a single, SUSE-branded release. This makes it easier for customers using earlier versions of HPE Helion OpenStack to upgrade to the latest iteration of SUSE OpenStack Cloud. It also introduces a new Day Two UI for customers selecting the Cloud Lifecycle Manager installation path, which has been designed to simplify post-deployment cloud operations, giving companies greater business agility to react quickly to changes in the market or in customer demand.

Airshippin’ across the universe

One of the pieces of big news announced in Denver was the release of Airship v1.0, one of the pilot projects that the OpenStack Foundation announced in May 2018. This was eagerly anticipated as it makes it simpler for businesses to deliver cloud lifecycle automation via containers on bare metal. SUSE have been actively involved in the Airship project for a while now, and have been contributing code upstream, and it will be a key part of our plans for future releases of SUSE OpenStack Cloud. We’ll be releasing a tech preview of a containerized OpenStack environment later this summer that uses Airship for lifecycle management – watch this space for details of how to get involved in the tech preview.

Try before you buy

Why not download the latest version of SUSE OpenStack Cloud and try it free for 60 days to see how you find it? If you’re concerned that your internal IT team might not have the time or skillset to set this up, then speak to the SUSE Support Team about SUSE Select Services. A 12-month, fixed-price service offering that can help you to jumpstart your SUSE OpenStack Cloud deployment, it also includes ongoing support and knowledge transfer to help your IT team learn what they need in order to build and operate a SUSE OpenStack Cloud. The SUSE Spa will be visiting Shanghai for the Open Infrastructure Summit in November, so if you’re attending that, please pop over to see us to learn how SUSE can take the stress out of SDI for you – and to admire our bright green Crocs (which were quite a talking point in Denver!).

Visit SUSE at Lenovo Accelerate 2019 for a datacenter transformation!

Monday, 6 May, 2019

Yes, the city of Orlando is busy these days.

We will be just checking out of SAP SAPPHIRE to check in at Lenovo Accelerate.

Lenovo is one of our main strategic IHV partners at SUSE and we’ll cover some ground with them at SAPPHIRE focusing on:

  • HANA Migration
  • Lenovo’s ThinkAgile HX platform running SUSE
  • SAP Data Hub on Lenovo ThinkSystem SR530 & SUSE CaaS Platform (based on Kubernetes).


We will then gear up to meet our Lenovo colleagues again in Orlando along with their customers and business partners.

Accelerate + Transform = 1 event

Lenovo is combining their largest Business Partner event, Accelerate, with their flagship global customer event, Transform, to bring us one all-inclusive event.

They promise a high-energy, collaborative experience so we are very excited!

Whether you are a Lenovo customer trying to learn about their cutting-edge innovations, or a Lenovo Business Partner trying to take full advantage of their programs and tools, we are there for you.

We understand customers want to find the perfect fit for their business and partners want to maximize their profits, so we are in it together!

Visit SUSE

While in Orlando next week, we will cover not only the SAP-focused joint solutions I mentioned above but also:


Visit our booth to chat, get a plush chameleon and answer our quiz card for a chance to win a cool prize.

Send us a message at lenovo@suse.com to schedule a meeting with our team and visit suse.com/Lenovo to learn more about our Strategic Alliance.

Busting S/4HANA Transformation Myths

Thursday, 2 May, 2019

Guest blog by Gerd Hagmaier, the Global VP S/4HANA at Datavard. Datavard is a co-exhibiting partner at SAPPHIRE NOW, at SUSE booth 2246. If you are attending SAPPHIRE NOW, please visit us.

Many say they are doing it, but not many have actually done it. Where are we with the mystique of S/4HANA?

According to SAP, by the end of last year roughly 4% of SAP customers were running S/4HANA in their productive environment. The remaining 96% need to move to S/4HANA by 2025. SAP sees roughly 100 go-lives a month. Based on those statements and research by Gartner, we expect the peak of the S/4HANA migration wave in 2021, but we see that more and more customers are already preparing for the transition.
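A rough back-of-envelope check makes the point behind those figures. The installed-base number below is an illustrative assumption, not an official SAP count, but at the quoted pace of ~100 go-lives a month the 2025 deadline is clearly out of reach without a sharp acceleration – which is exactly why a migration wave peaking around 2021 is expected:

```python
# Back-of-envelope check on the migration pace quoted above.
# remaining_customers is an illustrative assumption for this sketch;
# SAP's actual installed base is larger and published figures vary.
remaining_customers = 35_000   # assumed customers still to migrate
go_lives_per_month = 100       # pace quoted by SAP at the time

months_needed = remaining_customers / go_lives_per_month
years_needed = months_needed / 12

print(f"At {go_lives_per_month} go-lives/month: "
      f"{months_needed:.0f} months (~{years_needed:.0f} years)")
```

Even with this conservative customer count, the quoted pace would take decades, so the monthly go-live rate has to rise steeply well before 2025.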

We have heard stories about S/4HANA transformation done in three months. How much time do you really need to run a migration to S/4HANA, including all the preparation?

I haven’t seen any project delivered in under 12 months. This is the reasonable timespan that the customer should take into account, especially if they have a large SAP footprint – meaning that they have lots of data, including old or even corrupted data that needs to be cleaned up prior to migration. I’ve also heard about S/4HANA implementations done in 3 months, but I personally believe those are marketing stories or net new customers that are introducing SAP from scratch.

Why is the S/4HANA topic so challenging?

Because it is not clear to everyone what the advantages of the new system are. If you ask the business about the benefits of S/4HANA, the feedback is not overwhelming. It is a technical change, and that’s why it is difficult to build a business case. With S/4HANA you can make your system leaner, especially if you have a substantial footprint. You also have the chance to optimize processes and reduce the number of your productive systems. But on the other hand, the hardware needed is also more expensive, so it is difficult to make a good business case.

The other hurdle is the transformation itself. Customers are not sure which transformation path they should choose. You have three options, but during the SAPPHIRE NOW event in Orlando in May, SAP will announce that they will focus mainly on two scenarios: new implementation and system conversion.

Last but not least, most customers haven’t decided yet what they should do as part of their S/4HANA migration. Should they run it as a purely technical project, or should they also work on data volume issues they have in the current system, or adapt the nomenclature? Many of those questions are being brought up, so the project scoping also poses a challenge.

What is the best way to approach those challenges?

Both SAP and their partners are more experienced now and there are more offerings available on the market on this topic. You can now get a good analysis of your system and guidance on the best course of action. For example, Datavard offers S/4HANA FitnessTest which analyzes data quality and archiving potential. This information helps you to identify your next tasks and it is a very solid foundation for further decisions.

Customers know that they need to tackle the transition topic soon, but who is really ready for the transition?

A perfect customer has a clear goal and reasonable scope of the project which can be properly managed by both IT and business. Unfortunately, it often happens that customers put too many topics into the project scope, and then they recognize at the start that they cannot handle it due to their current workload. It’s key to make a realistic project scope that can be managed by your team or have appropriate partners that can support you.

Does it mean that your resources and people are a key factor?

Absolutely. So far, when talking about transformation we have been going on about processes, data, hardware sizing and so on. But we shouldn’t forget that the people also need to move to this new environment. This is a challenging topic, which is why Datavard also provides services to prepare the team for the journey to S/4HANA.

And when moving to S/4HANA, many customers need to decide whether to go with an on-premise or a cloud solution. What are your thoughts on that?

Often, customers choose to bring some of the systems (quality system, development system) directly to the cloud. Also, if you start with the project and would like to get the look and feel of your SAP system in S/4HANA, a cloud offering helps you to get this without investing in hardware. In this area we have a very good collaboration with SUSE, who help us to set this up.

About the author

Gerd Hagmaier is the Global VP S/4HANA at Datavard. Previously he worked at SAP, where he was responsible for the development of the S/4HANA topic as an Enterprise Architect.