Microsoft Azure and SUSE High Availability – When Availability Matters

Friday, 13 September, 2019

This blog was written based on the SUSECON 2019 presentation given by Stephen Mogg, Technical Strategist for SAP and Public Cloud, and Mark Gonnelly, Senior Consultant for SUSE Consulting.

High Availability in the Cloud

Microsoft Azure is one of the best hyper-scale, enterprise-grade, hybrid cloud platforms available. In fact, it backs its services with comprehensive SLAs. But where do you turn when your business needs a higher SLA?

When High Availability Matters

SUSE High Availability Extension is the perfect companion to keep your business-critical systems up and running. While Azure provides superior reliability and security for cloud computing, SUSE High Availability adds an extra layer of protection against downtime using open source high availability clustering technology. Clustering helps safeguard your workloads from system failure and increases service availability, whether through greater reliability, redundancy or fast failover to standby systems.

What’s the Big Deal About Clustering?

Clustering seems like a very complex and daunting system, and to a degree, it is. If you look at the SUSE High Availability documentation, there are lots of intricate diagrams with a lot of moving parts – but when you break it down, there are really only two core components: Corosync and Pacemaker.

Corosync is the cluster membership and communication layer. It’s the piece the nodes use to talk to each other and confirm that they’re all still up and running.

Pacemaker sits on top of Corosync and acts as the resource manager. It’s the piece that does all the work: it continuously monitors the system, manages dependencies and, via a set of scripts, automatically stops, starts and migrates services based on whatever rules and policies you have configured.

Pacemaker is the resource manager, and like most managers, Pacemaker has a group of employees, if you will. These “employees” are called Resource Agents (RAs). Each RA knows how to start, stop and monitor a particular kind of resource and reports its health back to Pacemaker, so Pacemaker knows when to stop, start and/or migrate that resource. In short, resource agents provide the “intelligence” to Pacemaker.
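
To make that concrete, here is a minimal sketch of how a resource is defined with the crm shell that ships with the SUSE High Availability Extension. The example manages a floating IP address with the standard IPaddr2 resource agent; the address, netmask and timings are placeholders you would adapt to your own cluster.

    # Define a floating IP as a cluster resource, using the IPaddr2 resource agent,
    # and have Pacemaker monitor it every 10 seconds.
    crm configure primitive rsc_vip ocf:heartbeat:IPaddr2 \
        params ip=10.0.0.100 cidr_netmask=24 \
        op monitor interval=10s timeout=20s

    # Check what Pacemaker is doing with the resource.
    crm status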

Next we have fencing. Why do we need it? Because, from the cluster’s point of view, the loss of a peer node is indistinguishable from the loss of communication with that node – yet there is a big difference between a node that is down because it is physically broken and a node that only appears to be down because of a network failure. When the state of a node or resource cannot be established with certainty, fencing steps in. Even when the cluster has no idea what is happening on a given node, fencing ensures that the node is not running any important resources. Fencing is about moving from an UNKNOWN state to a KNOWN state.
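
On traditional on-premises clusters, fencing is very often implemented with SBD plus a hardware watchdog. As a rough sketch (the device path below is a placeholder, and your distribution’s documentation is the authority here), the setup looks something like this:

    # Initialize the shared SBD device once, from one node.
    sbd -d /dev/disk/by-id/example-shared-disk create

    # Register SBD as the cluster's fencing (STONITH) device and enable fencing.
    crm configure primitive stonith-sbd stonith:external/sbd \
        params pcmk_delay_max=30
    crm configure property stonith-enabled=true

As the next section explains, this classic approach needs rethinking in a public cloud.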

Implementing High Availability in the Cloud

Now that we have discussed the basic components of SUSE High Availability, we need to talk about how to implement that technology inside Azure. The first thing to note is that whether you get clustering or not depends on how you purchase SUSE in Azure. You can buy it either via the Azure marketplace or as “bring your own subscription” (BYOS).

There are two options: standard SUSE Linux Enterprise Server and SUSE Linux Enterprise Server for SAP. If you buy SUSE Linux Enterprise Server for SAP, the High Availability extension is included. If you buy standard SUSE Linux Enterprise Server through the Azure marketplace, you have no ability to add anything on to it – that is, you can’t add the High Availability extension to protect your applications, so you’ll have to use the BYOS model instead. Keep that in mind if you want to provide High Availability capabilities to your applications.

Additionally, there are certain technical considerations when clustering in a public cloud like Azure. It’s not impossible – but there are differences. Fencing is one such example. One of the most popular forms of fencing is the STONITH Block Device (SBD), and most cloud providers don’t allow a raw block device to be shared between multiple VMs. Also, when it comes to shared storage (NFS/SMB), you might get NFS or you might not – it depends on your public cloud provider. So, there are bits to finagle when clustering in the cloud, as opposed to on premises. There are Corosync changes to make, as well as fencing roles and permissions to tweak, as mentioned earlier.
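
As an illustration of those tweaks – and assuming you have already created an Azure service principal (or managed identity) with permission to power-cycle the cluster VMs – an Azure fencing resource typically looks something like the sketch below. All of the IDs are placeholders, and the SUSE and Microsoft setup guides remain the definitive reference.

    # Fence agent that restarts a failed node through the Azure API.
    crm configure primitive rsc_st_azure stonith:fence_azure_arm \
        params subscriptionId="<subscription-id>" resourceGroup="<resource-group>" \
               tenantId="<tenant-id>" login="<application-id>" passwd="<application-secret>" \
        op monitor interval=3600

    # Corosync usually needs cloud-friendly timing as well, for example a longer
    # token timeout in /etc/corosync/corosync.conf:
    #   totem {
    #     token: 30000
    #     consensus: 36000
    #   }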

The More You Learn…

For more detailed technical information, as well as best practices, links to SUSE and Microsoft resources and a demo of how it all works together, watch the video from SUSECON ’19 below or check out the PDF presentation here.

https://youtu.be/axyPUGS7Wu4

Join Us in Dublin!

If this has piqued your interest, why not plan to attend the next SUSECON? Registration for SUSECON 2020 is now open! Stay on the cutting edge of what’s happening in open source technology. Register today and attend next year’s conference in Dublin, Ireland.

A new era in Cloud Native Application Delivery is here

Tuesday, 10 September, 2019

Whether they are seizing new market opportunities, proactively responding to competitive pressures or driving operational efficiencies that give them a competitive edge, organizations can no longer wait months on end to execute on those goals. Agility is indeed the name of the game.

And this is the era of the application economy. From your customers to your business partners to your employees, applications are increasingly the vehicles through which information and services are consumed and critical business needs are addressed. As someone rightly said: “In this era of the application economy, every company is a software company.”

So it should come as no surprise that a key source of competitive advantage for most organizations today is the agility in delivering applications to production. At SUSE, we understand this need. We are continuing to enhance the delivery of modern containerized and cloud native applications through our application delivery solutions.

Launching SUSE Cloud Application Platform 1.5 and SUSE CaaS Platform 4

I am pleased to announce the latest updates to our application delivery portfolio, consisting of SUSE Cloud Application Platform and SUSE CaaS Platform. With these updates, we continue to provide and support solutions to create, deploy and manage workloads anywhere – on premise, hybrid and multi-cloud – with exceptional service, value and flexibility. The enhanced solutions provide exceptional experiences to both providers and consumers of Kubernetes-based application delivery platforms.

  • With SUSE CaaS Platform 4, we are the first to provide enterprises with advanced networking for Kubernetes based on the Cilium open source project. Leveraging Cilium, SUSE enables Kubernetes users to strengthen application security at scale with high performance packet filtering and network communication security policies that are easy to implement and control.

Check out this blog by Christopher Lentricchia to find out all the other innovations that SUSE has introduced with SUSE CaaS Platform 4.

  • With SUSE Cloud Application Platform 1.5, we introduce new application discovery and deployment capabilities that allow users to quickly and easily deploy applications and services that have been published as Helm charts, including hundreds of popular open source DevOps tools and ISV solutions as well as internally developed applications and services.
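
For readers new to Helm, deploying a published chart from the command line is conceptually just a couple of steps. The sketch below uses current Helm 3 syntax with an illustrative repository and chart name, and is meant as a generic example rather than a SUSE Cloud Application Platform-specific workflow.

    # Register a chart repository and install a chart from it as a named release.
    helm repo add example-charts https://charts.example.com
    helm repo update
    helm install my-release example-charts/my-app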

And there’s more – check out this blog by Troy Topnik to find out what’s new with SUSE Cloud Application Platform 1.5.

SUSE CaaS Platform 4 and SUSE Cloud Application Platform 1.5 will be available within 30 days.

If your organization is ready to leverage application delivery as a growth driver, SUSE can help you modernize and accelerate delivery of new and existing applications in your environment. For more information about these solutions, visit suse.com/solutions/application-delivery

Don’t forget to check out SUSE’s press release regarding the updates to SUSE Application Delivery Solutions.

3 Infrastructure Compliance Best Practices for DevOps

Tuesday, 10 September, 2019

For most IT organizations, the need for compliance goes without saying. Internal corporate policies and external regulations like HIPAA and Sarbanes-Oxley require compliance. Businesses in heavily regulated industries like healthcare, financial services, and public service are among those with the greatest need for strong compliance programs.

However, businesses in every industry need to consider compliance, whether that means staying current with the latest OS patch levels to avoid the impact of the latest security threats or adhering to software licensing agreements to avoid contract breaches. Without compliance, the business puts itself at risk of losing customer trust, financial penalties, and even jail time for those involved.

When examining potential vulnerabilities in IT, there are three dimensions that guide an effective compliance program: security compliance, system standards, and licensing or subscription management.

Security compliance typically involves a dedicated department that performs audits to monitor and detect security vulnerabilities. Whether a threat is noted in the press or identified through network monitoring software, it must be quickly remediated. With new threats cropping up daily, protecting the business and its sensitive data is critical.

For system standards compliance, most IT departments define an optimal standard for how systems should operate (e.g., operating system level, patch level, network settings, etc.).

In the normal course of business, systems often move away from this standard due to systems updates, software patches, and other changes. The IT organization must identify which systems no longer meet the defined standards and bring them back into compliance.

The third dimension of compliance involves licensing or subscription management which reduces software license compliance concerns and unexpected licensing costs. Compliance in this area involves gaining better visibility into licensing agreements to manage all subscriptions and ensure control across the enterprise.

To mitigate risk across the business in all three dimensions of compliance, the IT organization needs infrastructure management tools that offer greater visibility, automation, and monitoring. According to Gartner’s Neil MacDonald, vice president and distinguished analyst, “Information security teams and infrastructure must adapt to support emerging digital business requirements, and simultaneously deal with the increasingly advanced threat environment. Security and risk leaders need to fully engage with the latest technology trends if they are to define, achieve, and maintain effective security and risk management programs that simultaneously enable digital business opportunities and manage risk.”

Best Practice #1:

Optimize Operations and Infrastructure

With so many facets to an effective compliance program, the complexity of the IT infrastructure makes compliance a difficult endeavor. One of the most significant implications of a complex infrastructure is the delay and lack of agility from IT in meeting the needs of business users, ultimately driving an increase in risky shadow IT activities.

As business users feel pressure to quickly exceed customer expectations and respond to competitive pressures, they will circumvent the internal IT organization altogether to access services they need. They see that they can quickly provision an instance in the public cloud with the simple swipe of a credit card.

These activities pose a threat to the organization’s security protections, wreak havoc on subscription management, and take system standard compliance out of the purview of IT.

Optimizing IT operations and reducing infrastructure complexity go a long way toward reducing this shadow IT.

With an efficient server, VM, and container infrastructure, the IT organization can improve speed and agility in service delivery for its business users. An infrastructure management solution offers the tools IT needs to drive greater infrastructure simplicity. It enables IT to optimize operations with a single tool that automates and manages container images across development, test, and production environments, ensuring streamlined management across all DevOps activities.

Automated server provisioning, patching, and configuration enables faster, consistent, and repeatable server deployments. In addition, an infrastructure management solution enables IT to quickly build and deliver container images based on repositories and improve configuration management with parameter-driven updates. Altogether, these activities support a continuous integration/continuous deployment model that is a hallmark of DevOps environments.

When DevOps runs like a well-oiled machine in this way, IT provisions and delivers cloud resources and services to business users with speed and agility, making business users less likely to engage in shadow IT behaviours that pose risks to the business. As a result, compliance in all three dimensions—security, licensing, and system standards—is naturally improved.

Best Practice #2:

Closely Monitor Deployments for Internal Compliance

In addition to optimizing operations, improving compliance requires the ability to easily monitor deployments and ensure internal requirements are met. With a single infrastructure management tool, IT can easily track compliance to ensure the infrastructure complies with defined subscription and system standards.

License tracking capabilities enable IT to simplify, organize, and automate software licenses to maintain long-term compliance and enforce software usage policies that support security. With global monitoring, licensing can be based on actual usage data, which creates opportunities for cost improvements.

Monitoring compliance with defined system standards is also important to meeting internal requirements and mitigating risk across the business. By automating infrastructure management and improving monitoring, the IT organization can ensure system compliance through automated patch management and daily notifications of systems that are not compliant with the current patch level.

Easy and efficient monitoring enables oversight into container and cloud VM compliance across DevOps environments. With greater visibility into workloads in hybrid cloud and container infrastructures, IT can ensure compliance with expanded management capabilities and internal system standards. By managing configuration changes with a single tool, the IT organization can increase control and validate compliance across the infrastructure and DevOps environments.

Best Practice #3:

Improve Visibility of Systems and Deployments for Greater Security

The fundamental goal of any IT compliance effort is to remedy any security vulnerabilities that pose a risk to the business. Before that can be done, however, IT must audit deployments and gain visibility into those vulnerabilities.

An infrastructure management tool offers graphical visualization of systems and their relationship to each other. This enables quick identification of systems deployed in hybrid cloud and container infrastructures that are out of compliance.

This visibility also offers detailed compliance auditing and reporting with the ability to track all hardware and software changes made to the infrastructure. In this way, IT can gain an additional understanding of infrastructure dependencies and reduce any complexities associated with those dependencies. Ultimately, IT regains control of assets by drilling down into system details to quickly identify and resolve any health or patch issues.

Conclusion

DevOps has the potential to fundamentally change the way the business develops and delivers services. Despite the agility and flexibility DevOps can offer, complex IT infrastructures limit innovation and complicate compliance activities. To achieve three-dimensional compliance for optimal subscription usage, system standards, and security, the IT organization can improve simplicity and limit complexity with infrastructure management tools.

By automating management, streamlining operations, and improving visibility, these tools help IT optimize the environment for innovation, increase monitoring for internal compliance, and gain greater visibility into the security of systems and deployments. Ultimately, the business achieves the flexibility and agility offered by DevOps and builds a future defined by innovation while ensuring compliance across the enterprise.

Learn more at https://www.suse.com/solutions/it-infrastructure-management/

Managing Compliance With SUSE Manager

Code Commits: only half the story

Monday, 5 August, 2019

It’s not the first time I’ve been asked by a sales rep the following question: “The customer has looked at Stackalytics and is wondering why Rancher doesn’t have as many code commits as the competition. What do I say?”

For those of you unfamiliar with Stackalytics, it provides an activity snapshot, a developer selfie if you will, of commits and lines of code changed in different open source projects. Although it’s a very worthwhile service, some vendors like to use it as proof of their technical prowess and commitment to an open-source project’s ecosystem.

But does the number of code commits by a vendor tell the full story?

Certainly, some would argue that it does. For example, whilst working at Canonical, I regularly came across customers who’d ask us why we made relatively few commits to upstream OpenStack when compared to other vendors. This was despite the Ubuntu OpenStack distribution being used by just about everybody within the community. It seems that now, at Rancher, we’re being asked to justify our Kubernetes credentials by a similar measure despite the fact that our eponymous Kubernetes management platform has been downloaded over 100,000,000 times.

Perhaps those evaluating vendors should be asking different questions like:

  • Is it possible that some vendors hire teams of engineers to focus solely on developing code for upstream Kubernetes?

  • As a customer, will you get access to the engineering expertise needed to make those code commits?

  • Do more upstream code commits mean that the vendor’s Kubernetes management platform is better than competitive products?

  • Is the vendor with the most code commits more engaged with the Kubernetes community than everyone else?

At every tradeshow I’ve been to this year, community members have come to the booth to thank me for the Rancher platform and what Rancher Labs does for the Kubernetes ecosystem. They don’t care about code commits; they care about the business value we deliver.

Rancher helps tens of thousands of teams be successful with Kubernetes. Without it, they couldn’t easily realise advanced DevOps capabilities like continuous delivery, canary/blue/green deployments, service autoscaling, automated DNS & load balancing, SSL and certificate management, secret management and more. It’s these capabilities (plus not being locked into a single vendor ecosystem) that deliver extraordinary value to end users, their employers and the wider Kubernetes community. Best of all – they don’t have to pay for it!

It’s also worth remembering that contributing to a large open source community like Kubernetes isn’t a single-threaded experience. k3s was launched by Rancher in March 2019 to huge excitement. k3s is a Kubernetes distribution designed to run production workloads in remote, resource-constrained locations such as IoT devices and the network edge. Although the project isn’t measured by Stackalytics’ code commit counter, k3s amply demonstrates Rancher’s technical leadership and commitment to helping enterprises deploy Kubernetes from their core infrastructure to the network edge.
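
To give a sense of just how lightweight k3s is, standing up a single-node cluster is typically a one-liner using the project’s public install script (sketch below; as always, review a script before piping it to a shell):

    # Install and start a single-node k3s cluster.
    curl -sfL https://get.k3s.io | sh -

    # Query the node with the bundled kubectl.
    sudo k3s kubectl get nodes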

Building an Enterprise Kubernetes Strategy

For more information on how Rancher can help you build an enterprise Kubernetes strategy, download our recent whitepaper.

No More Sleepless Nights and Long Weekends Doing Maintenance

Wednesday, 31 July, 2019

This blog was written based on the SUSECON 2019 presentation given by Raine Curtis, North America Core Services Team Lead, SUSE Global Services, and Stephen Nemeth, Senior Architect, SUSE Global Services.


Datacenter maintenance – you dread it, right? Staying up all night to make sure everything runs smoothly and nothing crashes, or possibly losing an entire weekend to maintenance if something goes wrong. Managing your datacenter can be a real drag. But it doesn’t have to be that way.

At SUSECON 2019, Raine and Stephen discussed how SUSE can help ease your pain with SUSE Manager, a little Salt and a few best practices for datacenter management and automation.

SUSE Manager and Maintenance

So, let’s talk about SUSE Manager and how it can help with the patching lifecycle. Why do you need to manage your patching lifecycle?

If you already have SUSE Manager, you know that every night it reaches out to SUSE’s update servers, pulls down the latest updates and mirrors them into your SUSE Manager. And that’s great because you’re always getting updates, right?

Well… not exactly. Constant, unchecked updates can be a problem because you want stability in your environment. So what’s the answer? How do you manage your patching lifecycle?

Patching lifecycle management does just that – it manages your patches. It takes a snapshot of your patches and assigns them to a landscape: development (DEV), quality assurance (QA), user acceptance testing (UAT), production (PROD) and so on.

Then, when you are ready, you promote them into the appropriate landscape. This gives you time to do your testing before you promote patches into production, which means you stay in control and your systems are a lot more stable.
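
SUSE Manager also provides command-line tooling for this kind of channel promotion; one such helper is the spacewalk-manage-channel-lifecycle script. The sketch below uses an example channel label and is meant only as an illustration – check the patch lifecycle guide linked at the end of this post for the approach that matches your SUSE Manager version.

    # Create the 'dev' clones of a vendor channel tree (run once per channel tree).
    spacewalk-manage-channel-lifecycle --init -c sles12-sp4-pool-x86_64

    # Once patches have been tested, promote them to the next landscape.
    spacewalk-manage-channel-lifecycle --promote -c dev-sles12-sp4-pool-x86_64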

But I Only Have a Few Servers

Now, you may be thinking, “But I only have a few servers – do I really need a patch management system?” That depends. What would happen if just one bad patch hit one of those servers? It’s highly likely that you would have a production outage, your business would go down and it could cost you hundreds of thousands of dollars. SUSE Manager helps you control that and run patches through different lifecycles, so you always know what patches to prepare for – giving you control over when you want to deploy them.

Don’t Forget to Add the Salt

For even more automation, SUSE Manager can use Salt as its configuration manager. This lets you implement Salt “states” within an interface. States are templates that place systems into a known configuration – for example, which applications and services are installed and running on those systems. States are a way for you to describe what each of your systems should look like. Once written, states are applied to target systems, automating the process of managing and maintaining large numbers of systems in a known state. A state is basically a YAML file that describes the configuration you want a particular system in.
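
As a small, hedged example (package names and file paths are illustrative), a state that keeps a web server installed, enabled and running could look like this:

    # /srv/salt/webserver/init.sls
    apache2:
      pkg.installed: []
      service.running:
        - enable: True
        - require:
          - pkg: apache2

    # Applied to target systems with, for example:
    #   salt 'web*' state.apply webserver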

Salt is a broad-spectrum tool that lets you manage your states within the UI, alongside configuration management and version control.

Used together, SUSE Manager and Salt automate some of your most dreaded maintenance processes and make your life a lot easier.

Learn More

For a deeper dive into more benefits and best practices, check out the “Software-defined Datacenter Maintenance: No more sleepless nights and long weekends when doing maintenance” session from SUSECON 2019 below.

You can also download a PDF of the slide presentation here.

If you want a closer look into Patch Lifecycle Management with SUSE Manager, you can take a look at the Advanced Patch Lifecycle Management with SUSE Manager guide.

And don’t forget to check out SUSE Start for SUSE Manager to help jumpstart your implementation of SUSE Manager; find out more here.

The Road to Agile IT is Paved with Containers

Tuesday, 30 July, 2019

The holy grail for any CMO looking for their next gig is to find the perfect combination of addressable market, market timing, company, and product. That’s why I am so excited to be joining the team at Rancher Labs, the leader in container management software. Let’s look at all the variables.

Market Opportunity & Timing

The market for containers is conservatively HUGE! What’s a container? A container is a standard unit of software that packages up code and all associated dependencies, enabling an application to run quickly and reliably from one computing environment to another. For example, development teams are using containers to package entire applications and move them to the cloud without the need to make any code changes. As another example, containers make it easier to build workflows for modern applications that run between on-premises and cloud environments.

While containers are a good way to bundle and run your applications, you also need to manage the containers that run the applications. That’s where Kubernetes comes in. Kubernetes is an open source container orchestration engine for automating deployment, scaling, and management of containerized applications. Recent research indicates that approximately 40% of enterprises are running Kubernetes in production today, but in less than three years that number will increase to more than 84%!
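
As a small, illustrative example of what “automating deployment, scaling, and management” looks like in practice (names and image are placeholders), the manifest below asks Kubernetes to keep three replicas of a containerized application running and to replace any that fail:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-app
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: example-app
      template:
        metadata:
          labels:
            app: example-app
        spec:
          containers:
          - name: example-app
            image: registry.example.com/example-app:1.0
            ports:
            - containerPort: 8080

Once a manifest like this is applied, Kubernetes takes care of scheduling, restarts and scaling from there.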

As infrastructure increasingly moves to multi-cloud (e.g. on-premises, AWS, GCP, Azure) and enterprise applications become more complex, development and IT operations teams need an effective way to manage Kubernetes at scale.

Therein lies the opportunity!

Company and Product

If you don’t know already, Rancher Labs builds innovative, open source software for enterprises leveraging containers to deliver Kubernetes-as-a-Service. Rancher was founded by a group of cloud and open source thought leaders who have already made their mark at places like Cloud.com, Citrix, and GoDaddy. They foresaw the need and created our flagship Rancher platform, which allows users to easily manage all aspects of running Kubernetes in production, on any infrastructure across the data center, cloud, branch offices and the network edge.

Unlike solutions from competitors like Red Hat and Pivotal, our solution delivers the ideal balance of flexibility and control, including:

  • Multi-Cluster Application Support: Kubernetes users can deploy and maintain their applications on multiple clusters from a single action, reducing the load on operations teams and increasing productivity and reliability for businesses running in hybrid-cloud, multi-cloud, or multi-cluster Kubernetes environments.
  • Support for Cloud Native Kubernetes Services: In addition to offering two certified Kubernetes distributions (RKE and k3s), Rancher provides complete flexibility by enabling enterprise customers to manage any Kubernetes distribution and any cloud-native Kubernetes service such as GKE, EKS, and AKS. For users, every Kubernetes cluster behaves the same way and has access to all of Rancher’s integrated workload management capabilities.
  • No Vendor Lock-In: As free and open source software, Rancher costs much less to own and operate than PKS and OpenShift while providing a more capable product that doesn’t lock you into any single vendor’s ecosystem.

Addressable market? Check! Market timing? Check! Company? Check! Product? Check!

It doesn’t get any better than that!

While I am privileged to join Rancher, I am merely one small cog in the big wheel of their momentum. Check out what’s happened since the start of 2019 alone:

  • Customer Growth: We grew our customer base by 52% while YoY revenue grew 161%.
  • Product Innovation: We introduced major enhancements to Rancher with the release of version 2.2 and also launched new open source projects.
  • Funding: We raised another $25M in Series C funding, bringing the total amount raised to $55M. That means we’ve got loads of cash to invest in continuing our rapid growth.

You can read all about our momentum here, or to learn more, jump to www.rancher.com.

#RunKubernetesEverywhere!


Kubernetes Adoption Driving Rancher Labs Momentum

Tuesday, 23 July, 2019

This week Rancher Labs announced a record 161% year-on-year revenue growth, along with a 52% increase in the number of customers in the first half of 2019. Other highlights from H1’19 included:

  • Closure of a $25M series C funding round
  • Doubling of international headcount as we continue our expansion into 12 countries
  • Software downloads surpassed 100 million, making Rancher the industry’s most widely adopted Kubernetes software platform
  • General availability of Rancher 2.2
  • Continued investment in open source projects including Rio, Longhorn, k3s, and k3OS

You can find the complete release here.

We are grateful to our community of customers, partners, and users for the growth we achieved in the first half of 2019, and we will continue to gauge Rancher’s success in the larger context of enterprise adoption of Kubernetes. Rancher will continue to deliver value by enabling organizations to deploy and manage Kubernetes across their entire infrastructure.

Kubernetes Everywhere

Recent research reports that approximately 40% of enterprises are running Kubernetes in production today, but in less than three years that number will increase to more than 80%. What will drive that growth? Kubernetes helps organizations significantly increase the agility and efficiency of their software development teams, while also helping IT teams boost productivity, reduce costs and risks, and move closer to achieving their hybrid-cloud goals.

As container usage becomes more widespread across an organization, balancing the needs of developers who want autonomy and agility with the needs of IT teams who want consistency and control can prove challenging. Whether your organization builds large clusters of infrastructure and then offers development teams shared access to them, or leaves individual departments or DevOps teams to decide for themselves how and where to use Kubernetes, it is not uncommon for tension to develop between those wanting to run Kubernetes in exactly the way they need it and IT teams that want to maintain security and control over how Kubernetes is implemented.

Rancher’s Role in Enabling Everywhere

Only Rancher is purpose-built to address the requirements of both developer teams and IT operations teams, thereby enabling organizations to deploy and manage Kubernetes at scale.

Here’s how:

  • Simplified Cluster Operations – In addition to offering two certified Kubernetes distros (RKE and k3s), Rancher enables enterprise customers to utilize any Kubernetes distribution or hosted Kubernetes service. Customers can use cloud-native Kubernetes services such as GKE, EKS, and AKS. By supporting any Kubernetes distribution or service, Rancher enables customers to implement Kubernetes in the most cost-effective way and operate Kubernetes clusters in the simplest way possible, while still leveraging the consistency of Kubernetes across all types of infrastructure.

  • Security & Policy Management – Rancher provides IT organizations with centralized management and control over all Kubernetes clusters, regardless of how they are implemented or operated. By managing security policies for all of your Kubernetes clusters in one place, Rancher minimizes human error and wasted energy. Rancher’s unified web UI replicates all functionality available within Kubernetes and includes tooling for Day Two operations. Full control via CLI and API is also available. Rancher is simple to install in any environment, integrates with user authentication platforms, and quickly starts to address many of the workflow challenges experienced by developer and operations teams who work with Kubernetes. A single Rancher installation can manage hundreds of Kubernetes clusters running on-premise or in any cloud. This provides technical teams with a seamless development experience and helps business leaders adopt a multi-cloud or hybrid-cloud strategy.

  • Shared Tools & Services – Rancher provides a rich set of shared tools and services on top of any Kubernetes cluster. Rancher ships with CI/CD, monitoring, alerting, logging, and all the tools needed to make your Kubernetes clusters immediately useful. Less time spent worrying about your infrastructure means more resources to invest in the accelerated delivery of innovative cloud-native applications.

So, while we are proud of our success in the first half of 2019, we are even more excited about the future! As Kubernetes continues to proliferate and grow in complexity, organizations will increasingly rely upon solutions like Rancher that enable them to run Kubernetes EVERYWHERE!

To learn more about Rancher, check us out at www.rancher.com.

For an introduction to Kubernetes, join an upcoming online training session.


Join the Best for your SAP Digital Core on IBM Power Systems

Monday, 15 July, 2019

To win a relay race, you not only need top performing runners but also teammates who work well together. No matter how fast the runners are for their respective legs in the race, working together for well-executed hand-offs is critical to the win. Similarly, you need a strong team that works together seamlessly for your transition to SAP HANA.

The SAP Digital Core is the underlying infrastructure that makes it possible to transform your business into an “Intelligent Enterprise”. At its center is SAP HANA. SAP originally developed the HANA database on Linux (specifically, SUSE Linux Enterprise Server), and over the last several years it has delivered a suite of solutions built on open source technologies. I’ve written a brief article about the open source solutions for SAP environments in this month’s Power Systems edition of IBM Systems Magazine. In it, I talk about how this strategy gives you, the SAP customer, a broader choice of technologies and vendors. The article covers the technologies, but here I want to tell you why SUSE should be your open source partner for running SAP solutions to, as we say, Join the Best in your market.

 

Trusted and Preferred

SAP has trusted SUSE for 20 years, delivering products and services like HANA Cloud Platform, SAP Cloud Platform and SAP Data Hub on SUSE before making them broadly available on other platforms. SAP is also a customer, trusting SUSE solutions for its in-house implementation platform.

SAP customers overwhelmingly prefer SUSE solutions for SAP HANA. We have over 30,000 SAP customers and more than 100 customer success stories. In the four years since SAP validated HANA on Power servers, over 2,000 customers have chosen SUSE Linux Enterprise Server for SAP Applications for their implementations.

 

Open and Flexible

SUSE is now officially the world’s largest independent open source software provider, and we remain committed to helping you retain control and flexibility. We deliver solutions that help customers avoid vendor lock-in. SUSE solutions are always open and flexible, remaining true to the original vision of open source software while recognizing that the real world isn’t that simple. I mean, let’s face it: your SAP infrastructure is probably going to be composed of mixed technologies for a while. For example, many of our customers running SAP HANA on Power Systems have their NetWeaver applications in a separate AIX LPAR. You probably also have a strategy to leverage the hyperscalers for service delivery in the cloud. All SUSE solutions for SAP environments are fully open and made available upstream to the community with open APIs. Not all vendors do this, which is why we call ourselves “The Open, Open Source Company.”

By the way, our open source approach and flexibility is one reason that SAP consistently turns to SUSE when implementing new solutions.

 

Innovative

SAP and SUSE have been co-innovation partners for 20 years. We delivered the first OS distribution designed specifically for SAP systems the year before SAP HANA was available in the market. Even back then, it had features to reduce downtime, maintain security, boost performance, and speed up deployment. Every year since then, we’ve been the first to introduce new features and solutions that are ideal for SAP environments, including stepping up to deliver a working Linux distribution for SAP HANA on Power Systems.

 

Learn more

On September 5, 2019, SUSE will host an IBM Systems Media webinar titled “Making the Best Choice for your SAP HANA Infrastructure.” Click here to register for this webinar, where our very own IBM Champion Jay Kruemcke and I will discuss:

  • how you can align with SAP’s open source strategy
  • why SUSE is trusted and preferred for SAP infrastructures
  • SUSE innovations for SAP systems, especially those on IBM Power Systems

Follow us on Twitter @MichaelDTabron and @mr_sles.

Does SAP Migration to Cloud have to take forever?

Thursday, 11 July, 2019

The short answer is NO. FFF Enterprises, a US pharmaceutical company (if you live in the US, this is where your flu vaccine came from), needed to improve its IT operations and reduce its costs. Its on-premises SAP servers were not holding up to the stress the business demanded. Knowing they needed to do something and do it quickly, FFF called on Managecore – experts in SAP and Google Cloud Platform, and SUSE aficionados. The folks at Managecore were able to complete the SAP migration from on-premises, with an O/S that shall not be named, to Google Cloud – using the number one O/S for SAP, SUSE Linux Enterprise Server. And they did it in 12 weeks!

Join the ASUG event on Oct 3, 2019, from 1:00 pm to 2:00 pm CDT to find out more. Register here!

Google liked it so much that they created a reference.

SUSE partner Managecore crossed the finish line to Join the Best!