Did you consider the day pass option for SUSECON 2017?

Monday, 10 July, 2017

In preparation for SUSECON, I often get the feedback that SUSECON is too technical for some of the managers and CxOs from customers and partners we talk to. And although SUSECON provides a lot of business-related sessions, use cases and customer references, five days of open source technology can be a bit overwhelming and does not always fit properly into a busy agenda.

But let’s not forget that SUSECON also offers a day pass option. It is an easy way to get access, experience the keynotes, visit the sessions that are most relevant for you that day, do some networking and catch up on the technology showcase floor.

Let’s take an example where the day pass option can be of value. A partner, for instance, can:

  1. Attend the Partner Summit on Monday.
  2. Buy a day pass for Tuesday.

 

This way the partner gets the full SUSECON experience and might even bump into one of the SUSE executives while catching up on the technology showcase floor.

But there are many more examples. Think of an SAP infrastructure manager who needs to be updated on the latest and greatest that SUSE and SAP bring to market. They can now pick the day with the SAP keynote (just an example) and follow the SAP-related break-out sessions on roadmaps, SAP customer successes and more during the rest of the day!

We hope to meet you in Prague. More information: www.susecon.com

Looking Back on LinuxCon China 2017

Friday, 30 June, 2017

The following article has been contributed by ChenZi Cao, QA Engineer, SUSE China. 


Last Monday and Tuesday, I attended LinuxCon 2017 in Beijing. The event took place at China National Convention Center – a really impressive building.

China National Convention Center

At the event we learned about the newest and most interesting open source technologies, including Linux, containers, cloud technologies and networking.

On Monday, during the kick-off meeting, I learned that open source is getting more and more popular. Many companies have joined open source initiatives and would like to contribute to open source communities on a regular basis. I also heard for the first time that online payment (e.g. Alipay), online shopping, bicycle sharing and high-speed rail are considered the “New Four Great Inventions” of China. After the keynotes, the official conference program started. Many interesting topics were shared by speakers from all over the world, and I personally attended a number of these presentations and many other good talks.

On Tuesday, Linus Torvalds came to LinuxCon, so a lot of people who are interested in open source “stormed” the convention center very early to wait for this superstar. One quote from Linus’ interview impressed me very much: “I like the feeling when I wake up. I have a job, a job that I’m interested in and it’s challenging but not such a burden on me”. I have a job I’m interested in, too 😊.

After the interviews with Linus, I had the chance to listen to more presentations, and I also visited some exhibition booths. That afternoon, I won a prize in a lottery: a Polaroid camera! I had never won a prize in a lottery before. A really successful event!

I’m glad I could join LinuxCon China in Beijing this year!

With Lenovo: Different is Better; with SUSE: We adapt. You succeed. -> Great combo, right?

Wednesday, 28 June, 2017

Last week, on June 20, 2017, Lenovo launched the largest data center portfolio in its history to help customers harness the “intelligent revolution” – and SUSE is all for it.

According to Kirk Skaugen, the new President of Lenovo’s Data Center Group (DCG), “the new Lenovo ThinkSystem portfolio pulls together next-generation servers, storage and networking systems under a single unified brand instead of multiple brands coming from acquisitions. ThinkSystem is engineered to reliably and safely deliver demanding workloads such as real-time analytics, DevOps application services, and software-defined storage services.”

And since data centers are increasingly fluid and business needs are fast-evolving, Lenovo also announced ThinkAgile, a software-defined solutions portfolio designed for hybrid cloud, hyper-converged infrastructure, and software-defined storage.

We are very excited about this launch coming from such an important alliance partner and it may make you wonder:

“How is SUSE positioned to support this new approach?”

Well… here is how:

 

Please visit Lenovo’s Executive Briefing Center in Raleigh, North Carolina, where you’ll be able to see all of our joint solutions on full display.

Or, keep up with our Strategic Partnership at suse.com/lenovo

If you want more information on Lenovo’s launch, you can read the blogs below:

 

Moving Your Monolith: Best Practices and Focus Areas

Monday, 26 June, 2017

You have a complex monolithic system that is critical to your business.
You’ve read articles and would love to move it to a more modern platform
using microservices and containers, but you have no idea where to start.
If that sounds like your situation, then this is the article for you.
Below, I identify best practices and the areas to focus on as you evolve
your monolithic application into a microservices-oriented application.

Overview

We all know that net new, greenfield development is ideal, starting with
a container-based approach using cloud services. Unfortunately, that is
not the day-to-day reality inside most development teams. Most
development teams support multiple existing applications that have been
around for a few years and need to be refactored to take advantage of
modern toolsets and platforms. This is often referred to as brownfield
development. Not all application technology will fit into containers
easily. It can always be made to fit, but one has to question if it is
worth it. For example, you could lift and shift an entire large-scale
application into containers or onto a cloud platform, but you will
realize none of the benefits around flexibility or cost containment.

Document All Components Currently in Use


Taking an assessment of the current state of the application and its
underpinning stack may not sound like a revolutionary idea, but when
done holistically, including all the network and infrastructure
components, there will often be easy wins that are identified as part of
this stage. Small, incremental steps are the best way to make your
stakeholders and support teams more comfortable with containers without
going straight for the core of the application. Examples of
infrastructure components that are container-friendly are web servers
(e.g., Apache HTTPD), reverse proxies and load balancers (e.g., HAProxy),
caching components (e.g., memcached), and even queue managers (e.g., IBM
MQ). Say you want to go to the extreme: if the application is written in
Java, could a more lightweight Java EE container be used that supports
running inside Docker without having to break apart the application
right away? WebLogic, JBoss (Wildfly), and WebSphere Liberty are great
examples of Docker-friendly Java EE containers.
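
To make that concrete, here is a minimal, hypothetical sketch of what
“without breaking apart the application” can look like in practice. It
assumes an existing Java EE monolith packaged as a WAR; the class names
and paths are illustrative, and nothing in the code is specific to any
one server, so the same WAR could run on WildFly or WebSphere Liberty
inside a container.

```java
// Illustrative only: a JAX-RS endpoint as it might already exist inside
// a monolithic WAR. Because it relies solely on standard Java EE APIs,
// the WAR can run unchanged on a Docker-friendly server such as WildFly
// or WebSphere Liberty.
package com.example.monolith; // hypothetical package name

import javax.ws.rs.ApplicationPath;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Application;
import javax.ws.rs.core.MediaType;

@ApplicationPath("/api")
public class MonolithApplication extends Application {
    // An empty Application subclass is enough; the server discovers the
    // annotated resource classes on its own.
}

@Path("/status")
class StatusResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public String status() {
        // A simple status endpoint that a container platform can probe.
        return "{\"status\":\"UP\"}";
    }
}
```

The point is not the endpoint itself, but that moving the runtime into
a container does not require touching application code like this at all.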

Identify Existing Application Components

Now that the “easy” wins at the infrastructure layer are running in
containers, it is time to start looking inside the application to find
the logical breakdown of components. For example, can the user interface
be segmented out as a separate, deployable application? Can part of the
UI be tied to specific backend components and deployed separately, like
the billing screens with billing business logic? There are two important
notes when it comes to grouping application components to be deployed as
separate artifacts:

  1. Inside monolithic applications, there are always shared libraries
    that will end up being deployed multiple times in a newer
    microservices model. The benefit of multiple deployments is that
    each microservice can follow its own update schedule. Just because a
    common library has a new feature doesn’t mean that everyone needs it
    and has to upgrade immediately.
  2. Unless there is a very obvious way to break the database apart (like
    multiple schemas) or it’s currently across multiple databases, just
    leave it be. Monolithic applications tend to cross-reference tables
    and build custom views that typically “belong” to one or more other
    components because the raw tables are readily available, and
    deadlines win far more than anyone would like to admit.
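
To make the grouping more concrete, here is a small, hypothetical
sketch (all names are illustrative) of how a billing boundary might be
expressed inside the monolith before anything is physically split out:
the rest of the application talks to billing only through an interface,
so the in-process implementation can later be replaced by a call to a
separately deployed billing service without touching the callers.

```java
// Hypothetical billing seam inside the monolith. The interface is the
// boundary: today it is implemented in-process; later it can delegate
// to a separately deployed billing service.
package com.example.monolith.billing; // illustrative package name

import java.math.BigDecimal;

public interface BillingService {

    /** Returns the outstanding balance for a customer account. */
    BigDecimal outstandingBalance(String accountId);

    /** Records a charge against a customer account. */
    void charge(String accountId, BigDecimal amount, String description);
}

/** The current in-process implementation, still living in the monolith. */
class InProcessBillingService implements BillingService {

    @Override
    public BigDecimal outstandingBalance(String accountId) {
        // Existing monolith logic would go here, e.g. a query against
        // the shared database.
        return BigDecimal.ZERO;
    }

    @Override
    public void charge(String accountId, BigDecimal amount, String description) {
        // Existing monolith logic would go here.
    }
}
```

The same seam also helps with the second note above: as long as callers
go through the interface, the billing tables can stay in the shared
database until there is a compelling reason to move them.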

Upcoming Business Enhancements

Once you have made some progress and identified application components
that could be split off into separate deployable artifacts, make
business enhancements your number one avenue for initiating the
redesign of the application into smaller container-based applications,
which will eventually become your microservices. If you have identified
billing as the first area you want to split off from the main
application, then go through the requested enhancements and bug fixes
related to those application components. Once you have enough for a
release, start working on it, and include the separation as part of the
release. As you progress through the different silos in the
application, your team will become more proficient at breaking down the
components and packaging them in their own containers.
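
Continuing the hypothetical billing example from above (all names and
URLs are illustrative), including the separation in a release can be as
small a change as swapping the in-process implementation behind the
existing interface for a thin client that calls the newly extracted
billing service:

```java
// Hypothetical follow-up to the earlier billing seam: once the billing
// component ships as its own container, the monolith swaps its
// in-process implementation for a thin HTTP client behind the same
// interface.
package com.example.monolith.billing; // illustrative package name

import java.io.IOException;
import java.io.UncheckedIOException;
import java.math.BigDecimal;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

class RemoteBillingService implements BillingService {

    // Location of the extracted billing service (an assumption; in a real
    // deployment this would come from configuration or service discovery).
    private final String baseUrl;

    RemoteBillingService(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    @Override
    public BigDecimal outstandingBalance(String accountId) {
        try {
            URL url = new URL(baseUrl + "/billing/accounts/" + accountId + "/balance");
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            connection.setRequestMethod("GET");
            try (Scanner scanner = new Scanner(connection.getInputStream(),
                    StandardCharsets.UTF_8.name())) {
                // Assumes the new service returns the balance as plain text.
                return new BigDecimal(scanner.nextLine().trim());
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    @Override
    public void charge(String accountId, BigDecimal amount, String description) {
        // A POST to the billing service would go here; omitted for brevity.
    }
}
```

Callers never notice the difference, which keeps the separation small
enough to ship alongside the normal enhancements and bug fixes in that
release.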

Conclusion

When a monolithic application is decomposed and deployed as a series of
smaller applications using containers, it is a whole new world of
efficiency. Scaling each component independently based on actual load
(instead of simply building for peak load), and updating a single
component (without retesting and redeploying EVERYTHING) will
drastically reduce the time spent in QA and getting approvals within
change management. Smaller applications that serve distinct functions
running on top of containers are the (much more efficient) way of the
future.

Vince Power is a Solution Architect who has a focus on cloud adoption
and technology implementations using open source-based technologies. He
has extensive experience with core computing and networking (IaaS),
identity and access management (IAM), application platforms (PaaS), and
continuous delivery.


Sweating hardware assets at Experian with SUSE Enterprise Storage

Tuesday, 6 June, 2017

When Experian’s Business Information (BI) team, which oversees infrastructure and IT functions, saw customer demand for better and more comprehensive data insights grow at an unprecedented rate, the company required a storage solution that would let it keep delivering the same level of performance. Implementing SUSE Enterprise Storage gave Experian a platform for seamless capacity and performance growth that will enable future infrastructure and data projects without the company having to worry about individual servers hitting capacity.

The Problem

As a company facing increasing customer demands for better and more comprehensive insights, Experian began incorporating new data feeds into their core databases, enabling them to provide more in-depth insights and analytical tools for their clients. Experian went from producing a few gigabytes a month to processing hundreds of gigabytes an hour. This deep dive into big data analytics, however, came with limitations – how and where would Experian store larger data-sets while maintaining the same level of performance?

From the start, Experian had great success running ZFS as a primary storage platform, providing the flexibility to alternate between performance and capacity growth, depending on the storage medium. The platform enabled them to adapt to changing customer and business needs by seamlessly shifting between the two priorities.

But Experian’s pace of growth highlighted several weaknesses. First, standalone NAS platforms were insufficient, becoming unwieldy and extremely time-consuming to manage. Shuffling data stores between devices took days to complete, causing disruptions during switchovers. The second challenge was a lack of high availability – Experian had developed robust business continuity and disaster recovery capabilities, but in the process had given up a certain degree of automation and responsiveness. Their systems could not accommodate the customer demand for 24/7 real-time access to data created by the advent of APIs and the digitalization of the economy. Experian’s third and greatest challenge was replicating data. Data would often fluctuate and wind up asynchronous, creating a precarious balance – if anything started to lag, the potential for disruption and data loss was huge.

Experian had implemented another solution exclusively in their storage environment that had proven to be rock solid and equally flexible. While the team was happy with its performance, the new platform failed to fully address the true performance issue and devices and controller cards would still occasionally stall. As a company in the business of providing quick data access, the lag time raised serious concerns and presented obstacles in meeting client and business needs.

The Solution

Experian saw only one real short-term solution and moved to running ZFS on SUSE Linux Enterprise. This switch bought Experian time to find a more durable resolution, but it was also fraught with limitations. Experian spent a number of weeks trying to find a permanent solution that would protect both its existing investment and future budget. To work around the limitations, Experian temporarily added another layer above its existing estate to manage the distribution and replication of data.

As Experian was preparing to purchase the software and hardware needed for a more long-term solution, they came across SUSE’s new product offering – SUSE Enterprise Storage, version 3. Based on the open source project Ceph, SUSE Enterprise Storage offered everything Experian needed – file and block storage plus snapshots – and ran well on their existing HPE DL380 platform. SUSE Linux Enterprise had already been Experian’s operating system of choice for a few years, proving to be reliable, fast and flexible. SUSE support teams were also responsive and reliable – this new offering was the perfect product to meet Experian’s needs.

The Outcome

Experian’s initial SES build was modest, based around four DL380s as OSDs and four blades as MONs. Added to that were two gateway servers to provide block storage access for VMware and Windows clients. SUSE Enterprise Storage’s performance met and exceeded Experian’s expectations – even as a cross-site cluster, real-life IOPS easily run into the thousands. The benefit of software-defined storage is that it abstracts problems away from the hardware and eliminates the issue of individual servers hitting capacity. Experian can add more disks to make space for more data, or add another server when access slows down, without having to pinpoint exactly where they need to go – so capacity planning is much less of a headache. Software-defined storage also enables Experian to sweat its server hardware for longer, making budgeting and capacity planning easier.

While SES doesn’t replace the flash-based storage Experian uses for databases, having a metro-area cluster means that business continuity is taken care of. Experian ended up with a modern storage solution on modern hardware that gives the company a starting platform for seamless capacity and performance growth, enabling future infrastructure and data projects.

Agile Transition to SAP S/4HANA Using the SLO Approach

Thursday, 1 June, 2017

The transition from today’s SAP environments to SAP S/4HANA is an important initiative, but many companies are understandably concerned about getting it right. This is a guest blog by Jan Durinda of Datavard, a SUSE partner and an SAP partner who offers an innovative solution for a smooth transition for SAP data to the S/4HANA environment.

Posted on 31.05.2017 by Jan Durinda

How many companies in your region have migrated to SAP S/4HANA from their old SAP systems? Yes, you are right – not too many (compared with the overall number of SAP customers).

And why is that? Many companies are waiting for their peers to go first, so that there are at least a few successful reference cases. Here are the top concerns we hear from decision makers:

  • It is a complicated process.
  • It is not yet 100% bulletproof, and potential issues may occur.
  • Existing brownfield solutions do not meet our requirements and are not agile enough.

With more than 20 years of experience in SLO (System Landscape Optimization) projects, we have managed to address these concerns and design a simple and straightforward process for the preparation and transition to SAP S/4HANA.

The SLO approach enables companies to align their existing SAP system landscape after restructuring their business, updating existing processes, integrating a recently acquired company, or removing parts of the business data from the landscape due to a divestiture. The SLO approach ensures full data consistency and data integrity within transformation projects such as data migrations and data conversions.

We have upgraded the SLO approach so that it now supports both SLO activities and the transition to SAP S/4HANA. Moreover, these activities can now be combined and executed within one project.

5 steps for a smooth transition to SAP S/4HANA

Step 1 – Datavard Fitness Test with a focus on S/4HANA readiness checks

During this initial step, a complete scan of the ERP system(s) is performed. Based on the results, we evaluate the overall readiness of the system for migration to S/4HANA. This service, supported by automated tools, tells you precisely how and which business processes are impacted by S/4HANA. A major part of the analysis is also a component compatibility overview and a detailed assessment of custom development, with recommendations for all necessary adjustments. Moreover, the Datavard Fitness Test helps you calculate the required hardware size, where we also consider the so-called “quick wins” – housekeeping and archiving activities that lead to a reduction of TCO.

Step 2 – Creation of an empty S/4HANA shell

We decided to save every possible effort on the customer side and make the whole transition as simple as possible. Therefore, using the Datavard solution “Lean System Copy”, we bring all possible setup, customizing and custom development from the original system directly to S/4HANA. With this approach, the majority of the setup work is already done and only minor trimming is required. Another huge advantage is that the setup can be done during uptime, so no daily business processes are affected. Once the S/4HANA shell is built, business processes can be simulated, which helps prepare business users before the “real” transition with all data takes place.

Step 3 – Pre-selection of data to be migrated

As already mentioned above, the Datavard approach is based on proven SLO technology, which enables SAP users to transfer only the data they really need (e.g. a certain company code) and get rid of obsolete or outdated data. This strategy reduces TCO and improves overall data quality.

Step 4 – Migration

During downtime, which usually happens over a weekend, the pre-selected data is taken from the source system(s) and moved directly into the new S/4HANA structures in an adjusted format. The migration is done at the database level; therefore, additional transformation of the data (e.g. renaming) is also possible.

Step 5 – Evaluation

After all the data is moved, we run an evaluation supported by the automated tool KATE, which dramatically reduces the overall effort.

We understand that each customer is unique, and therefore we also tailor the solution accordingly. The agility of our solution for the transition to SAP S/4HANA relies on the following main principles:

  • Your systems can be merged – multiple sources can be used for the migration
  • Selective migration is possible, taking only the organizational units the customer prefers
  • Your data can be harmonized on the fly using transformation rules (rename, merge, …)
  • Automated testing proves system suitability
  • Downtime is reduced, since the upgrade phase and the migration-to-HANA-DB phase are skipped
  • Near-zero downtime is achievable
  • Effort is reduced thanks to the Datavard S/4HANA readiness check

 

For information on the recommended and supported operating system for SAP HANA, visit the SUSE website.

Refactoring Your App with Microservices

Thursday, 1 June, 2017

So you’ve decided to use microservices. To help implement them, you may
have already started refactoring your app. Or perhaps refactoring is
still on your to-do list. In either case, if this is your first major
experience with refactoring, at some point, you and your team will come
face-to-face with the very large and very obvious question: How do you
refactor an app for microservices? That’s the question we’ll be
considering in this post.

Refactoring Fundamentals

Before discussing the how part of refactoring into microservices, it
is important to step back and take a closer look at the what and
when of microservices. There are two overall points that can have a
major impact on any microservice refactoring strategy.

Refactoring = Redesigning

Refactoring a monolithic application into microservices and designing a
microservice-based application from the ground up are fundamentally
different activities. You might be tempted (particularly when faced with
an old and sprawling application which carries a heavy burden of
technical debt from patched-in revisions and tacked-on additions) to
toss out the old application, draw up a fresh set of requirements, and
create a new application from scratch, working directly at the
microservices level. As Martin Fowler suggests in this post, however,
designing a new application at the microservices level may not be a good
idea at all. One of the key takeaway points from Fowler’s analysis is
that starting with an existing monolithic application can actually work
to your advantage when moving to microservice-based architecture. With
an existing monolithic application, you are likely to have a clear
picture of how the various components work together, and how the
application functions as a whole. Perhaps surprisingly, starting with a
working monolithic application can also give you greater insight into
the boundaries between microservices. By examining the way that they
work together, you can more easily see where one microservice can
naturally be separated from another.

Refactoring isn’t generic

There is no one-method-fits-all approach to refactoring. The design
choices that you make, all the way from overall architecture down to
code-level, should take into account the application’s function, its
operating conditions, and such factors as the development platform and
the programming language. You may, for example, need to consider code
packaging—if you are working in Java, this might involve moving from
large Enterprise Application Archive (EAR) files (each of which may
contain several Web Application Archive (WAR) packages) into separate
WAR files.

General Refactoring Strategies

Now that we’ve covered the high-level considerations, let’s take a look
at implementation strategies for refactoring. For the refactoring of an
existing monolithic application, there are three basic approaches.

Incremental

With this strategy, you refactor your application piece-by-piece, over
time, with the pieces typically being large-scale services or related
groups of services. To do this successfully, you first need to identify
the natural large-scale boundaries within your application, then target
the units defined by those boundaries for refactoring, one unit at a
time. You would continue to move each large section into microservices,
until eventually nothing remained of the original application.

Large-to-Small

The large-to-small strategy is in many ways a variation on the basic
theme of incremental refactoring. With large-to-small refactoring,
however, you first refactor the application into separate, large-scale,
“coarse-grained” (to use Fowler’s term) chunks, then gradually break
them down into smaller units, until the entire application has been
refactored into true microservices.

The main advantages of this strategy are that it allows you to stabilize
the interactions between the refactored units before breaking them down
to the next level, and gives you a clearer view into the boundaries
of—and interactions between—lower-level services before you start
the next round of refactoring.

Wholesale Replacement

With wholesale replacement, you refactor the entire application
essentially at once, going directly from a monolith to a set of
microservices. The advantage is that it allows you to do a full
redesign, from top-level architecture on down, in preparation for
refactoring. While this strategy is not the same as
microservices-from-scratch, it does carry with it some of the same
risks, particularly if it involves extensive redesign.

Basic Steps in Refactoring

What, then, are the basic steps in refactoring a monolithic application
into microservices? There are several ways to break the process down,
but the following five steps are (or should be) common to most
refactoring projects.

(1) Preparation

Much of what we have covered so far is preparation.
The key point to keep in mind is that before you refactor an existing
monolithic application, the large-scale architecture and the
functionality that you want to carry over to the refactored,
microservice-based version should already be in place. Trying to fix a
dysfunctional application while you are refactoring it will only make
both jobs harder.

(2) Design: Microservice Domains

Below the level of large-scale,
application-wide architecture, you do need to make (and apply) some
design decisions before refactoring. In particular, you need to look at
the style of microservice organization which is best suited to your
application. The most natural way to organize microservices is into
domains, typically based on common functionality, use, or resource
access:

  • Functional Domains. Microservices within the same functional
    domain perform a related set of functions, or have a related set of
    responsibilities. Shopping cart and checkout services, for example,
    could be included in the same functional domain, while inventory
    management services would occupy another domain.
  • Use-based Domains. If you break your microservices down by use,
    each domain would be centered around a use case, or more often, a
    set of interconnected use cases. Use cases are typically centered
    around a related group of actions taken by a user (either a person
    or another application), such as selecting items for purchase, or
    entering payment information.
  • Resource-based Domains. Microservices which access a related
    group of resources (such as a database, storage, or external
    devices) can also form distinct domains. These microservices would
    typically handle interaction with those resources for all other
    domains and services.

Note that all three styles of organization may be present in a given
application. If there is an overall rule at all for applying them, it is
simply that you should apply them when and where they best fit.
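
As an illustration only (the names are assumptions, not a
prescription), the sketch below shows how the three styles can coexist
in one application: shopping cart and checkout form a functional
domain, a purchase flow sits in a use-based domain, and inventory
access is isolated in a resource-based domain that everything else
calls.

```java
// Illustrative only: how the three domain styles might map onto service
// interfaces. All names are hypothetical.
package com.example.shop;

import java.util.List;

// Functional domain: ordering. Shopping cart and checkout belong together
// because they share a related set of responsibilities.
interface ShoppingCartService {
    void addItem(String cartId, String sku, int quantity);
    List<String> listItems(String cartId);
}

interface CheckoutService {
    /** Turns a cart into an order and returns the new order id. */
    String checkout(String cartId, String paymentToken);
}

// Use-based domain: centered on the "select items and pay" use case,
// composed from the functional services above.
interface PurchaseFlowService {
    String completePurchase(String cartId, String paymentToken);
}

// Resource-based domain: the only services allowed to touch the inventory
// database; every other domain goes through this boundary.
interface InventoryService {
    int stockLevel(String sku);
    void reserve(String sku, int quantity);
}
```

In practice these interfaces would live in separate deployables; the
grouping, not the code, is the design decision being made at this step.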

(3) Design: Infrastructure and Deployment

This is an important step, but one that is easy to treat as an
afterthought. You are turning an application into what will be a very
dynamic swarm of microservices, typically in containers or virtual
machines, and deployed, orchestrated, and monitored by an infrastructure
which may consist of several applications working together. This
infrastructure is part of your application’s architecture; it may (and
probably will) take over some responsibilities which were previously
handled by high-level architecture in the monolithic application.

(4) Refactor

This is the point where you actually refactor the application code into
microservices. Identify microservice boundaries, identify each
microservice candidate’s dependencies, make any necessary changes at
the level of code and unit architecture so that they can stand as
separate microservices, and encapsulate each one in a container or VM.
It won’t be a trouble-free process, because reworking code at the scale
of a major application never is, but with sufficient preparation, the
problems that you do encounter are more likely to be confined to
existing code issues.
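
As a deliberately minimal, framework-free sketch (the names and
endpoint are hypothetical, and in practice you would reuse whatever
stack the monolith already uses), this is roughly what an extracted
candidate looks like once it can stand on its own: a small process with
an HTTP interface that can be wrapped in a container image and deployed
independently.

```java
// Hypothetical extracted microservice: a tiny standalone HTTP process
// built only on JDK classes, ready to be packaged into a container image.
import com.sun.net.httpserver.HttpServer;

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class ExtractedServiceMain {

    public static void main(String[] args) throws IOException {
        // The port comes from the environment so the container platform can
        // control it (assumption: a PORT variable is injected at deploy time).
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);

        // One context per responsibility carved out of the monolith; a health
        // endpoint is shown here as the simplest example.
        server.createContext("/health", exchange -> {
            byte[] body = "{\"status\":\"UP\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });

        server.start();
        System.out.println("extracted service listening on port " + port);
    }
}
```

Packaging a process like this into a container image is then a purely
mechanical step.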

(5) Test

When you test, you need to look for problems at the level of
microservices and microservice interaction, at the level of
infrastructure (including container/VM deployment and resource use), and
at the overall application level. With a microservice-based application,
all of these are important, and each is likely to require its own set of
testing/monitoring tools and resources. When you detect a problem, it is
important to understand at what level that problem should be handled.
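
As a small, hypothetical example of a check at the microservice level
(the URL, endpoint and JUnit 4 setup are assumptions, not part of any
particular toolchain), a test can exercise a running service over HTTP
exactly the way its consumers would, leaving infrastructure-level and
application-level checks to their own tooling.

```java
// Hypothetical service-level test: it talks to a running instance of an
// extracted service over HTTP, just as another microservice would.
import org.junit.Test;

import java.net.HttpURLConnection;
import java.net.URL;

import static org.junit.Assert.assertEquals;

public class ExtractedServiceIT {

    // Where the service under test is listening; in CI this would point at
    // a container started for the test run (an assumption for this sketch).
    private static final String BASE_URL =
            System.getProperty("service.url", "http://localhost:8080");

    @Test
    public void healthEndpointReportsUp() throws Exception {
        HttpURLConnection connection =
                (HttpURLConnection) new URL(BASE_URL + "/health").openConnection();
        connection.setRequestMethod("GET");

        // Service-level contract: a 200 response means the service is healthy.
        assertEquals(200, connection.getResponseCode());
        connection.disconnect();
    }
}
```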

Conclusion

Refactoring for microservices may require some work, but it doesn’t
need to be difficult. As long as you approach the challenge with good
preparation and a clear understanding of the issues involved, you can
refactor effectively by making your app microservices-friendly without
redesigning it from the ground up.


New Machine Driver from cloud.ca!

Wednesday, 24 May, 2017

One of the great benefits of the Rancher container
management platform is that it runs on any infrastructure. While it’s
possible to add any Linux machine as a host using our custom setup
option, using one of the machine drivers in Rancher makes it especially
easy to add and manage your infrastructure.

Today, we’re pleased to
have a new machine driver available in Rancher, from our friends at
cloud.ca. cloud.ca is a regional cloud IaaS for
Canadian or foreign businesses requiring that all or some of their data
remain in Canada, for reasons of compliance, performance, privacy or
cost. The platform works as a standalone IaaS and can be combined with
hybrid or multi-cloud services, allowing a mix of private cloud and
other public cloud infrastructures such as Amazon Web Services. Having
the cloud.ca driver available within Rancher makes it that much easier
for our collective users to focus on building and running their
applications, while minding data compliance requirements. To access the
cloud.ca machine driver, navigate to the “Add Hosts” screen within
Rancher, select “Manage available machine drivers”. Click the arrow to
activate the driver; it’ll be easily available for subsequent
deployments. You can learn more about using the driver and Rancher
together on the cloud.ca blog.
If you’re headed to DevOps Days Toronto (May 25-26) as well, we
encourage you to visit the cloud.ca booth, where you
can see a demo in person! And as always, we’re happy to hear from
members of our community on how they’re using Rancher. Reach out to us
any time on our forums, or on Twitter
@Rancher_Labs!


Simplifying HPC System Software at Scale with Intel and SUSE – [Webinar]

Wednesday, 24 May, 2017

The internet age has delivered new, constantly flowing data streams that challenge the scientific community to keep up with the compute capabilities needed to crunch and analyze data at web scale. As data sets continue to grow and the notion of data science expands beyond the science community into new and uncharted commercial territory, the industry has found innovative ways to create HPC capabilities that scale without adding complexity.

The traditional approach of customizing system architecture and hardware depending on application and end-user requirements is great when time and resources are abundant – but the businesses and research groups of today are consistently looking for a faster time to value. Providing an integrated and validated stack for HPC system software can reduce complexity without sacrificing performance at scale.

Community Driven Innovation

As active participants in the OpenHPC community, SUSE and Intel are able to implement community-driven innovation that promotes component interoperability, system stability, and high scalability. This results in a reduced requirement for both broad and deep expertise in HPC systems integration and enhances accessibility and availability of powerful solutions that are validated as a complete and scalable software stack.

OpenHPC includes components for scalable provisioning, enabling HPC systems to manage the software configuration on tens to thousands of compute nodes; system health monitoring, keeping track of the hardware and software health of the compute nodes; and workload management, enabling the effective and efficient use of the system’s compute resources via on-demand and batch workflows. Intel® HPC Orchestrator is a combination of components from OpenHPC with SUSE Linux Enterprise Server that eases the path to exascale computing for businesses across a variety of industries.

Focus on Faster Results

Enable your team to focus on delivering faster results to more complex challenges with Intel® HPC Orchestrator. Manipulate massive data sets with an integrated hardware/software solution that’s powered by SUSE Linux Enterprise Server for HPC, the Linux that 97% of the fastest supercomputers in the world rely on.

Join this webinar for an overview of how this joint solution delivers:

  • A fully integrated stack for a quicker time to innovation
  • Joint support, so you can focus on what matters most
  • Incremental innovation in collaboration with the OpenHPC community

Register for the Webinar Today!