10,000 SAP Business One Users Are Now Running on SUSE!

Wednesday, 17 June, 2020

The number of customers running SAP Business One, version for SAP HANA, on SUSE Linux Enterprise Server has reached 10,000. Just two years ago the figure was “only” 6,000. As the math shows, that is a 67% increase in just two years!

SAP Business One running on SUSE Linux Enterprise Server is an integrated solution that helps small and midsize businesses run their operations. It gives them visibility into, and control over, many aspects of their business. It captures key business information and makes it instantly accessible, helping to streamline business processes and generate additional growth (see the deployment case studies).

Although SAP Business One is a solution for smaller companies, these customers are well aware that the strong partnership between SUSE and SAP guarantees them quality, reliable operation, and access to better services for their users. By running their SAP software for small and midsize businesses on the SUSE Linux platform, they take administrative chores off their plate and can focus simply on running their business.

Customers usually deploy SAP Business One with the help of SAP partners. SUSE, in turn, provides those partners with preconfigured software images and, through the SUSE Academy, technical training on Linux. Partners offering SAP Business One can access the Academy at partner.suse.com (free registration required). Other benefits for resellers in the SUSE partner program include additional discounts for registered deals and access to graphical tools that assist with installing SUSE systems and automate many of the tasks involved in deploying SAP Business One, version for SAP HANA, on the SUSE Linux Enterprise platform.

SUSE has prepared new images for installing SAP Business One on SUSE Linux Enterprise Server. They are already available for download at www.suse.com/slesb1hana. More information about the SUSE and SAP partnership can be found at www.suse.com/sap.

 

Accelerating Machine Learning with MLOps and FuseML: Part One

Sunday, 25 July, 2021

Building successful machine learning (ML) production systems requires a specialized re-interpretation of the traditional DevOps culture and methodologies. MLOps, short for machine learning operations, is a relatively new engineering discipline and a set of practices meant to improve the collaboration and communication between the various roles and teams that together manage the end-to-end lifecycle of machine learning projects.

Helping enterprises adapt and succeed with open source is one of SUSE’s key strengths. At SUSE, we have the experience to understand the difficulties posed by adopting disruptive technologies and accelerating digital transformation. Machine learning and MLOps are no different.

The SUSE AI/ML team has recently launched FuseML, an open source orchestration framework for MLOps. FuseML brings a novel holistic interpretation of MLOps advocated practices to help organizations reshape the lifecycle of their Machine Learning projects. It facilitates frictionless interaction between all roles involved in machine learning development while avoiding massive operational changes and vendor lock-in.

This is the first in a series of articles that provides a gradual introduction to machine learning, MLOps and the FuseML project. We start here by rediscovering some basic facts about machine learning and why it is a fundamentally atypical technology. In the next articles, we will look at some of the key MLOps findings and recommendations and how we interpret and incorporate them into the FuseML project principles.

MLOps Overview

Old habits that need changing can be difficult to unlearn, sometimes more difficult than learning everything anew. That is true for people, and it is even truer for teams and organizations, where the combined inertia that makes important changes difficult to implement is several orders of magnitude greater.

With the AI hype on the rise, organizations have been investing more and more in machine learning to make better and faster business decisions or automate key aspects of their operations and production processes. But if history taught us anything about adopting disruptive software technologies like virtualization, containerization and cloud computing, it’s that getting results doesn’t happen overnight. It often requires significant operational and cultural changes. With machine learning, this challenge is very pronounced, with more than 80 percent of AI projects failing to deliver business outcomes, as reported by Gartner in 2019 and repeatedly confirmed by business analysts and industry leaders throughout 2020 and 2021.

Naturally, following this realization about the challenges of using machine learning in production, a lot of effort went into investigating the “whys” and “whats” about this state of affairs. Today, the main causes of this phenomenon are better understood. A brand new engineering discipline – MLOps – was created to tackle the specific problems that machine learning systems encounter in production.

The recommendations and best practices assembled under the MLOps label are rooted in the recognition that machine learning systems have specialized requirements that demand changes in the development and operational project lifecycle and organizational culture. MLOps doesn’t propose to reinvent how we do DevOps with software projects. It’s still DevOps but pragmatically applied to machine learning.

MLOps ideas can be traced back to the defining characteristics of machine learning. The remainder of this article is focused on revisiting what differentiates machine learning from conventional programming. We’ll use the fundamental insights in this exercise as stepping stones when we dive deeper into MLOps in the next chapter of this series.

Machine Learning Characteristics

Solving a problem with traditional programming requires a human agent to formulate a solution, usually in the form of one or more algorithms, and then translate it into a set of explicit instructions that the computer can execute efficiently and reliably. Generally speaking, conventional programs, when correctly developed, are expected to give accurate results and to have highly predictable and easily reproducible behaviors. When a program produces an erroneous result, we treat that as a defect that needs to be reproduced and fixed. As a best practice, we also process conventional software through as much testing as possible before deploying it in production, where the business cost incurred for a defect could be substantial. We rely on the results of proactive testing to give us some guarantees about how the program will behave in the future, another characteristic derived from the predictability aspect of conventional software. As a result, once released, a software product is expected to take significantly less effort to maintain compared to development.

Some of these statements are highly generic; one might say they could describe products in general, software or otherwise. What they have in common is that none of them holds entirely true when applied to machine learning.

Machine learning algorithms are distinguished by their ability to learn from experience (i.e., from patterns in input data) to behave in a desired way, rather than being programmed to do so through explicit instructions. Human interaction is only required during the so-called training phase when the ML algorithm is carefully calibrated and data is fed into it, resulting in a trained program, also called an ML model. With proper automation in place, it may even seem that human interaction could be eliminated. Still, as we’ll see later in this post, it’s just that the human responsibilities shift from programming to other activities, such as data collection and processing and ML algorithm selection, tuning and monitoring.

Machine learning can be used to solve a specific class of problems:

  • the problem is extremely difficult to solve mathematically or programmatically, or it has only solutions that are too computationally expensive to be practical
  • a fair amount of data exists (or can be generated) containing a pattern that an ML algorithm can learn

Let’s look at two examples, similar but situated at opposite ends of the spectrum as far as utility is concerned.

Sum of Two Numbers

A very simple example, albeit with no practical application whatsoever, is training an ML model to calculate the sum of two real numbers. Doing this with conventional programming is trivial and always yields very accurate results.
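
For contrast, the conventional-programming version is a one-line function whose results are exact (a trivial sketch; the function name is our own):

```python
def add(x, y):
    # Explicit instruction: the result is always exact
    # (up to floating-point precision), with no training involved.
    return x + y

print(add(4, 5))        # 9
print(add(2000, 3000))  # 5000
```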

Training and using an ML model for the same task could be summarized by the following phases:

Data Preparation

First, we need to prepare the input data that will be used to train the ML model. Generally speaking, training data is structured as a set of entries. Each entry associates a concrete set of values used as input for the target problem with the correct answer (sometimes known as a target or label in ML terms). In our example, each entry maps a pair of real input values (X, Y) to the desired result (X+Y) that we expect the model to learn to compute. For this purpose, we can generate the training data entirely using conventional programming, although it is often the case with machine learning that training data is not readily available and is expensive to acquire and prepare. The code used to generate the input dataset could look like this:

import numpy as np

# Each entry maps a pair of inputs (X, Y) to its label X + Y.
# Note that every generated pair consists of two equal odd numbers
# (1, 3, 5, ... 9999); this detail becomes important later on.
train_data = np.array([[float(i), float(i)] for i in range(1, 10000, 2)])
train_targets = train_data.sum(axis=1)

Deciding what kind of data is needed, how much of it and how it needs to be structured and labeled to yield acceptable results during ML training is the realm of data science. The data collection and preparation phase is critical to ensuring the success of ML projects. It takes experimentation and experience to find out which approach yields the best result, and data scientists often need to iterate several times through this phase and improve the quality of their training data to raise the accuracy of ML models.

Model Training

Next, we need to define the ML algorithm and train it (also known as fitting) on the input data. For our goal, we can use an Artificial Neural Network (ANN) suitable for this type of problem (regression). The code for it could look like this:

import tensorflow as tf
from tensorflow import keras

# A small feed-forward network suitable for this regression problem:
# two inputs, two hidden layers of 20 ReLU units each, one output.
model = keras.Sequential([
  keras.layers.Flatten(input_shape=(2,)),
  keras.layers.Dense(20, activation=tf.nn.relu),
  keras.layers.Dense(20, activation=tf.nn.relu),
  keras.layers.Dense(1)
])

# Mean squared error as the loss; mean absolute error reported as a metric.
model.compile(optimizer='adam', loss='mse', metrics=['mae'])

# Fit (train) the model on the generated dataset.
model.fit(train_data, train_targets, epochs=10, batch_size=1)

Similar to data preparation, deciding which ML algorithm to use and which values to configure for its parameters (e.g., the neural network architecture, optimizer, loss, epochs) requires specific ML knowledge and iterative experimentation. By now, however, ML is mature enough that finding an algorithm to fit the problem is rarely difficult: countless open source libraries, examples, ready-to-use ML models and documented use-case patterns and recipes are available as starting points for all major classes of problems that can be solved with ML. Moreover, many of the decisions and activities required to develop a high-performing ML model (e.g., hyper-parameter tuning, neural architecture search) can already be fully automated, or accelerated through partial automation, by a special category of tools called AutoML.

Model Prediction

We now have a trained ML model that we can use to calculate the sum of any two numbers (i.e. make predictions):

def sum(x, y):
  s = model.predict([[x, y]])[0][0]
  print("%f + %f = %f" % (x, y, s))

The first thing to note is that the summation results produced by the trained model are not at all accurate. It’s fair to say that the ML model is not behaving like it’s calculating the result, but more like it’s giving a ballpark estimation of what the result might be, as shown in this set of examples:

# sum(2000, 3000)
2000.000000 + 3000.000000 = 4857.666992
# sum(4, 5)
4.000000 + 5.000000 = 9.347977

Another notable characteristic is that, as we move further away from the pattern of values on which the model was trained, the model’s predictions get worse. In other words, the model is better at estimating summation results for input values that are more similar to the examples on which it was trained:

# sum(10, 10000)
10.000000 + 10000.000000 = 8958.944336
# sum(1000000, 4)
1000000.000000 + 4.000000 = 1318969.375000
# sum(4, 1000000)
4.000000 + 1000000.000000 = 895098.750000
# sum(0.1, 0.1)
0.100000 + 0.100000 = 0.724608
# sum(0.01, 0.01)
0.010000 + 0.010000 = 0.549576

This phenomenon is well known to ML engineers. If not properly understood and addressed, it can lead to ML-specific problems that take various forms and names:

  • bias: using incomplete, faulty or prejudicial data to train ML models that end up producing biased results
  • training-serving skew: training an ML model on a dataset that is not representative of the real-world conditions in which the ML model will be used
  • data drift, concept drift or model decay: the degradation, in time, of the model quality, as the real-world data used for predictions changes to the point where the initial assumptions on which the ML model was trained are no longer valid

In our case, it’s easy to see that the model is performing poorly due to a skew situation: we inadvertently trained the model on pairs of equal numbers, which is not representative of the real-world conditions in which we want to use it. Our model also completely missed the point that addition is commutative, but that’s not surprising, given that we didn’t use training data representative of this property either.
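
As a hedged sketch of how that skew could be removed in the data preparation step (the variable names and value range are illustrative assumptions, not from the original exercise), the two addends can be sampled independently so the training set covers the kinds of pairs the model will actually see:

```python
import numpy as np

rng = np.random.default_rng(42)

# Sample X and Y independently over the range we expect at prediction
# time, instead of always feeding equal pairs (X, X).
n = 10000
x = rng.uniform(0.0, 10000.0, size=n)
y = rng.uniform(0.0, 10000.0, size=n)

train_data = np.stack([x, y], axis=1)  # shape (n, 2): input pairs (X, Y)
train_targets = x + y                  # shape (n,): labels X + Y
```

Training on such pairs also exposes the model to examples where X > Y as well as X < Y, which helps it approximate the commutativity it previously missed.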

When developing ML models to solve complex, real-world problems, detecting and fixing this type of problem is rarely that simple. Machine learning is as much an art as it is a science and engineering endeavor.

In training ML models, there is usually also a validation step involved, where the labeled input data is split, and part of it is used to test the trained model and calculate its accuracy. This step is intentionally omitted here for the sake of simplicity. The full exercise of implementing this example, with complete code and detailed explanations, is covered in this article.
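
For completeness, the omitted validation step usually amounts to holding out part of the labeled data; a minimal sketch follows (the 80/20 split ratio and variable names are our assumptions, not from the original exercise):

```python
import numpy as np

# Labeled dataset, generated as in the data preparation step.
data = np.array([[float(i), float(i)] for i in range(1, 10000, 2)])
targets = data.sum(axis=1)

# Shuffle the entry indices, then hold out 20% of them for validation.
rng = np.random.default_rng(0)
idx = rng.permutation(len(data))
split = int(0.8 * len(data))
train_idx, val_idx = idx[:split], idx[split:]

x_train, y_train = data[train_idx], targets[train_idx]
x_val, y_val = data[val_idx], targets[val_idx]

# With Keras, the held-out set would be passed to fit(), e.g.:
# model.fit(x_train, y_train, epochs=10, batch_size=1,
#           validation_data=(x_val, y_val))
```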

The Three-Body Problem

At the other end of the spectrum is a physics (classical mechanics) problem that inspired one of the greatest mathematicians of all time, Isaac Newton, to invent an entirely new branch of mathematics, one that remains a source of constant frustration among high school students to this day: Calculus.

Finding the solution to the set of equations that describe the motion of two celestial bodies (e.g., the Earth and the Moon) given their initial positions and velocities is already a complicated problem. Extending the problem to include a third body (e.g., the Sun) complicates things to the point where a solution cannot be found, and the entire system starts behaving chaotically. With no mathematical solution in sight, Newton himself felt that supernatural powers had to be at play to account for the apparent stability of our solar system.

This problem and its generalized form, the many-body problem, are famous because solving them is a fundamental part of space travel, space exploration, cosmology and astrophysics. Partial solutions can be calculated using analytical and numerical methods, but doing so requires immense computational power.

All life forms on this planet constantly deal with gravity. We are well equipped to learn from experience, and we are able to make fairly accurate predictions about its effects on our bodies and the objects we interact with. It is therefore not entirely surprising that Machine Learning can estimate the motion of objects under the effect of gravity.

Using Machine Learning, researchers at the University of Edinburgh have been able to train an ML model capable of solving the three-body problem 100 million times faster than traditional means. The full story covering this achievement is available here, and the original scientific paper can be read here.

Solving the three-body problem with ML is similar to our earlier trivial example of adding two numbers together. The training and validation datasets are also generated through simulation, and an ANN is also involved here, albeit one with a more complex structure. The main differences are the complexity of the problem and ML’s immediate practical application to this use case. However, the observations previously stated about general ML characteristics apply equally to both cases, regardless of complexity and utility.

Conclusion

We haven’t even begun to look at MLOps in detail. Still, we can already identify and summarize key takeaways representative of ML in general just by comparing classical programming to Machine Learning:

  1. Not all problems are good candidates for machine learning
  2. The process of developing ML models is iterative, exploratory and experimental
  3. Developing a machine learning system requires dealing with new categories of artifacts with specialized behaviors that don’t fit the patterns of conventional software
  4. It’s usually not possible to produce fully accurate results with ML models
  5. Developing and working with machine learning based systems requires a specialized set of skills, in addition to those needed for traditional software engineering
  6. Running ML systems in the real world is far less predictable than what we’re used to with regular software
  7. Finally, developing ML systems would be next to impossible without specialized tools

Machine Learning characteristics summarized here are reflected in the MLOps discipline and distilled in the principles on which we based the FuseML orchestration framework project. The next article will give a detailed account of MLOps recommendations and how an MLOps orchestration framework like FuseML can make developing and operating ML systems an automated and frictionless experience.


The Business Case for Container Adoption

Tuesday, 2 April, 2019

Developers often believe that demonstrating the need for an IT-based solution should be very easy. They should be able to point to the business problem that needs a solution, briefly explain what technology should be selected, and the funds, staff, and computer resources will be provided by the organization. Unfortunately, this is seldom the actual process that is followed.

Developing a Business Case for New Technology Isn’t Always Easy

Most organizations require that both a business and a technical case be made before a project can be approved. Depending on the size and culture of the organization, building both cases can be a long, and sometimes arduous, process.

Part of the challenge developers face can be summed up simply: business decision-makers and technical decision-makers have different priorities, use different metrics, and, in short, think differently.

Business Managers Think in Different Terms Than Developers

Business decision-makers are almost always thinking in terms of the investment required, the costs expected, and the revenues that can be attributed to the successful completion of the project, not the technical merit, the tools selected, or the development methodology that will be followed to complete the project.

They may use technology every day, but many think of it as a means to an end, not something they enjoy using.

As David Ingram pointed out in his recent article on business decision making, managers often use a 7-step process:

  1. Identify the problem
  2. Seek information to clarify what’s actually happening
  3. Brainstorm potential solutions
  4. Weigh the alternatives
  5. Choose an alternative
  6. Implement the chosen plan
  7. Evaluate the outcome

You’ll note that the best technology, the best approach to development, the best platform, how to achieve the best performance, how to achieve the highest levels of availability, and other technical factors that technologists consider may be seen as secondary issues. From the perspective of a business decision-maker, the extensive work that constitutes this type of evaluation might all be wrapped up into the “weigh the alternatives” step.

Factors of the Business Decision

Let’s break this down a bit. Business decision-makers will consider the overall investment required and weigh it against the potential benefits that might be received. This includes a number of factors that may not appear to be directly associated with a specific project.

They will also be considering whether this is the right project to be addressing at this time or whether other issues are more pressing.

While working with an executive at a major IT supplier, I was once told “solving the wrong problem, no matter how efficiently and well-done, is still solving the wrong problem.”

Here are a few of the factors they are likely to consider:

  • Staff: the number of staff, the levels of expertise, the amount of time they’ll need to be assigned to the project, the business overhead associated with having those people on staff, whether they should be full-time, part-time, or contractors
  • Costs: the costs of all resources required, including:
    • Data center operational costs: floor space, power, air conditioning, networking, maintenance, real estate
    • Systems: number of systems, memory required, external storage, maintenance
    • Software: software licenses, software maintenance
  • Time to market: can this project be completed quickly enough to address the needs of the market? This is sometimes called “time to profit.”
  • Revenues: will the project directly or indirectly lead to increased revenues?

If the costs of doing the project outweigh the projected revenues that can be attributed to the completion of the project, the business decision-makers are likely to look for another solution which may include not doing it at all, purchasing a packaged software product that will solve the problem in a general way, or subscribing to an online service that will address the issue.

In the end, business decision-makers will be focused on increasing the organization’s revenues and decreasing its costs.

What Developers Think About

Developers, on the other hand, tend to think more about the technical problem in front of them and how it can be solved.

What Needs to Be Accomplished

Often, a developer’s first consideration is to fully understand what needs to be accomplished to address the situation. It is quite possible that the developers will be unable to focus on the issues in a way that takes into account the needs of the whole organization. This siloed perspective sometimes results in several business units solving the same problem in different, and sometimes incompatible, ways.

How It Can Be Accomplished

The next consideration for developers is how a solution can be accomplished. Developers are very busy people and need to get things done quickly and efficiently. This often means that they select the development tools and methodology they are most familiar with rather than casting about for new, and potentially better, approaches. The result is that, from an outsider’s perspective, developers will select the same tool regardless of whether it is the best one for the job. As Abraham Maslow pointed out, “I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail” (The Psychology of Science, 1966).

How To Systematize or Automate Solutions

Developers have a tendency to also focus on how to systematize or automate the approach to accomplishing a solution. Developers who have experience introducing new systems will not only consider how to accomplish this difficult task, but also whether the current manual processes have some merit as well.

Costs Are Often Ignored or Secondary to Other Considerations

Developers often do not have access to reports showing the overall costs, the investment required, or even the revenues of a given project. Since they are busy working on projects, they often don’t think about those factors at all. This situation, by the way, is the root of many communication challenges faced when developers are attempting to persuade business decision-makers to approve a project. They don’t have all of the data they need.

I’m reminded of a conversation with the CFO of my company, who didn’t understand the need for a different type of database than the one the company used for another purpose. At first I thought of him as “a man who knows the price of everything and the value of nothing,” to quote Oscar Wilde.

After thinking about his comments, I built a different justification that focused on speaking to him in his own language by discussing the project in terms of the investment required, the costs that were going to be incurred, and the revenue potential the new approach would provide. It took some work to obtain that information, but it was worth the effort in the end.

It was only after a longer conversation with the CFO that he began to be able to understand why Lotus Notes wasn’t the best tool for the creation of a transaction-oriented system for research and analysis.

Are you speaking to your business decision-makers using acronyms, development procedures and the names of open source projects you’d like to deploy? If so, you’re not helping your cause.

Where to Start

A good place to start is to think in terms of where and how money can be saved, where and how previous investments can be enhanced or reused rather than being discarded, and how your proposed project would result in increased opportunities for revenue.

It would also be wise to offer a vision of how the use of containers will help the organization achieve its overall goals, including factors such as:

  • Scaling to address the needs of a larger, or at least a new, market
  • Reducing overall IT costs
  • Allowing the organization to adapt rapidly to a changing environment and take advantage of emerging opportunities
  • Quickly developing new products or services
  • Reaching new customers while maintaining relationships with today’s customer base

For Many Companies Adoption of Containers Must Be Carefully Justified

The move to a container-based environment is one of those journeys that developers can easily understand as beneficial but that can be challenging to justify to a business decision-maker.

After all, some things aren’t fully known until they’ve been done at least once. So, quantifying investments required, cost savings that will be realized, and the actual size of revenue increases can be difficult.

What can be said is that adopting containers can reduce costs and reduce risk by supporting rapid and inexpensive prototyping of solutions. Pointing out that this prototyping can be done in inexpensive cloud computing services, rather than by acquiring new systems, helps them understand that you are focused on meeting your objectives while still helping the organization keep costs under control. Tell the business decision-makers that this approach also offers them a choice in the future: once something is developed, documented, and proven able to do the job, it can either stay where it is or be moved in-house, depending upon which is the best overall business decision.

Where Can Containers Help a Company Reduce Costs?

Developers understand that being able to decompose a problem into smaller, more manageable problems can improve their efficiency, reduce their time-to-solution, and make reuse of code and services easier.

Reducing the Number of Operating System Instances to Maintain

Explain that containerized applications need fewer copies of operating systems than virtual machine technology does, along with less processor power, less system memory, and less external storage. Developers can speak in terms of reduced system requirements and how they result in direct savings that business decision-makers can appreciate.

A few related factors are helpful to bring up as well. This approach reduces the number of software licenses that are required and the cost of software maintenance agreements.

Increasing the Amount of Useful Work Systems Can Accomplish

Since the systems won’t be carrying the heavy weight of unneeded operating systems for each application component or service, performance should be improved. After all, switching from one container to another is much faster than switching from one VM to another. There is no need to roll huge images into and out of storage.

Improving Productivity

Since productivity is important to most organizations, show that a move to containers is a great foundation for the use of a rapid application development and deployment (DevOps) strategy. By decomposing applications into functions, application development can be faster because functions are easier to build, document, and support. This should result in lower development costs while improving overall time to solution.

This approach also can reduce the time to deployment because functions can be developed in parallel by smaller independent teams.

Improving Application Capabilities

Adopting a container-based approach provides a number of other benefits that should be mentioned as well, including:

  • Container management and automation functions are improving all the time which should result in lower costs of administration and operations
  • Container workload management and migration technology is also improving all the time which should result in higher levels of application availability, higher levels of performance, and fewer losses due to downtime
  • Decomposing applications into independent functions and services also makes them easier to develop and maintain which should reduce the costs of development, support, and operations

Facilitating a Move to the Cloud

Most business decision-makers have read about cloud computing, but don’t really understand how it can be adopted. Help them understand that the adoption of containers can facilitate the organization’s ability to deploy functions or complete applications locally, in the cloud, or in a combined hybrid environment, quickly and easily.

So, the answer to the question of whether to move to the cloud or continue on-premise computing is “yes, both.”

Reducing Time to Profit

When the business decision-maker begins to understand the business benefits of containerization, they’ll also see that this approach not only can reduce the overall time to market for applications, but, more importantly, it can reduce the time to profit. Lower development and support costs combined with rapid development can lead to quicker streams of revenue and profit.

Establishing a Foundation for the Future

It is also helpful for the business decision-maker to understand that one of your goals is establishing a platform for the future. Containers are supported in many different computing environments and by many different suppliers, and the organization benefits from that breadth of support.

Some of those benefits are:

  • Containerized functions can be used as part of many applications without having to be rearchitected or redeveloped
  • They can be enhanced or updated as needed without requiring other, unrelated functions to be changed
  • Support of the application can be easier and less costly
  • Scalability is improved, since the same functions can be run in multiple places with the help of workload management technology
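As a concrete sketch of that last point, a workload manager such as Kubernetes can scale the same containerized function out simply by declaring a replica count. The names and image below are illustrative placeholders, not anything from a real deployment:

```yaml
# Scaling a containerized function by declaring replicas; the workload
# manager (Kubernetes here) spreads the copies across available nodes.
# All names and the image are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pricing-function
spec:
  replicas: 3                        # run three identical copies
  selector:
    matchLabels:
      app: pricing-function
  template:
    metadata:
      labels:
        app: pricing-function
    spec:
      containers:
        - name: pricing
          image: example/pricing:1.0 # hypothetical image
          ports:
            - containerPort: 8080
```

Changing the replica count is then a one-line edit rather than a rearchitecture.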

How Can Containers Help a Company Increase Revenue?

A key question to consider is how adopting containers can help the company increase its revenue. Several elements address that question, both directly and indirectly.

Since applications can be developed more quickly, perform better, and be supported more easily, the organization can respond more effectively to a rapidly changing business and regulatory environment. It can also capture market share from organizations that continue to rely solely on older approaches to information systems.

It also means that the organization can conduct experiments and prototype solutions quickly, so it can succeed or fail faster and accelerate organizational learning.

Where an application or its components execute is flexible: a successful solution can run locally, in the cloud, or in both places as needed. Business decision-makers usually appreciate flexible solutions that don't impose extra costs.

This approach also ensures that the resulting solutions can scale from small to large as needed, so organizations can try something new knowing that, if it succeeds, it can be put into production effectively. Business decision-makers are often encouraged by approaches that allow a low initial investment, with opportunities for growth as revenues increase, rather than forcing a heavy investment up front. This exposes the organization to lower levels of risk.

Summary

Adopting a container-focused approach can be beneficial to both technical and business decision-makers because it addresses the needs for rapid and effective solution development and reduction in overall costs and risks. It also results in a foundation for future growth and the ability to address a changing market.

This approach brings greater complexity along with it, but in many environments the benefits outweigh the challenges. The rapid improvement in container management and automation, together with strong industry support for the approach, makes it a safer choice.

If developers help business decision-makers understand how this approach also lowers costs and improves time to market and time to profit, the business side is likely to get on board more quickly. They are likely to appreciate the reduced costs of solution support, operations, and development, and to be pleased that future investment can be tied to revenue production rather than committed up front on the strength of a rosy forecast.

Developing a Strategy for Kubernetes Adoption

Like containers, Kubernetes sits at the intersection of DevOps and ITOps, and many organizations are trying to answer key questions such as: who should own Kubernetes, how many clusters to deploy, how to deliver it as a service, how to build a security policy, and how much standardization is critical for adoption. Rancher co-founder Shannon Williams discusses these questions and more in the free online class Building an Enterprise Kubernetes Strategy.


Rancher 2.2 Hits the GA Milestone

Tuesday, 26 March, 2019
Expert Training in Kubernetes and Rancher
Join our free online training sessions to learn more about Kubernetes, containers, and Rancher.

We released version 2.2.0 of Rancher today, and we’re beyond excited. The latest release is the culmination of almost a year’s work and brings new features to the product that will make your Kubernetes installations more stable and easier to manage.

When we released Preview 1 in December and Preview 2 in February, we
covered their features extensively in blog articles, meetups, videos,
demos, and at industry events. I won’t make this an article that
rehashes what others have already written, but in case you haven’t seen
the features we’ve packed into this release, I’ll do a quick recap.

Rancher Global DNS

There’s a telco concept of the “last mile,” which is the final
communications link between the infrastructure and the end user. If
you’re all in on Kubernetes, then you’re using tools like CI/CD or some
other automation to deploy workloads. Maybe it’s only for testing, or
maybe your teams have full control over what they deploy.

DNS is the last mile for Kubernetes applications. No one wants to deploy
an app via automation and then go manually add or change a DNS record.

Rancher Global DNS solves this by provisioning and maintaining an
external DNS record that corresponds to the IP addresses of the
Kubernetes Ingress for an application. This, by itself, isn’t a new
concept, but Rancher will also do it for applications deployed to
multiple clusters.

Imagine what this means. You can now deploy an app to as many clusters
as you want and have DNS automatically update to point to the Ingress
for that application on all of them.
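In Rancher 2.2 this is driven by a global DNS entry that binds an external provider to a multi-cluster application. The sketch below conveys the shape of such an entry; the field names are assumptions for illustration, so check the Rancher 2.2 documentation for the exact schema:

```yaml
# Illustrative sketch only -- field names are assumptions, not the
# verified Rancher 2.2 schema.
apiVersion: management.cattle.io/v3
kind: GlobalDNS
metadata:
  name: myapp-dns
spec:
  fqdn: myapp.example.com          # record kept in sync with Ingress IPs
  providerName: route53-provider   # a DNS provider configured in Rancher
  multiClusterAppName: myapp       # the app deployed to multiple clusters
```

The point is that the DNS record is declared once alongside the app, and Rancher keeps it current as Ingress endpoints change across clusters.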

Rancher Cluster BDR

This is probably my favorite feature in Rancher 2.2. I’m a huge fan of
backup and disaster recovery (BDR) solutions. I’ve seen too many things
fail, and when I know I have backups in place, failure isn’t a big deal.
It’s just a part of the job.

When Rancher spins up a cluster on cloud compute instances, vSphere, or
via the Custom option, it deploys Rancher Kubernetes Engine (RKE).
That’s the CNCF-certified Kubernetes distribution that Rancher
maintains.

Rancher 2.2 adds support for backup and restore of the etcd datastore
directly into the Rancher UI/API and the Kubernetes API. It also adds
support for S3-compatible storage as the endpoint, so you can
immediately get your backups off of the hosts without using NFS.

When the unthinkable happens, you can restore those backups directly
into the cluster via the UI.

You’ve already been making snapshots of your cluster data and moving
them offsite, right? Of course you have… but just in case you haven’t,
it’s now so easy to do that there’s no reason not to do it.
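For RKE-provisioned clusters, the recurring snapshot and S3 settings can also be expressed declaratively in cluster.yml. The option names below follow RKE's documented etcd backup configuration, but verify them against your RKE version, and treat the bucket and credential values as placeholders:

```yaml
# Recurring etcd snapshots shipped to S3-compatible storage (sketch;
# confirm option names against your RKE version's documentation).
services:
  etcd:
    backup_config:
      interval_hours: 12            # take a snapshot every 12 hours
      retention: 6                  # keep the 6 most recent snapshots
      s3backupconfig:
        access_key: S3_ACCESS_KEY   # placeholder credential
        secret_key: S3_SECRET_KEY   # placeholder credential
        bucket_name: rke-etcd-backups
        endpoint: s3.amazonaws.com  # any S3-compatible endpoint works
```

Because the endpoint only needs to be S3-compatible, the same stanza covers AWS S3, MinIO, and similar object stores.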

Rancher Advanced Monitoring

Rancher has always used Prometheus for monitoring and alerts. This
release enables Prometheus to reach even further into Kubernetes and
deliver even more information back to you. One of the flagship features
in Rancher is single-cluster multi-tenancy, where one or more users have
access to a Project and can only see the resources within that Project,
even if there are other users or other Projects on the cluster.

Rancher Advanced Monitoring deploys Prometheus and Grafana in a way that
respects the boundaries of a multi-tenant environment. Grafana installs
with pre-built cluster and Project dashboards, so once you check the box
to activate the advanced metrics, you’ll be looking at useful graphs a
few minutes later.

Rancher Advanced Monitoring covers everything from the cluster nodes to
the Pods within each Project, and if your application exposes its own
metrics, Prometheus will scrape those and make them available for you to
use.
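For the last point, the common convention for exposing application metrics to Prometheus is to annotate the Pod so the scrape target is discovered automatically. These prometheus.io annotations are a widely used default rather than a guarantee, so confirm that your Prometheus scrape configuration honors them; the Pod name and image are placeholders:

```yaml
# Opting a Pod in to Prometheus scraping via the common
# prometheus.io annotation convention (verify your scrape config).
apiVersion: v1
kind: Pod
metadata:
  name: metrics-demo               # placeholder name
  annotations:
    prometheus.io/scrape: "true"   # opt this Pod in to scraping
    prometheus.io/port: "9090"     # port serving the metrics endpoint
    prometheus.io/path: "/metrics" # path of the metrics endpoint
spec:
  containers:
    - name: app
      image: example/app:latest    # hypothetical image exposing /metrics
      ports:
        - containerPort: 9090
```

Once scraped, those application metrics show up alongside the cluster and Project dashboards described above.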

Multi-Cluster Applications

Rancher is built to manage multiple clusters. It has a strong
integration with Helm via the Application Catalog, which takes Helm’s
key/value YAML and turns it into a form that anyone can use.

In Rancher 2.2 the Application Catalog also exists at the Global level,
and you can deploy apps via Helm simultaneously to multiple Projects in
any number of clusters. This saves a tremendous amount of time for
anyone who has to maintain applications in different environments,
particularly when it’s time to upgrade all of those applications.
Rancher will batch upgrades and rollbacks using Helm’s features for
atomic releases.

Because multi-cluster apps are built on top of Helm, they’ll work out of
the box with CI/CD systems or any other automated provisioner.
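Conceptually, a multi-cluster app is a single object that names a Helm chart version, a set of target projects across clusters, and shared answers (Helm values). The sketch below is illustrative only: the field names and IDs are assumptions, so consult the Rancher 2.2 API reference for the real schema:

```yaml
# Illustrative multi-cluster app sketch -- field names and IDs are
# assumptions, not the verified Rancher 2.2 schema.
apiVersion: management.cattle.io/v3
kind: MultiClusterApp
metadata:
  name: wordpress
spec:
  templateVersionName: library-wordpress-2.1.10  # catalog chart version
  targets:                        # one entry per target project/cluster
    - projectName: c-aaaaa:p-bbbbb   # placeholder project IDs
    - projectName: c-ccccc:p-ddddd
  answers:                        # Helm values shared by every target
    - values:
        wordpressUsername: admin
```

Upgrading the chart version in one place then rolls out to every target, which is exactly the batch upgrade/rollback behavior described above.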

Multi-Tenant Catalogs

In earlier versions of Rancher the configuration for the Application
Catalog and any external Helm repositories existed at the Global level
and propagated to the clusters. This meant that every cluster had access
to the same Helm charts, and while that worked for most installations,
it didn’t work for all of them.

Rancher 2.2 has cluster-specific and project-specific configuration for
the Application Catalog. You can remove it completely, change what a
particular cluster or project has access to, or add new Helm
repositories for applications that you’ve approved.

Conclusion

The latest version of Rancher gives you the tools that you need for “day
two” Kubernetes operations — those tasks that deal with the management
and maintenance of your clusters after launch. Everything focuses on
reliability, repeatability, and ease of use, because using Rancher is
about helping your developers accelerate innovation and drive value for
your business.

Rancher 2.2 is available now for deployment in dev and staging environments as rancher/rancher:latest. Rancher recommends that production environments hold out for rancher/rancher:stable before upgrading, and that tag will be available in the coming days.

If you haven’t yet deployed Rancher, now is a great time to start! With two easy steps you can have Rancher up and running, ready to help you manage Kubernetes.

Join the Rancher 2.2 Online Meetup on April 3rd

To kick off this release and explain in detail each of these new, powerful features, we’re hosting an Online Meetup on April 3rd. It’s free to join and there will be live Q&A with the engineers who directly worked on the project. Get your spot here.


Continuous Delivery of Everything with Rancher, Drone, and Terraform

Wednesday, 16 August, 2017

It’s 8:00 PM. I just deployed to production, but nothing’s working.
Oh, wait. The production Kinesis stream doesn’t exist, because the
CloudFormation template for production wasn’t updated.
Okay, fix that.
9:00 PM. Redeploy. Still broken. Oh, wait. The production config file
wasn’t updated to use the new database.
Okay, fix that. Finally, it
works, and it’s time to go home. Ever been there? How about the late
night when your provisioning scripts work for updating existing servers,
but not for creating a brand new environment? Or, a manual deployment
step missing from a task list? Or, a config file pointing to a resource
from another environment? Each of these problems stems from separating
the activity of provisioning infrastructure from that of deploying
software, whether by choice, or limitation of tools. The impact of
deploying should be to allow customers to benefit from added value or
validate a business hypothesis. In order to accomplish this,
infrastructure and software are both needed, and they normally change
together. Thus, a deployment can be defined as:

  • reconciling the infrastructure needed with the infrastructure that
    already exists; and
  • reconciling the software that we want to run with the software that
    is already running.

With Rancher, Terraform, and Drone, you can build continuous delivery
tools that let you deploy this way. Let’s look at a sample system: a
simple architecture with one server running two microservices,
happy-service and glad-service.
When a deployment is triggered, you want the ecosystem to match this
picture, regardless of what its current state is. Terraform is a tool
that allows you to predictably create and change infrastructure and
software. You describe individual resources, like servers and Rancher
stacks, and it will create a plan to make the world match the resources
you describe. Let’s create a Terraform configuration that creates a
Rancher environment for our production deployment:

provider "rancher" {
  api_url = "${var.rancher_url}"
}

resource "rancher_environment" "production" {
  name = "production"
  description = "Production environment"
  orchestration = "cattle"
}

resource "rancher_registration_token" "production_token" {
  environment_id = "${rancher_environment.production.id}"
  name = "production-token"
  description = "Host registration token for Production environment"
}

Terraform has the ability to preview what it’ll do before applying
changes. Let’s run terraform plan.

+ rancher_environment.production
    description:   "Production environment"
    ...

+ rancher_registration_token.production_token
    command:          "<computed>"
    ...

The pluses and green text indicate that the resource needs to be
created. Terraform knows that these resources haven’t been created yet,
so it will try to create them. Running terraform apply creates the
environment in Rancher. You can log into Rancher to see it. Now let’s
add an AWS EC2 server to the environment:

# A lookup of RancherOS AMIs by region
variable "rancheros_amis" {
  default = {
      "ap-south-1" = "ami-3576085a"
      "eu-west-2" = "ami-4806102c"
      "eu-west-1" = "ami-64b2a802"
      "ap-northeast-2" = "ami-9d03dcf3"
      "ap-northeast-1" = "ami-8bb1a7ec"
      "sa-east-1" = "ami-ae1b71c2"
      "ca-central-1" = "ami-4fa7182b"
      "ap-southeast-1" = "ami-4f921c2c"
      "ap-southeast-2" = "ami-d64c5fb5"
      "eu-central-1" = "ami-8c52f4e3"
      "us-east-1" = "ami-067c4a10"
      "us-east-2" = "ami-b74b6ad2"
      "us-west-1" = "ami-04351964"
      "us-west-2" = "ami-bed0c7c7"
  }
  type = "map"
}


# this creates a cloud-init script that registers the server
# as a rancher agent when it starts up
resource "template_file" "user_data" {
  template = <<EOF
#cloud-config
write_files:
  - path: /etc/rc.local
    permissions: "0755"
    owner: root
    content: |
      #!/bin/bash
      for i in {1..60}
      do
      docker info && break
      sleep 1
      done
      sudo docker run -d  --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.1 $${registration_url}
EOF

  vars {
    registration_url = "${rancher_registration_token.production_token.registration_url}"
  }
}

# AWS ec2 launch configuration for a production rancher agent
resource "aws_launch_configuration" "launch_configuration" {
  provider = "aws"
  name = "rancher agent"
  image_id = "${lookup(var.rancheros_amis, var.terraform_user_region)}"
  instance_type = "t2.micro"
  key_name = "${var.key_name}"
  user_data = "${template_file.user_data.rendered}"

  security_groups = [ "${var.security_group_id}"]
  associate_public_ip_address = true
}


# Creates an autoscaling group of 1 server that will be a rancher agent
resource "aws_autoscaling_group" "autoscaling" {
  availability_zones        = ["${var.availability_zones}"]
  name                      = "Production servers"
  max_size                  = "1"
  min_size                  = "1"
  health_check_grace_period = 3600
  health_check_type         = "ELB"
  desired_capacity          = "1"
  force_delete              = true
  launch_configuration      = "${aws_launch_configuration.launch_configuration.name}"
  vpc_zone_identifier       = ["${var.subnets}"]
}

We’ll put these in the same directory as environment.tf, and run
terraform plan again:

+ aws_autoscaling_group.autoscaling
    arn:                            ""
    ...

+ aws_launch_configuration.launch_configuration
    associate_public_ip_address: "true"
    ...

+ template_file.user_data
    ...

This time, you’ll see that the rancher_environment resource is missing
from the plan. That’s because it was already created, and Terraform
knows it doesn’t have to create it again. Run terraform apply, and
after a few minutes, you should see a server show up in Rancher.
Finally, we want to deploy the happy-service and glad-service onto this
server:

resource "rancher_stack" "happy" {
  name = "happy"
  description = "A service that's always happy"
  start_on_create = true
  environment_id = "${rancher_environment.production.id}"

  docker_compose = <<EOF
    version: '2'
    services:
      happy:
        image: peloton/happy-service
        stdin_open: true
        tty: true
        ports:
            - 8000:80/tcp
        labels:
            io.rancher.container.pull_image: always
            io.rancher.scheduler.global: 'true'
            started: $STARTED
EOF

  rancher_compose = <<EOF
    version: '2'
    services:
      happy:
        start_on_create: true
EOF

  finish_upgrade = true
  environment {
    STARTED = "${timestamp()}"
  }
}

resource "rancher_stack" "glad" {
  name = "glad"
  description = "A service that's always glad"
  start_on_create = true
  environment_id = "${rancher_environment.production.id}"

  docker_compose = <<EOF
    version: '2'
    services:
      glad:
        image: peloton/glad-service
        stdin_open: true
        tty: true
        ports:
            - 8001:80/tcp # distinct host port so both stacks fit on one host
        labels:
            io.rancher.container.pull_image: always
            io.rancher.scheduler.global: 'true'
            started: $STARTED
EOF

  rancher_compose = <<EOF
    version: '2'
    services:
      glad:
        start_on_create: true
EOF

  finish_upgrade = true
  environment {
    STARTED = "${timestamp()}"
  }
}

This will create two new Rancher stacks; one for the happy service and
one for the glad service. Running terraform plan once more will show
the two Rancher stacks:

+ rancher_stack.glad
    description:              "A service that's always glad"
    ...

+ rancher_stack.happy
    description:              "A service that's always happy"
    ...

And running terraform apply will create them. Once this is done,
you’ll have your two microservices deployed onto a host automatically
on Rancher. You can hit your host on port 8000 or on port 8001 to see
the responses from the services. We’ve created each piece of the
infrastructure along the way in a piecemeal fashion, but Terraform can
easily do everything from scratch, too. Try issuing a terraform
destroy, followed by terraform apply, and the entire system will be
recreated. This is what makes deploying with Terraform and Rancher so
powerful – Terraform will reconcile the desired infrastructure with the
existing infrastructure, whether those resources exist, don’t exist, or
require modification. Using Terraform and Rancher, you can now create
the infrastructure and the software that runs on it together. They can
be changed and versioned together, too. In future blog entries, we’ll
look at how to automate this process on git push with Drone. The code
for the Terraform configuration is hosted on GitHub.
The happy-service and glad-service are simple Nginx Docker containers.

Bryce Covert is an engineer at pelotech. By day, he helps teams
accelerate engineering by teaching them functional programming,
stateless microservices, and immutable infrastructure. By night, he
hacks away, creating point-and-click adventure games. You can find
pelotech on Twitter at @pelotechnology.


Joining as VP of Business Development

Monday, 19 June, 2017

Nick Stinemates, VP Business Development

I am incredibly excited to be joining such a talented, diverse group at
Rancher Labs as Vice President of Business Development. In this role,
I’ll be building upon my experience of developing foundational and
strategic relationships based on open source technology. This change is
motivated by my desire to go back to my roots, working with small,
promising companies with passionate teams.

I joined Docker, Inc. in 2013, just as it started to bring containers
out of the shadows and empower developers to write software with the
tools of their choice, while redefining their relationship with
infrastructure. Now that Docker is available in every cloud
environment, embedded in developer tools, and integrated in development
pipelines, the focus has shifted to making it more efficient and
sustainable for business. As users look for more integrated solutions,
the complexity of interrelated services and software rises dramatically,
giving an advantage to vendors that proactively reach out and
collaborate with best-of-breed tools. This is, I believe, one of
Rancher Labs’ strengths.

The Rancher container management platform implements a layer of
infrastructure services and drivers designed specifically to power
containerized applications. Since networking, storage, load balancer,
DNS, and security services are deployed as containers, Rancher is in a
unique position to integrate technology efficiently, holistically, and
at scale. Similarly, Rancher also makes ISV and open source
applications available via its application catalog. The public catalog
delivers more than 90 popular applications and development tools, many
of which are contributed by the Rancher community.

In addition to further developing the Rancher ecosystem via technology
and ISV partnerships, I will be working to expand the Rancher Labs
Partner Network. We will be building a comprehensive partner program
designed to expand the company’s global reach, increase enterprise
adoption, and provide partners and customers with tools for success.
From what I can tell after my first week, I am in the right place. I’m
looking forward to becoming part of the Rancher Labs family, and
collaborating with the broader ecosystem while developing new
relationships. As for immediate plans, I am coming up to speed as fast
as I can, and spending as much time talking to as many people in the
ecosystem as possible. If you’d like to explore opportunities to
collaborate, please consider becoming a partner.

Nick is the Vice President of Business Development at Rancher Labs,
where he is focused on defining and executing partner strategy. Prior
to joining Rancher Labs, Nick was the Vice President of Business
Development and Technical Alliances at Docker for four years. At
Docker, Nick was responsible for creating and driving the overall
partner engagement and strategy, as well as cultivating many
company-defining strategic alliances. Nick has over 15 years’
experience participating in and contributing to the open source
ecosystem, as well as 10 years in management functions in the
enterprise financial space.


Unlocking the Business Value of Docker

Tuesday, 25 April, 2017

Why Smart Container Management is Key

For anyone working in IT, the excitement around containers has been hard
to miss. According to RightScale, enterprise deployments of Docker more
than doubled in 2016, with 29% of organizations using the software
versus just 14% in 2015 [1]. Even more impressive, fully 67%
of organizations surveyed are either using Docker or plan to adopt it.
While many of these efforts are early stage, separate research shows
that over two thirds of organizations who try Docker report that it
meets or exceeds expectations [2], and the
average Docker deployment quintuples in size in just nine months.

Clearly, Docker is here to stay. While exciting, containers are hardly
new. They’ve existed in various forms for years. Some examples include
BSD jails, Solaris Zones, and more modern incarnations like Linux
Containers (LXC). What makes Docker (originally built on LXC) interesting is that
it provides the tooling necessary for users to easily package
applications along with their dependencies in a format readily portable
between environments. In other words, Docker has made containers
practical and easy to use.
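That packaging convenience is easy to see in a docker-compose file, where an application and a supporting service are declared together and can be brought up with a single command. The images here are stock public images used purely for illustration:

```yaml
# Minimal docker-compose sketch: an app and its dependency packaged and
# started together (images chosen purely for illustration).
version: '2'
services:
  web:
    image: nginx:alpine      # the application front end
    ports:
      - "8080:80"            # host port 8080 -> container port 80
    depends_on:
      - cache                # start the cache before the front end
  cache:
    image: redis:alpine      # a supporting service, pulled as-is
```

Running docker-compose up in the same directory starts both containers with their dependencies resolved, which is exactly the portability that made Docker practical.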

Re-thinking Application Architectures

It’s not a coincidence that Docker exploded in popularity just as
application architectures were themselves changing. Driven by the
global internet, cloud, and the explosion of mobile apps, application
services are increasingly designed for internet scale. Cloud-native
applications are composed of multiple connected components that are
resilient, horizontally scalable, and wired together via secured
virtual networks. As these distributed, modular architectures have
become the norm, Docker has emerged as a preferred way to package and
deploy application components.

As Docker has matured, the emphasis has shifted from the management of
the containers themselves to the orchestration and management of
complete, ready-to-run application services. For developers and QA
teams, the potential productivity gains are enormous. By spinning up
fully assembled dev, test, and QA environments and rapidly promoting
applications to production, major sources of errors, downtime, and risk
can be avoided. DevOps teams become more productive, and organizations
can get to market faster with higher quality software. With
opportunities to reduce cost and improve productivity, Docker is no
longer interesting just to technologists – it has caught the attention
of the board room as well.

New Opportunities and Challenges for the Enterprise

Done right, deploying a containerized application environment can bring
many benefits:

  • Improved developer and QA productivity
  • Reduced time-to-market
  • Enhanced competitiveness
  • Simplified IT operations
  • Improved application reliability
  • Reduced infrastructure costs

While Docker provides real opportunities for enterprise deployments, the
devil is in the details. Docker is complex, comprised of a whole
ecosystem of rapidly evolving open-source projects. The core Docker
projects are not sufficient for most deployments, and organizations
implementing Docker from open-source wrestle with a variety of
challenges including management of virtual private networks, managing
databases and object stores, securing applications and registries, and
making the environment easy enough to use that it is accessible to
non-specialists. They are also challenged by skills shortages and by
the difficulty of finding people knowledgeable about the various
aspects of Docker administration.
Compounding these challenges, orchestration technologies essential to
realizing the value of Docker are also evolving quickly. There are
multiple competing solutions, including Kubernetes, Docker Swarm and
Mesos. The same is true with private cloud management frameworks.
Because Docker environments tend to grow rapidly once deployed,
organizations are concerned about making a misstep, and finding
themselves locked into a particular technology. In the age of rapid
development and prototyping, what is a sandbox one day may be in
production the next. It is important that the platform used for
evaluation and prototyping has the capacity to scale into production.
Organizations need to retain flexibility to deploy on bare-metal, public
or private clouds, and use their choice of orchestration solutions and
value-added components. For many, the challenge is not whether to deploy
Docker, but how to do so cost-effectively, quickly, and in a way that
minimizes business and operational risk so the potential of the
technology can be fully realized.

Reaping the Rewards with Rancher

In a sense, the Rancher® container management platform is to Docker what
Docker is to containers: just as Docker makes it easy to package,
deploy and manage containers, Rancher software does the same for the
entire application environment and Docker ecosystem. Rancher software
simplifies the management of Docker environments, helping organizations
get to value faster, reduce risk, and avoid proprietary lock-in. In a
recently published whitepaper, Unlocking the Value of Docker in the
Enterprise, written with both a technology and a business audience in
mind, Rancher Labs explores the challenges of container management and
quantifies some of the specific areas in which Rancher software can
provide value to the business. To learn more about Rancher, and to
understand why it has become the choice of leading organizations
deploying Docker, download the whitepaper and learn what Rancher can do
for your business.

[1]
http://assets.rightscale.com/uploads/pdfs/rightscale-2016-state-of-the-cloud-report-devops-trends.pdf
[2]
https://www.twistlock.com/2016/09/23/state-containers-industry-reports-shed-insight/


SUSE Is the Undisputed Number 1 in Operating Systems for SAP Applications

Friday, 17 August, 2018

SUSE Linux operating systems run almost all – more than 90% – of the SAP applications running on SAP HANA and under SAP S/4HANA. SUSE’s latest announcement further confirms this, showing a strong position in the market for SAP solutions for midsize companies as well.

Nearly 6,000 customers worldwide already benefit from SAP Business One business applications running on the SAP HANA in-memory database and the SUSE Linux Enterprise Server operating system. Reaching this milestone would not have been possible without the combined efforts of SUSE and more than 500 SAP partners, who together delivered these solutions to customers efficiently and effectively. Within the SUSE partner program, SAP partners have access to technical Linux training and free certification exams. They also have access to the SUSE installation wizard – a powerful tool that automates the deployment of SAP Business One and the SAP HANA database platform on SUSE Linux Enterprise Server.

Since SAP Business One for SAP HANA arrived on the market, Linux expertise has become essential to deploying it. The integration of in-memory data processing with SAP applications has made SUSE Linux Enterprise Server the most optimized platform for running the latest version of SAP’s business solution for midsize companies. Customers confirm this – for example, ZELAN from Poland (a manufacturer of consumer goods and plastics), which uses the latest version of SAP Business One, with the SAP HANA in-memory database and the SUSE operating system ensuring high performance. “Thanks to this, we can increase our data processing capacity in applications of key importance to our business. We have raised the operational stability of our company. What’s more, since moving to SUSE systems, we at ZELAN spend less time managing software and hardware. We are also able to estimate savings in energy consumption. All of these elements bring ZELAN tangible financial and operational benefits,” says Paweł Nowak, operations manager at ZELAN.

If your company uses SAP Business One, the latest ready-made software images for installing SAP Business One on SUSE Linux Enterprise Server 12 SP3 can be downloaded from suse.com/slesb1hana.

Nowości SUSE w systemach Linux i do zarządzania wirtualizacją, środowiskami cloud-native i platformami edge

Wednesday, 19 June, 2024

Na SUSECON 2024 w Berlinie SUSE ogłosiła wprowadzenie nowych funkcjonalności w swojej ofercie infrastruktury IT dla przedsiębiorstw opartej na systemach Linux, technologiach cloud-native i rozwiązaniach brzegowych. Pomogą one klientom uwolnić nieskończony potencjał oprogramowania open source wykorzystywanego do wspierania działalności przedsiębiorstw.

Generatywna sztuczna inteligencja, technologie cloud native i edge na nowo definiują sposoby, w jaki przedsiębiorstwa tworzą i obsługują aplikacje w chmurze, środowiska brzegowe i te działające lokalnie. Nowe możliwości oprogramowania SUSE wspierają organizacje w procesach transformacji ich działalności, zapewniając szybszy czas uzyskania wartości i niższe koszty operacyjne.

„Możliwość dokonywania różnych wyborów w dzisiejszym złożonym środowisku IT jest niezwykle istotna. Jednocześnie niedawna konsolidacja rynku pozbawia wielu użytkowników swobody w doborze rozwiązań” – powiedział dr Thomas Di Giacomo, dyrektor ds. technologii i produktów w firmie SUSE. „Dzięki naszemu unikalnemu, otwartemu i zorientowanemu na ekosystem podejściu, SUSE podtrzymuje swoje wieloletnie zobowiązanie do zapewniania klientom elastyczności, której potrzebują w swojej infrastrukturze w centrum danych, aby osiągać jak najlepsze wyniki”.


Business Critical Linux (BCL) solutions

Linux now runs enterprises' most important applications. SUSE Linux Enterprise has long been the preferred system for organizations seeking flexibility, reliability, and freedom of choice in shaping their IT infrastructure. SUSE continues to innovate and to help enterprises increase their productivity by delivering business-ready Linux. While many vendors claim to offer confidential computing, the key question is whether they provide end-to-end capabilities that ensure maximum security and regulatory compliance. SUSE Linux strengthens its leadership in confidential computing with the latest releases of SUSE Linux Enterprise and SUSE Manager, support for Intel TDX (Trust Domain Extensions) and AMD SEV (Secure Encrypted Virtualization), including on hyperscaler compute instances, and remote attestation via SUSE Manager. Today, SUSE announced the following enhancements, innovations, and roadmap plans:

  • SUSE Linux Enterprise Server 15 Service Pack 6: With this update, SUSE future-proofs customers' IT environments by introducing the new Long Term Service Pack Support Core package, which provides the industry's longest support lifecycle for enterprise-grade Linux, extending all the way to 2037. Designed to simplify IT operations across the data center, cloud, and edge, SUSE Linux Enterprise 15 Service Pack 6 (SP6) reduces the risk and cost of adopting new technologies and ensures business continuity by minimizing planned and unplanned service interruptions. It ships with an updated 6.4 kernel and new libraries, including OpenSSL 3.1. Built on a certified secure software supply chain and compliant with the highest standards, SUSE Linux Enterprise delivers security that meets the most stringent regulations.
  • SUSE Linux Enterprise Server for SAP Applications 15 SP6: SUSE Linux Enterprise Server for SAP Applications 15 SP6 is the most widely used secure and reliable Linux platform for SAP customers and partners running mission-critical SAP applications, from the data center to the cloud. This release also provides access to the latest innovations in Trento, an open source application developed by SUSE that helps protect SAP infrastructure by diagnosing common configuration errors and checking systems against SUSE best practices.
  • SUSE Linux Enterprise Micro 6.0: SUSE Linux Enterprise Micro 6.0 is an immutable, lightweight, and secure open source host operating system optimized for containerized and virtualized workloads. It simplifies standalone container deployments, is ideally suited to embedded and integrated devices, and provides a stable platform for Kubernetes deployments. This release introduces support for full disk encryption, strengthening the security of customer data both inside and outside the data center. Designed to run anywhere, SUSE Linux Enterprise Micro 6.0 lets organizations overcome geographic and operational constraints while adapting to a wide range of deployment scenarios.
  • SUSE Manager 5.0: Supporting more than 16 different Linux distributions, SUSE Manager 5.0 is the industry-leading solution for managing heterogeneous Linux estates. It enables automated patch and compliance management for any Linux system from a single console, anywhere and at any scale. SUSE Manager itself is now containerized for greater resilience, scalability, and portability, and for a simpler installation process. This release also adds remote attestation for SUSE Linux Enterprise Server 15 SP6, allowing customers to prove that their remote IT estates are running in a Confidential Computing environment, in support of regulatory compliance.

SUSE remains fully committed to the evolution of the SUSE Linux Enterprise Server platform and is already working on the next wave of Enterprise Linux innovations. In 2025, it will launch the next major release of its flagship business Linux platform: SUSE Linux Enterprise Server 16 and SUSE Linux Enterprise Server for SAP Applications 16.

Enterprise Container Management (ECM) solutions

As Kubernetes and containerization gain ground in business applications, users need tools for secure, scalable cloud-native operations. SUSE's solutions help companies modernize their IT environments, enabling them to innovate faster and future-proof their infrastructure. To meet this growing demand, SUSE is enhancing its portfolio for container management, full-environment observability, container security, and virtualization.

  • Rancher Prime 3.1 – Version 3.1 extends AI support with virtual cluster provisioning, enables cluster sharing without compromising isolation, and optimizes the cost of resources such as GPUs. In addition, the Rancher Prime application collection for Kubernetes now includes Kubeflow, offering a secure AI stack that keeps data under the customer's control while leveraging community-built solutions. Finally, version 3.1 extends lifecycle support to 24 months and introduces a 3-year Extreme option for RKE2 and k3s.
  • NeuVector Prime 5.4 – Building on the Rancher Prime UI extension released earlier this year, NeuVector Prime 5.4 now embeds scan results directly in Rancher Prime resources such as pods and nodes, along with a scan button. Version 5.4 also introduces a new compliance reporting framework that streamlines the delivery of new and updated compliance reports (e.g., DISA STIG). Additionally, version 5.4 adds distributed denial-of-service protection, triggering alerts and blocking when maximum connection-rate or bandwidth-usage thresholds are exceeded.
  • Harvester 1.3.1: The latest version of SUSE's cloud-native virtualization platform gives customers a path from legacy VMware-based and other virtual machine solutions to modern cloud-native alternatives. Designed to significantly improve the performance of AI and other workloads and to extend Harvester to a broader range of hardware platforms, this release delivers seamless integration and optimal resource utilization across diverse computing environments. Harvester 1.3.1 introduces important improvements in AI support and hardware compatibility, including support for the latest vGPU and ARM architectures.

Edge solutions

As instant access to data, networks, and communication services becomes ever more important, communications service providers (CSPs) and enterprises are prioritizing the rapid collection and processing of data from smart devices, sensors, and controllers. This gives these organizations insight into their business processes and the tools to deliver new and innovative services to users. SUSE Edge enables cloud-native transformation by providing a specialized computing platform for managing the full lifecycle of edge devices at scale. In addition, SUSE Adaptive Telco Infrastructure Platform (ATIP) builds on SUSE Edge to deliver a telco-optimized edge computing platform that helps CSPs accelerate the rollout of containerized network functions (CNFs) in telecom network environments. Today, SUSE announced further enhancements to its SUSE Edge and SUSE ATIP offerings:

  • SUSE Edge 3.0: SUSE Edge 3.0 is optimized to run with constrained resources in remote locations with intermittent Internet connectivity, making it ideal for embedded devices. Built on SUSE Linux Enterprise Micro, SUSE Edge uses CNCF-certified Kubernetes distributions and supports managing and running containers, virtual machines, and microservices. SUSE Edge 3.0 combines the Kubernetes management capabilities of Rancher Prime, the cloud-native container security of NeuVector Prime, and a fully validated, edge-optimized, integrated cloud-native stack that delivers full lifecycle management of edge devices at scale.
  • SUSE Adaptive Telco Infrastructure Platform 3.0: The latest version introduces a commercial implementation of the Linux Foundation Europe Sylva project, enabling CSPs to reduce energy consumption and adhere to open source principles. This release also introduces Edge Image Builder, which creates customized deployment artifacts for building edge clusters in even the most remote locations. ATIP 3.0 also provides zero-trust bare-metal cluster provisioning using Cluster API (CAPI) and Metal3 (metal kubed).

Organizations around the world recognize the boundless potential of artificial intelligence, so SUSE also presented its AI strategy and roadmap, centered on its commitment to open, secure, and compliant enterprise AI. This includes the SUSE AI Early Access Program, which brings together customers and partners to help steer the future of secure, open source AI.

More information about SUSE's latest innovations in Linux, cloud-native, and edge solutions is available at http://www.suse.com.