Why Expanding Open Source Skills is Good (For You & Your Business)

Friday, 29 April, 2022

The world and the workforce are forever changed. Many people say that Covid is the reason, but the truth is that it was merely a catalyst that accelerated an inevitable shift toward a more virtual workforce. As a result, organizations around the world are being challenged to shift their attention toward retention programs and initiatives that deliver resources to enrich the employee experience through professional development and career growth.

For software engineers & developers, solution architects, and other technologists, gaining timely and marketable skills in open source has never been more important. Organizations looking to close the skills gap by bringing in new talent to support a modern infrastructure also need to view learning and development programs (both free & paid) for existing staff as an investment, not a cost.

By granting access to learning resources that enhance professional growth, organizations will find the return on that investment revealed in a more skilled, productive, and satisfied workforce.

 

Open Source is the Language of Innovation

At SUSE, we provide all our partners with zero-cost training and certifications that help industry professionals gain in-demand open source skills.

When the SUSE One Partner Program was unveiled a few years ago, sales and technical training was built in as a core component of program success, and program certifications are required to advance from Silver to Gold, and from Gold to Platinum. The learning paths we provide help partners deepen their understanding of how to work with and sell SUSE technology, and they deliver tactical skills and advanced open source knowledge that help individuals become valued community practitioners.

As a SUSE One partner, you have access to it all:

  • Sales & Technical Sales Training
  • Technical Expert Training
  • Support Training
  • SUSE Academy (in-depth technical training)

Linux and Kubernetes are the backbone of hybrid cloud infrastructure and core to the success of a mixed IT environment. With SUSE One professional & technical certifications, you can gain the sought-after skills and knowledge that will help your organization win with open source products and solutions, deliver cloud-native services, and ease the challenges of adopting, managing and scaling containers, from SMB to the enterprise.

 

A Program Built for Your Success

If you’re currently a SUSE One partner, be sure to review the updated training and certification courses available through the partner portal. For those of you looking to partner with a leading open source vendor, look no further. The SUSE One Partner Program has a modern structure that was designed to provide flexibility and choice to help partners get started, define their path, and accelerate to success.

Role-based learning puts your organization on course to gain access to additional program benefits, discounts, incentives and support options, while giving individual contributors the skills needed to advance their professional growth and technical acumen.

Industry accreditations and certifications are nothing new, but the value they deliver has never been more important. Join SUSE One today, or log in to the portal to get started. 

LOTTE Department Store shapes outstanding services with faster business insights

Monday, 4 April, 2022

“Compared to other Linux distributions, we discovered that SUSE Linux Enterprise Server for SAP Applications is by far the easiest to deploy and configure — and once deployed, the SUSE solution also wins out in terms of stability.” Seo Jun-hyeok, Technology Manager, Consulting Team, LDCC.

To help keep operations running smoothly and deliver a frictionless customer experience, South Korea’s largest department store brand, LOTTE Department Store, is increasingly dependent on its IT systems. Its parent organization, LOTTE Group, had relied on SAP solutions to drive core business processes for many years, and the launch of SAP S/4HANA presented an opportunity to build on these capabilities.

To unlock the benefits of real-time insights while ensuring rock-solid reliability, the LOTTE Group’s IT organization (LDCC) selected SUSE Linux Enterprise Server (SLES) for SAP Applications to host its new SAP business systems. The objective was to achieve 100% availability for mission-critical SAP S/4HANA services and to enable the company to harness up to 100x faster reporting.

As part of a wider digital transformation initiative, LOTTE Group initiated a groupwide transition to SAP S/4HANA. The next-generation ERP will become the single, central platform for financial accounting for all group subsidiaries and provide a basis for tighter process integration.

After a thorough evaluation of leading server platforms, SUSE was selected as the foundation for the new SAP S/4HANA solution. LDCC found SLES for SAP Applications to be by far the easiest solution to deploy and configure, as well as the most stable. The platform was chosen to support companies across the entire group, including LOTTE Department Store.

Supporting a hybrid-cloud architecture

Working with SUSE, LDCC sized, tested and deployed its new SAP S/4HANA environments on SLES for SAP Applications. The SUSE solution is configured to support the company’s hybrid storage architecture, incorporating storage resources at its on-premises data centers and in the public cloud.

SLES for SAP Applications has delivered 100% uptime for mission-critical SAP S/4HANA services with flawless stability. As LOTTE Department Store moves ahead with its digital transformation, companies across the extended enterprise are now reaping the benefits of rapid access to business insights and are able to accelerate business intelligence reporting by up to 100x.

The team has enjoyed significant performance improvements for end-user analytics reporting tasks, many of which now complete tens or even hundreds of times faster than before. The switch from SAP ERP to SAP S/4HANA has allowed leaders to make faster, better-informed decisions. These enhancements aren’t limited to business intelligence reporting: one subsidiary cut its month-end closing process from several days to just a few hours.

Equipped with real-time insights into business performance, LOTTE Department Store is in a stronger position than ever to deliver high-quality services that delight customers and foster their loyalty. There is also confidence from a LOTTE Group perspective that it has the secure, stable, and future-ready platform to support its ongoing digital transformation.

Click here to find out more about how LOTTE Department Store shapes outstanding services with faster business insights.

SUSE One Continues to Improve, Gain Recognition

Monday, 28 March, 2022

It has been almost two years since the team at SUSE reimagined the framework of our partner program. In response to the changing demands of the market and the channel, we set out to build a structure around six areas of specialization. Each was created with unique partner types in mind, and provides the ability for organizations to adapt, accelerate, and grow their business practice as technology trends and customer requirements continue their evolution.

 

In 2021, that radical change in format to the SUSE One Partner Program earned us a 5-Star Rating from CRN, and today, we’re happy to announce that SUSE has been awarded that honor for the second year in a row.

What is the CRN 5-Star Rating?

The annual Partner Program Guide from CRN includes the most notable partner programs from industry-leading technology vendors that provide innovative products and flexible services through the IT channel. The 5-star rating is given only to vendors whose programs excel at driving partner-focused market opportunity and growth.

Companies are scored based on their investments in program offerings, partner profitability, partner training, education & support, marketing programs & resources, sales support, and communication.

SUSE is honored to receive this recognition again.

Our Ongoing Investments in Partner Success

As the SUSE One Partner Program continues to improve and expand around the unique needs of our ecosystem, so do the tools and resources that support it. Our program team is continually updating incentives, enhancing processes around deal registration, introducing new program training & certification courses, and adding beneficial selling resources.

In addition, the SUSE One partner portal recently underwent a comprehensive design overhaul. As the hub for partner activity, the updates will help SUSE scale program growth and better serve our growing community of partners and open source practitioners.

If you’re an existing partner and haven’t seen the updates, be sure to take a look at the portal. For those considering joining SUSE One, there’s never been a better time to find success with open source and SUSE offerings.

RISE with SAP: still one of the most discussed topics in the SAP community

Tuesday, 15 March, 2022

RISE with SAP is a full-service, transformation-as-a-service offering from SAP that supports companies in their digital transformation. The central solution is the ERP solution SAP S/4HANA in the cloud, with two deployment models: public cloud and private cloud. The offering includes the redesign of business processes, the provision of tools and services for a technical migration to the cloud, and the use of platforms and solutions for digital transformation.

There is currently a lot of talk about “RISE with SAP”. But while many customers expressed an initial interest in learning more about the solution, it is unclear how many plan on adopting it in the long term.

Download the SAPinsider Report

SAPinsider surveyed 238 members of the SAPinsider community in October and November 2021 to understand:

  • What does RISE with SAP mean for your organization?
  • How does it impact any existing plans for SAP S/4HANA?
  • What can it offer in terms of business transformation?

The survey opens by asking how much customers knew about the different components of the RISE with SAP offering, followed by their motivations for considering it. It makes sense that the biggest area of interest for respondents is cost. However, respondents also identified several concerns about the RISE with SAP offering, the biggest being support for mixed-vendor landscapes and project transparency.

Should I consider RISE with SAP?

Are you planning to use or considering RISE with SAP? Download the survey for useful input on the following questions:

  • Who is adopting RISE with SAP?
  • What features are leveraged?
  • How can RISE drive business transformation projects?
  • What are the steps to be successful with RISE with SAP?

Announced in January 2021, RISE with SAP remains one of the most discussed topics in the SAP space. For many, it is still just interest. Has this changed? What do you think?

Learn more about SAP in the public cloud

Regardless of RISE with SAP, the transformation of SAP S/4HANA remains one of the most important topics in 2022. Organizations are migrating SAP S/4HANA to the public cloud to enable faster business growth, higher productivity, and new avenues for innovation.

SUSE enables you to rapidly deploy and scale mission-critical SAP applications on your choice of hyperscalers with high availability and reduced complexity. Learn how you can accelerate your cloud vision.

 

Premium Support Services for Everyone!

Monday, 1 November, 2021

You discover a problem, and you need to resolve it before it takes your entire infrastructure down.  Which scenario sounds better to you?

Scenario 1

You go to SCC to log a problem and wait for a technical support team to respond. You then have to describe your environment, your solution stack, and your issue, and the technician has to ask you a number of questions to ensure they are working on the right problem. Only then can the technician start working on your issue to get you to a resolution.

Scenario 2

You call Jim, your premium support engineer. Jim knows your environment, your staff’s skills and your infrastructure. Because of that, Jim can start working on your issue right away and get you to a resolution in a fraction of the time.

If you are like most people, you chose Scenario 2.

As good as SUSE Technical Support is, they simply cannot have a close relationship with all of SUSE’s customers. That’s where SUSE Premium Support Services comes in.

What are Premium Support Services?

Premium Support Services enhances your existing SUSE Priority Technical Support by offering a number of white glove benefits, including direct access to a named technical expert – a premium engineer. This engineer knows you, your team and your infrastructure.

Premium Support Services is a 12-month, fixed-cost tiered offering. It provides a number of benefits that are delivered directly to you by a named premium engineer and service delivery manager. Your premium team will:

  • Deliver faster time to value… by ensuring that your SUSE solutions are optimized for your specific business objectives.
  • Ensure business continuity… with proactive maintenance and monitoring of your specific systems.
  • Help you meet changing business demands… with flexible and cost-effective offerings, providing the level of service you need and access to named service delivery managers who will keep you abreast of technology trends.

Let’s face it: downtime is expensive, not only in cost but in customer satisfaction and retention. Your premium support team helps you avoid downtime by keeping your systems fine-tuned and keeping you on top of technology trends. With your team in place, you can quickly solve small issues before they escalate into big problems that cause system downtime.

Having Premium Support Services is really the best insurance policy your business can invest in.

But Can I Afford It?

A service offering like Premium Support has to be expensive, right? Wrong! Because Premium Support Services comes in different tiers, there really is an option to fit every size of business.

Today, in addition to our Silver, Gold and Platinum tiers, we are announcing the Bronze Tier of Premium Support Services.

Premium Support Services: The Tiers and Benefits

As our entry level option, the Bronze Tier gives you:

  • Direct Access to a Named Technical Professional
  • Direct Access to a Service Delivery Manager
  • Up to 60 hours of dedicated time and/or 10 service requests handled by your premium engineer

And the best part: the Bronze Tier is so attractively priced that every business can take advantage of this “white glove” support service. Put simply, your business cannot afford to be without it.

Let’s Get Started

Get started with Premium Support Services today.

Want to learn more? Read the flyer.

Get set up with Premium Services today, and the next time you have an issue, you’ll be direct-dialing your premium engineer!

Accelerating Machine Learning with MLOps and FuseML: Part One

Sunday, 25 July, 2021

Building successful machine learning (ML) production systems requires a specialized re-interpretation of the traditional DevOps culture and methodologies. MLOps, short for machine learning operations, is a relatively new engineering discipline and a set of practices meant to improve the collaboration and communication between the various roles and teams that together manage the end-to-end lifecycle of machine learning projects.

Helping enterprises adapt and succeed with open source is one of SUSE’s key strengths. At SUSE, we have the experience to understand the difficulties posed by adopting disruptive technologies and accelerating digital transformation. Machine learning and MLOps are no different.

The SUSE AI/ML team has recently launched FuseML, an open source orchestration framework for MLOps. FuseML brings a novel holistic interpretation of MLOps advocated practices to help organizations reshape the lifecycle of their Machine Learning projects. It facilitates frictionless interaction between all roles involved in machine learning development while avoiding massive operational changes and vendor lock-in.

This is the first in a series of articles that provides a gradual introduction to machine learning, MLOps and the FuseML project. We start here by rediscovering some basic facts about machine learning and why it is a fundamentally atypical technology. In the next articles, we will look at some of the key MLOps findings and recommendations and how we interpret and incorporate them into the FuseML project principles.

MLOps Overview

Old habits that need changing can be difficult to unlearn, even more difficult than re-learning everything. It’s true for people, and it’s even truer for teams and organizations where the combined inertia that makes important changes difficult to implement is several orders of magnitude greater.

With the AI hype on the rise, organizations have been investing more and more in machine learning to make better and faster business decisions or automate key aspects of their operations and production processes. But if history taught us anything about adopting disruptive software technologies like virtualization, containerization and cloud computing, it’s that getting results doesn’t happen overnight. It often requires significant operational and cultural changes. With machine learning, this challenge is very pronounced, with more than 80 percent of AI projects failing to deliver business outcomes, as reported by Gartner in 2019 and repeatedly confirmed by business analysts and industry leaders throughout 2020 and 2021.

Naturally, following this realization about the challenges of using machine learning in production, a lot of effort went into investigating the “whys” and “whats” about this state of affairs. Today, the main causes of this phenomenon are better understood. A brand new engineering discipline – MLOps – was created to tackle the specific problems that machine learning systems encounter in production.

The recommendations and best practices assembled under the MLOps label are rooted in the recognition that machine learning systems have specialized requirements that demand changes in the development and operational project lifecycle and organizational culture. MLOps doesn’t propose to reinvent how we do DevOps with software projects. It’s still DevOps but pragmatically applied to machine learning.

MLOps ideas can be traced back to the defining characteristics of machine learning. The remainder of this article is focused on revisiting what differentiates machine learning from conventional programming. We’ll use the fundamental insights in this exercise as stepping stones when we dive deeper into MLOps in the next chapter of this series.

Machine Learning Characteristics

Solving a problem with traditional programming requires a human agent to formulate a solution, usually in the form of one or more algorithms, and then translate it into a set of explicit instructions that the computer can execute efficiently and reliably. Generally speaking, conventional programs, when correctly developed, are expected to give accurate results and to have highly predictable and easily reproducible behaviors. When a program produces an erroneous result, we treat that as a defect that needs to be reproduced and fixed. As a best practice, we also process conventional software through as much testing as possible before deploying it in production, where the business cost incurred for a defect could be substantial. We rely on the results of proactive testing to give us some guarantees about how the program will behave in the future, another characteristic derived from the predictability aspect of conventional software. As a result, once released, a software product is expected to take significantly less effort to maintain compared to development.

Some of these statements are highly generic; one might say they could describe products in general, software or otherwise. What they all have in common is that none of them remains entirely valid when applied to machine learning.

Machine learning algorithms are distinguished by their ability to learn from experience (i.e., from patterns in input data) to behave in a desired way, rather than being programmed to do so through explicit instructions. Human interaction is only required during the so-called training phase when the ML algorithm is carefully calibrated and data is fed into it, resulting in a trained program, also called an ML model. With proper automation in place, it may even seem that human interaction could be eliminated. Still, as we’ll see later in this post, it’s just that the human responsibilities shift from programming to other activities, such as data collection and processing and ML algorithm selection, tuning and monitoring.

Machine learning can be used to solve a specific class of problems:

  • the problem is extremely difficult to solve mathematically or programmatically, or it has only solutions that are too computationally expensive to be practical
  • a fair amount of data exists (or can be generated) containing a pattern that an ML algorithm can learn

Let’s look at two examples, similar but situated at opposite ends of the spectrum as far as utility is concerned.

Sum of Two Numbers

A very simple example, albeit with no practical application whatsoever, is training an ML model to calculate the sum of two real numbers. Doing this with conventional programming is trivial and always yields very accurate results.

Training and using an ML model for the same task could be summarized by the following phases:

Data Preparation

First, we need to prepare the input data that will be used to train the ML model. Generally speaking, training data is structured as a set of entries. Each entry associates a concrete set of values used as input for the target problem with the correct answer (sometimes known as a target or label in ML terms). In our example, each entry maps a pair of real input values (X, Y) to the desired result (X+Y) that we expect the model to learn to compute. For this purpose, we can generate the training data entirely using conventional programming. Still, it’s often the case with machine learning that training data is not readily available and is expensive to acquire and prepare. The code used to generate the input dataset could look like this:

import numpy as np

# Each training entry maps an input pair (X, Y) to the target X+Y.
train_data = np.array([[1.0, 1.0]])
train_targets = np.array([2.0])
for i in range(3, 10000, 2):
  train_data = np.append(train_data, [[i, i]], axis=0)
  train_targets = np.append(train_targets, [i + i])

Deciding what kind of data is needed, how much of it and how it needs to be structured and labeled to yield acceptable results during ML training is the realm of data science. The data collection and preparation phase is critical to ensuring the success of ML projects. It takes experimentation and experience to find out which approach yields the best result, and data scientists often need to iterate several times through this phase and improve the quality of their training data to raise the accuracy of ML models.

Model Training

Next, we need to define the ML algorithm and train it (a step also known as fitting) on the input data. For our goal, we can use an Artificial Neural Network (ANN) suitable for this type of problem (regression). The code for it could look like this:

import tensorflow as tf
from tensorflow import keras
import numpy as np

# A small feed-forward network for regression: two inputs,
# two hidden layers of 20 units, and a single output value.
model = keras.Sequential([
  keras.layers.Flatten(input_shape=(2,)),
  keras.layers.Dense(20, activation=tf.nn.relu),
  keras.layers.Dense(20, activation=tf.nn.relu),
  keras.layers.Dense(1)
])

# Mean squared error is a typical loss for regression problems.
model.compile(optimizer='adam',
  loss='mse',
  metrics=['mae'])

# Train (fit) the model on the prepared dataset.
model.fit(train_data, train_targets, epochs=10, batch_size=1)

Similar to data preparation, deciding which ML algorithm to use and how to configure its parameters for best results (e.g., the neural network architecture, optimizer, loss, epochs) requires specific ML knowledge and iterative experimentation. By now, however, ML is mature enough that finding an algorithm to fit a given problem is rarely difficult: countless open source libraries, examples, ready-to-use ML models and documented use-case patterns and recipes are available for all major classes of problems that can be solved with ML. Moreover, many of the decisions and activities required to develop a high-performing ML model (e.g., hyper-parameter tuning, neural architecture search) can already be fully or partially automated through a special category of tools called AutoML.

Model Prediction

We now have a trained ML model that we can use to calculate the sum of any two numbers (i.e. make predictions):

def sum(x, y):
  # Ask the trained model for its estimate of x + y.
  s = model.predict([[x, y]])[0][0]
  print("%f + %f = %f" % (x, y, s))

The first thing to note is that the summation results produced by the trained model are not at all accurate. It’s fair to say that the ML model is not behaving like it’s calculating the result, but more like it’s giving a ballpark estimation of what the result might be, as shown in this set of examples:

# sum(2000, 3000)
2000.000000 + 3000.000000 = 4857.666992
# sum(4, 5)
4.000000 + 5.000000 = 9.347977

Another notable characteristic is, as we move further away from the pattern of values on which the model was trained, the model’s predictions get worse. In other words, the model is better at estimating summation results for input values that are more similar to the examples on which it was trained:

# sum(10, 10000)
10.000000 + 10000.000000 = 8958.944336
# sum(1000000, 4)
1000000.000000 + 4.000000 = 1318969.375000
# sum(4, 1000000)
4.000000 + 1000000.000000 = 895098.750000
# sum(0.1, 0.1)
0.100000 + 0.100000 = 0.724608
# sum(0.01, 0.01)
0.010000 + 0.010000 = 0.549576

This phenomenon is well known to ML engineers. If not properly understood and addressed, it can lead to ML specific problems that take various forms and names:

  • bias: using incomplete, faulty or prejudicial data to train ML models that end up producing biased results
  • training-serving skew: training an ML model on a dataset that is not representative of the real-world conditions in which the ML model will be used
  • data drift, concept drift or model decay: the degradation, in time, of the model quality, as the real-world data used for predictions changes to the point where the initial assumptions on which the ML model was trained are no longer valid

In our case, it’s easy to see that the model is performing poorly due to a skew situation: we inadvertently trained the model on pairs of equal numbers, which is not representative of the real-world conditions in which we want to use it. Our model also completely missed the point that addition is commutative, but that’s not surprising, given that we didn’t use training data representative of this property either.
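One way to address this particular skew is to regenerate the training set so that it better reflects real-world usage. Here is a minimal sketch in NumPy, assuming the same (X, Y) to X+Y layout as before; the value range and sample count are illustrative choices, not taken from the original experiment:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Draw independent operands over a wide range, so the model sees
# unequal pairs, varied magnitudes and both argument orders.
n_samples = 10000
train_data = rng.uniform(low=-1000.0, high=1000.0, size=(n_samples, 2))
train_targets = train_data.sum(axis=1)  # the label is simply X + Y
```

Because X and Y are sampled independently, the dataset also covers both argument orders, giving the model a chance to pick up on commutativity.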

When developing ML models to solve complex, real-world problems, detecting and fixing this type of problem is rarely that simple. Machine learning is as much an art as it is a science and engineering endeavor.

In training ML models, there is usually also a validation step involved, where the labeled input data is split, and part of it is used to test the trained model and calculate its accuracy. This step is intentionally omitted here for the sake of simplicity. The full exercise of implementing this example, with complete code and detailed explanations, is covered in this article.
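For readers curious what such a split might look like, here is a minimal sketch in plain NumPy; the 80/20 ratio and the variable names are illustrative assumptions:

```python
import numpy as np

# A small labeled dataset: input pairs and their known sums.
data = np.arange(20, dtype=float).reshape(10, 2)
targets = data.sum(axis=1)

# Shuffle the indices, then hold out the last 20% for validation.
rng = np.random.default_rng(seed=0)
idx = rng.permutation(len(data))
split = int(0.8 * len(data))
train_idx, val_idx = idx[:split], idx[split:]

train_data, train_targets = data[train_idx], targets[train_idx]
val_data, val_targets = data[val_idx], targets[val_idx]
```

The model is then fitted on the training portion only, and its accuracy is measured on the held-out validation portion.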

The Three-Body Problem

At the other end of the spectrum is a physics (classical mechanics) problem that inspired one of the greatest mathematicians of all time, Isaac Newton, to invent an entirely new branch of math that is nowadays a source of constant frustration among high school students: calculus.

Finding the solution to the set of equations that describe the motion of two celestial bodies (e.g., the Earth and the Moon) given their initial positions and velocities is already a complicated problem. Extending the problem to include a third body (e.g., the Sun) complicates things to the point where a solution cannot be found, and the entire system starts behaving chaotically. With no mathematical solution in sight, Newton himself felt that supernatural powers had to be at play to account for the apparent stability of our solar system.

This problem and its generalized form, the many-body problem, are so famous because solving them is a fundamental part of space travel, space exploration, cosmology and astrophysics. Partial solutions can be calculated using analytical and numerical methods, but doing so requires immense computational power.

All life forms on this planet constantly deal with gravity. We are well equipped to learn from experience, and we can make fairly accurate predictions about its effects on our bodies and the objects we interact with. It is therefore not entirely surprising that machine learning can estimate the motion of objects under the effect of gravity.

Using Machine Learning, researchers at the University of Edinburgh have been able to train an ML model capable of solving the three-body problem 100 million times faster than traditional means. The full story covering this achievement is available here, and the original scientific paper can be read here.

Solving the three-body problem with ML is similar to our earlier trivial example of adding two numbers together. The training and validation datasets are also generated through simulation, and an ANN is also involved here, albeit one with a more complex structure. The main differences are the complexity of the problem and ML’s immediate practical application to this use case. However, the observations previously stated about general ML characteristics apply equally to both cases, regardless of complexity and utility.

Conclusion

We haven’t even begun to look at MLOps in detail. Still, we can already identify and summarize key takeaways representative of ML in general just by comparing classical programming to Machine Learning:

  1. Not all problems are good candidates for machine learning
  2. The process of developing ML models is iterative, exploratory and experimental
  3. Developing a machine learning system requires dealing with new categories of artifacts with specialized behaviors that don’t fit the patterns of conventional software
  4. It’s usually not possible to produce fully accurate results with ML models
  5. Developing and working with machine learning based systems requires a specialized set of skills, in addition to those needed for traditional software engineering
  6. Running ML systems in the real world is far less predictable than what we’re used to with regular software
  7. Finally, developing ML systems would be next to impossible without specialized tools

Machine Learning characteristics summarized here are reflected in the MLOps discipline and distilled in the principles on which we based the FuseML orchestration framework project. The next article will give a detailed account of MLOps recommendations and how an MLOps orchestration framework like FuseML can make developing and operating ML systems an automated and frictionless experience.


7 Digital Transformation Questions IT Should Ask Their Business Managers

Wednesday, 3 March, 2021

During the journey of digital transformation, organizations have to master several things at the same time: adopting new innovations, increasing efficiency, and maintaining continuity. IT not only plays a crucial role in these improvements but in many cases also leads transformation projects that improve the business.

Collaboration between IT and business can be a challenge when your teams come from different backgrounds and have different priorities. But alignment is critical nonetheless because misunderstanding and diverging priorities can lead to poor outcomes: missteps, slow delivery of projects or new applications, and unnecessary failures along the way.

How do high-performing organizations overcome these challenges? One effective way is to reduce gaps between IT and the business by building multidisciplinary teams. With closer contact and alignment of purpose, these integrated teams can work quickly and with agility. But even they can make mistakes when translating business needs into IT requirements.

Based on experience working with leading global companies, we have compiled seven of the most important questions IT should ask its business counterparts. IT can be more effective when it integrates these questions into the discovery and planning phases, and when it works on cross-functional teams that present opportunities to pose questions at a consistent cadence.

Seven Key Questions for Transformation Success

Here are seven key questions you can ask to fully understand business requirements and to build trust that leads to greater success in project execution.

What is your ultimate objective?

Initiatives aiming for an end goal — improving the customer experience, creating new products or services, or building resilience to disruption — need IT to translate the vision into strategy.

IT should build deep knowledge about the lines of business it serves, and add context from its users, to create its technology strategy. Consider building cross-functional leadership teams, representing both IT and business interests, to communicate how initiatives contribute to business transformation. This establishes a common base of understanding and keeps lines of communication open.

What business value does it bring?

Sometimes the business asks for changes that don’t bring value. While IT strives to serve the business, it is possible to go too far in responding to business user requests.

To help clarify the value of requests, IT should build context: gain knowledge of your business colleagues’ products and services, understand the competitive landscape, and stay up-to-date on the regulatory environment.

With this knowledge, IT can supply technical information that empowers business leaders to build a more robust value proposition for the CFO or leadership committee to approve.

Who are the stakeholders?

The multiple projects in your transformation pipeline span departments, each of which has its own expertise and responsibilities. For each initiative, the objectives, process owners, and progress status need to be clear to all participants so you can spot potential conflicts, remove bottlenecks, and achieve the best outcome.

Make sure IT and business groups agree about the goals, responsibilities, and priorities for each project — and that each participant understands the timeline to complete their contributions.

What should the customer journey look like?

Innovation brings opportunities to engage customers with personalized experiences through new products and channels. Define the experiences your organization wants to deliver, then set goals for your cross-functional teams to meet these objectives.

To fully answer questions about the customer journey, bring together perspectives from your customer experience leaders, business units, and application development and delivery teams. The new process maps can help you chart future improvements.

What new business processes are required?

In every project, there may be any number of unstated assumptions about new capabilities IT should deliver.  These assumptions include integration, scalability, and a range of user needs. If IT is too keen to adapt and change, it might miss hidden roadblocks that stand in the way of fully meeting business needs.

For big projects involving process changes, don’t accept a flurry of change requests right away. Instead, uncover the reasoning behind business decisions and any assumptions your partners are making.  Then, create a plan that includes all the technical requirements for the new business processes.

How must existing business processes be changed to support this?

Your business customers probably want the ability to move more quickly, with greater agility and flexibility. Ask questions to understand their roadblocks and areas they want to improve.  Consider these as a starting point to integrate automation.

Intelligent automation can help you meet the business goals for speed, performance, and resiliency. Business goals include accelerating the development of customer-facing apps and managing infrastructure with greater security and reliability.

What is your ideal and realistic timeframe?

Seek to understand not only the end goal but milestones your business colleagues want to hit along the way. Creating a timeline of the full project, with incremental objectives, can help IT divide projects into manageable pieces.

This approach helps you deliver the business processes and capabilities the business needs.  At the same time, you will be demonstrating IT value earlier and more often as you execute the project.

The path toward closer alignment  

Generating answers to these seven questions provides closer collaboration and alignment with your business partners. This is a starting point to help simplify your planning process, modernize your IT infrastructure in line with business needs, and accelerate the deployment of innovative solutions.

SUSE works with leading companies around the world. Learn from their experience with our eBook,  “Successes in IT infrastructure transition.”

To take the next step in your journey of IT transformation, contact SUSE today or learn more on this web page.

The Business Case for Container Adoption

Tuesday, 2 April, 2019

Developers often believe that demonstrating the need for an IT-based solution should be very easy: point to the business problem that needs a solution, briefly explain what technology should be selected, and the organization will provide the funds, staff, and computing resources. Unfortunately, this is seldom the process that is actually followed.

Developing a Business Case for New Technology Isn’t Always Easy

Most organizations require that both a business and a technical case be made before a project can be approved. Depending on the size and culture of the organization, building both cases can be a long, and sometimes arduous, process.

Part of the challenge developers face can be summed up simply: business decision-makers and technical decision-makers have different priorities, use different metrics, and, in short, think differently.

Business Managers Think in Different Terms Than Developers

Business decision-makers are almost always thinking in terms of the investment required, the costs expected, and the revenues that can be attributed to the successful completion of the project, not the technical merit, the tools selected, or the development methodology that will be used to complete the project.

They may use technology every day, but many think of it as a means to an end, not something they enjoy using.

As David Ingram pointed out in his recent article on business decision making, managers often use a 7-step process:

  1. Identify the problem
  2. Seek information to clarify what’s actually happening
  3. Brainstorm potential solutions
  4. Weigh the alternatives
  5. Choose an alternative
  6. Implement the chosen plan
  7. Evaluate the outcome

You’ll note that the best technology, the best approach to development, the best platform, how to achieve the best performance, how to achieve the highest levels of availability, and other technical factors that technologists consider may be seen as secondary issues. From the perspective of a business decision-maker, the extensive work that constitutes this type of evaluation might all be wrapped up into the “weigh the alternatives” step.

Factors of the Business Decision

Let’s break this down a bit. Business decision-makers will consider the overall investment required and weigh it against the potential benefits that might be received. This includes a number of factors that may not appear to be directly associated with a specific project.

They will also be considering whether this is the right project to be addressing at this time or whether other issues are more pressing.

While working with an executive at a major IT supplier, I was once told “solving the wrong problem, no matter how efficiently and well-done, is still solving the wrong problem.”

Here are a few of the factors they are likely to consider:

  • Staff: the number of staff, the levels of expertise, the amount of time they’ll need to be assigned to the project, the business overhead associated with having those people on staff, whether they should be full-time, part-time, or contractors
  • Costs: the costs of all resources required, including:
    • Data center operational costs: floor space, power, air conditioning, networking, maintenance, real estate
    • Systems: number of systems, memory required, external storage, maintenance
    • Software: software licenses, software maintenance
  • Time to market: can this project be completed quickly enough to address the needs of the market? This is sometimes called “time to profit.”
  • Revenues: will the project directly or indirectly lead to increased revenues?

If the costs of doing the project outweigh the projected revenues that can be attributed to the completion of the project, the business decision-makers are likely to look for another solution which may include not doing it at all, purchasing a packaged software product that will solve the problem in a general way, or subscribing to an online service that will address the issue.

In the end, business decision-makers will be focused on increasing the organization’s revenues and decreasing its costs.

What Developers Think About

Developers, on the other hand, tend to think more about the technical problem in front of them and how it can be solved.

What Needs to Be Accomplished

Often, a developer’s first consideration is to fully understand what needs to be accomplished to address the situation. It is quite possible that the developers will be unable to focus on the issues in a way that takes into account the needs of the whole organization. This siloed perspective sometimes results in several business units solving the same problem in different, and sometimes incompatible, ways.

How It Can Be Accomplished

The next consideration for developers is how a solution can be accomplished. Developers are very busy people and need to get things done quickly and efficiently. This often means that they select the development tools and methodology they are most familiar with rather than casting about to discover new, and potentially better, approaches. The result is that, from an outsider’s perspective, developers will select the same tool regardless of whether it is the best one for the job. As Abraham Maslow pointed out, “I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail” (“The Psychology of Science,” 1966).

How To Systematize or Automate Solutions

Developers also have a tendency to focus on how to systematize or automate the approach to a solution. Developers who have experience introducing new systems will consider not only how to accomplish this difficult task, but also whether the current manual processes have some merit.

Costs Are Often Ignored or Secondary to Other Considerations

Developers often do not have access to reports showing the overall costs, the investment required, or even the revenues of a given project. Since they are busy working on projects, they often don’t think about those factors at all. This situation, by the way, is the root of many communication challenges developers face when attempting to persuade business decision-makers to approve a project: they simply don’t have all of the data they need.

I’m reminded of a conversation with a CFO of my company who didn’t understand the need for a different type of database than the one the company used for another purpose. At first I thought of him as “a man who knows the price of everything and the value of nothing,” to quote Oscar Wilde.

After thinking about his comments, I built a different justification that focused on speaking to him in his own language by discussing the project in terms of the investment required, the costs that were going to be incurred, and the revenue potential the new approach would provide. It took some work to obtain that information, but it was worth the effort in the end.

It was only after a longer conversation with the CFO that he began to be able to understand why Lotus Notes wasn’t the best tool for the creation of a transaction-oriented system for research and analysis.

Are you speaking to your business decision-makers using acronyms, development procedures and the names of open source projects you’d like to deploy? If so, you’re not helping your cause.

Where to Start

A good place to start is to think in terms of where and how money can be saved, where and how previous investments can be enhanced or reused rather than being discarded, and how your proposed project would result in increased opportunities for revenue.

It would also be wise to offer a vision of how the use of containers will help the organization achieve its overall goals, including factors such as:

  • Scaling to address the needs of a larger, or at least a new, market
  • Reducing overall IT costs
  • Allowing the organization to adapt quickly to a changing environment and take advantage of emerging opportunities
  • Developing new products or services quickly
  • Reaching new customers while maintaining relationships with today’s customer base

For Many Companies Adoption of Containers Must Be Carefully Justified

The move to a container-based environment is one of those journeys that developers easily understand as beneficial but that can be challenging to justify to a business decision-maker.

After all, some things aren’t fully known until they’ve been done at least once. So, quantifying investments required, cost savings that will be realized, and the actual size of revenue increases can be difficult.

What can be said is that adopting containers can reduce costs and risk by supporting rapid, inexpensive prototyping of solutions. Pointing out that this prototyping can be done on inexpensive cloud computing services, rather than by acquiring new systems, will help them understand that you are focused on meeting your objectives while still helping the organization keep costs under control. Tell the business decision-makers that this approach also offers them a choice in the future: once something is developed, documented, and proven to do the job, it can either stay where it is or be moved in-house, depending on which is the better overall business decision.

Where Can Containers Help a Company Reduce Costs?

Developers understand that being able to decompose a problem into smaller, more manageable problems can improve their efficiency, reduce their time-to-solution, and make reuse of code and services easier.

Reducing the Number of Operating System Instances to Maintain

Explain that containerized applications need fewer copies of operating systems compared to virtual machine technology, as well as less processor power, less system memory, and less external storage. Developers can speak in terms of reduced system requirements and the direct savings they produce, which business decision-makers can appreciate.

A few related factors are helpful to bring up as well. This approach reduces the number of software licenses that are required and the cost of software maintenance agreements.

Increasing the Amount of Useful Work Systems Can Accomplish

Since the systems won’t be carrying the heavy weight of unneeded operating systems for each application component or service, performance should be improved. After all, switching from one container to another is much faster than switching from one VM to another. There is no need to roll huge images into and out of storage.

Improving Productivity

Since productivity is important to most organizations, show that a move to containers is a great foundation for the use of a rapid application development and deployment (DevOps) strategy. By decomposing applications into functions, application development can be faster because functions are easier to build, document, and support. This should result in lower development costs while improving overall time to solution.

This approach also can reduce the time to deployment because functions can be developed in parallel by smaller independent teams.

Improving Application Capabilities

Adopting a container-based approach provides a number of other benefits that should be mentioned as well, including:

  • Container management and automation functions are improving all the time, which should result in lower costs of administration and operations
  • Container workload management and migration technology is also improving all the time, which should result in higher levels of application availability, higher levels of performance, and fewer losses due to downtime
  • Decomposing applications into independent functions and services also makes them easier to develop and maintain, which should reduce the costs of development, support, and operations

Facilitating a Move to the Cloud

Most business decision-makers have read about cloud computing, but don’t really understand how it can be adopted. Help them understand that the adoption of containers can facilitate the organization’s ability to deploy functions or complete applications locally, in the cloud, or in a combined hybrid environment, quickly and easily.

So, the answer to the question of whether to move to the cloud or continue on-premise computing is “yes, both.”

Reducing Time to Profit

When the business decision-maker begins to understand the business benefits of containerization, they’ll also see that this approach not only can reduce the overall time to market for applications, but, more importantly, it can reduce the time to profit. Lower development and support costs combined with rapid development can lead to quicker streams of revenue and profit.

Establishing a Foundation for the Future

It is also helpful for the business decision-maker to understand that one of your goals is establishing a platform for the future. Containers are supported in many different computing environments and by many different suppliers, and the organization benefits from that broad support.

Some of those benefits are:

  • Containerized functions can be used as part of many applications without having to be rearchitected or redeveloped
  • They can be enhanced or updated as needed without requiring other unrelated functions to be changed
  • Support of the application can be easier and less costly
  • Scalability is improved since the same functions can be run in multiple places with the help of workload management technology

How Can Containers Help a Company Increase Revenue?

A key question to consider is how adopting Containers can help the company increase its revenues. There are a number of elements that directly and indirectly address that question.

Since applications can be developed more quickly, perform better, and be supported more easily, the organization can address a rapidly changing business and regulatory environment more effectively. This also means that the organization can capture additional market share from organizations that continue to rely on older approaches to information systems.

It also means that the organization can conduct experiments and prototype solutions quickly, so it can succeed or fail faster and organizational learning is accelerated.

Where an application or its components execute is flexible. A successful solution can run locally, in the cloud, or in both places as needed. Business decision-makers usually appreciate flexible solutions that don’t impose extra costs.

This approach also ensures that the resulting solutions can scale from small to large as needed, so organizations can feel more comfortable trying something new, knowing that if it succeeds, it can be put into production effectively. Business decision-makers are often encouraged by approaches that allow a low initial investment, with opportunities for growth as revenues increase, rather than forcing a heavy investment up front. This means the organization is exposed to lower levels of risk.

Summary

Adopting a container-focused approach can be beneficial to both technical and business decision-makers because it addresses the needs for rapid and effective solution development and reduction in overall costs and risks. It also results in a foundation for future growth and the ability to address a changing market.

This approach brings greater complexity along with it, but the benefits outweigh the challenges in many environments. The rapid improvement in container management and automation, along with strong industry support for this approach, makes it a safer choice.

If developers focus on helping business decision-makers understand how this approach also facilitates lower costs, improved time to market, and time to profit, the business side is likely to get on board more quickly. They are likely to appreciate the reduced costs of solution support, operations, and development. They are also likely to be pleased that future investment can be based on revenue production rather than having to invest up front based on a rosy forecast of future revenues.

Developing a Strategy for Kubernetes Adoption

Like containers, Kubernetes sits at the intersection of DevOps and ITOps, and many organizations are trying to figure out key questions such as: who should own Kubernetes, how many clusters to deploy, how to deliver it as a service, how to build a security policy, and how much standardization is critical for adoption. Rancher co-founder Shannon Williams discusses these questions and more in the free online class Building an Enterprise Kubernetes Strategy.


Rancher 2.2 Hits the GA Milestone

Tuesday, 26 March, 2019
Expert Training in Kubernetes and Rancher
Join our free online training sessions to learn more about Kubernetes, containers, and Rancher.

We released version 2.2.0 of Rancher today, and we’re beyond excited. The latest release is the culmination of almost a year’s work and brings new features to the product that will make your Kubernetes installations more stable and easier to manage.

When we released Preview 1 in December and Preview 2 in February, we
covered their features extensively in blog articles, meetups, videos,
demos, and at industry events. I won’t make this an article that
rehashes what others have already written, but in case you haven’t seen
the features we’ve packed into this release, I’ll do a quick recap.

Rancher Global DNS

There’s a telco concept of the “last mile,” which is the final
communications link between the infrastructure and the end user. If
you’re all in on Kubernetes, then you’re using tools like CI/CD or some
other automation to deploy workloads. Maybe it’s only for testing, or
maybe your teams have full control over what they deploy.

DNS is the last mile for Kubernetes applications. No one wants to deploy
an app via automation and then go manually add or change a DNS record.

Rancher Global DNS solves this by provisioning and maintaining an
external DNS record that corresponds to the IP addresses of the
Kubernetes Ingress for an application. This, by itself, isn’t a new
concept, but Rancher will also do it for applications deployed to
multiple clusters.

Imagine what this means. You can now deploy an app to as many clusters
as you want and have DNS automatically update to point to the Ingress
for that application on all of them.
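The reconciliation at the heart of such a feature can be sketched in a few lines. This is hypothetical, illustrative Python, not Rancher's actual implementation or API: gather the Ingress addresses each cluster reports for an app, then derive the single set of DNS records the provider should hold.

```python
# Hypothetical sketch of multi-cluster DNS reconciliation: the cluster
# data, record format, and example IPs below are illustrative only.
def desired_records(fqdn, clusters):
    """Return the A-records the DNS provider should hold for fqdn."""
    ips = sorted({ip for c in clusters for ip in c["ingress_ips"]})
    return [{"name": fqdn, "type": "A", "value": ip} for ip in ips]

clusters = [
    {"name": "us-east", "ingress_ips": ["203.0.113.10"]},
    {"name": "eu-west", "ingress_ips": ["198.51.100.7", "198.51.100.8"]},
]
records = desired_records("app.example.com", clusters)
for r in records:
    print(r["name"], "->", r["value"])
```

A real controller would run this kind of comparison continuously, diffing the desired record set against the provider's current state and applying only the changes, so adding or removing a cluster updates DNS automatically.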

Rancher Cluster BDR

This is probably my favorite feature in Rancher 2.2. I’m a huge fan of
backup and disaster recovery (BDR) solutions. I’ve seen too many things
fail, and when I know I have backups in place, failure isn’t a big deal.
It’s just a part of the job.

When Rancher spins up a cluster on cloud compute instances, vSphere, or
via the Custom option, it deploys Rancher Kubernetes Engine (RKE).
That’s the CNCF-certified Kubernetes distribution that Rancher
maintains.

Rancher 2.2 adds support for backup and restore of the etcd datastore
directly into the Rancher UI/API and the Kubernetes API. It also adds
support for S3-compatible storage as the endpoint, so you can
immediately get your backups off of the hosts without using NFS.

When the unthinkable happens, you can restore those backups directly
into the cluster via the UI.

You’ve already been making snapshots of your cluster data and moving
them offsite, right? Of course you have. But just in case you haven’t,
it’s now so easy to do that there’s no reason not to.

Rancher Advanced Monitoring

Rancher has always used Prometheus for monitoring and alerts. This
release enables Prometheus to reach even further into Kubernetes and
deliver even more information back to you. One of the flagship features
in Rancher is single-cluster multi-tenancy, where one or more users have
access to a Project and can only see the resources within that Project,
even if there are other users or other Projects on the cluster.

Rancher Advanced Monitoring deploys Prometheus and Grafana in a way that
respects the boundaries of a multi-tenant environment. Grafana installs
with pre-built cluster and Project dashboards, so once you check the box
to activate the advanced metrics, you’ll be looking at useful graphs a
few minutes later.

Rancher Advanced Monitoring covers everything from the cluster nodes to
the Pods within each Project, and if your application exposes its own
metrics, Prometheus will scrape those and make them available for you to
use.

Multi-Cluster Applications

Rancher is built to manage multiple clusters. It has a strong
integration with Helm via the Application Catalog, which takes Helm’s
key/value YAML and turns it into a form that anyone can use.

In Rancher 2.2 the Application Catalog also exists at the Global level,
and you can deploy apps via Helm simultaneously to multiple Projects in
any number of clusters. This saves a tremendous amount of time for
anyone who has to maintain applications in different environments,
particularly when it’s time to upgrade all of those applications.
Rancher will batch upgrades and rollbacks using Helm’s features for
atomic releases.

Because multi-cluster apps are built on top of Helm, they’ll work out of
the box with CI/CD systems or any other automated provisioner.

Multi-Tenant Catalogs

In earlier versions of Rancher the configuration for the Application
Catalog and any external Helm repositories existed at the Global level
and propagated to the clusters. This meant that every cluster had access
to the same Helm charts, and while that worked for most installations,
it didn’t work for all of them.

Rancher 2.2 has cluster-specific and project-specific configuration for
the Application Catalog. You can remove it completely, change what a
particular cluster or project has access to, or add new Helm
repositories for applications that you’ve approved.

Conclusion

The latest version of Rancher gives you the tools that you need for “day
two” Kubernetes operations — those tasks that deal with the management
and maintenance of your clusters after launch. Everything focuses on
reliability, repeatability, and ease of use, because using Rancher is
about helping your developers accelerate innovation and drive value for
your business.

Rancher 2.2 is available now for deployment in dev and staging environments as rancher/rancher:latest. Rancher recommends that production environments hold out for rancher/rancher:stable before upgrading, and that tag will be available in the coming days.

If you haven’t yet deployed Rancher, now is a great time to start! With two easy steps you can have Rancher up and running, ready to help you manage Kubernetes.

Join the Rancher 2.2 Online Meetup on April 3rd

To kick off this release and explain in detail each of these new, powerful features, we’re hosting an Online Meetup on April 3rd. It’s free to join and there will be live Q&A with the engineers who directly worked on the project. Get your spot here.
