Cloud Cost Management with ML-based Resource Predictions (Part II)

Wednesday, 5 January, 2022

Cloud and Kubernetes cost optimization holds huge promise, and pointing exciting new technologies and methodologies at the problem is accelerating the fulfillment of that promise. We’ve invited ProphetStor, a SUSE One partner, to share valuable insights and their experience tackling today’s cloud cost management challenges to speed your time to maximum value from your cloud and Kubernetes deployments.

In part two of this two-part series (see part one), they’ll cover continuous rightsizing along with performance and cost optimization. ~Bret

SUSE GUEST BLOG ARTICLE AUTHORED BY:
Ming Sheu, EVP Products, ProphetStor

Reducing Cost by Continuous Rightsizing

As outlined in the Guidance Framework introduced in part one of this post, once users gain visibility into spending metrics, they need to identify opportunities to reduce their monthly bills. Federator.ai provides visibility into cloud spending at different resource levels (clusters, cluster nodes, namespaces, applications, containers), which makes finding ways to reduce cost an easier task. For example, it is well documented that most containers deployed in Kubernetes clusters use far less CPU and memory than they are allocated. This points to a huge opportunity to reduce overall cloud cost by allocating the appropriate resources for each application. However, finding the right size of resources for applications is not straightforward. With Federator.ai’s predictive analytics, users receive the right recommendations on resource allocation without taking on performance risk.
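
To make the idea concrete, rightsizing in Kubernetes ultimately comes down to adjusting each container’s resource requests and limits so they track observed usage rather than generous guesses. The sketch below is purely illustrative (the workload name, image, and numbers are hypothetical, not Federator.ai output):

apiVersion: v1
kind: Pod
metadata:
  name: web-api                                  # hypothetical workload
spec:
  containers:
  - name: web-api
    image: registry.example.com/web-api:1.4.2    # placeholder image
    resources:
      requests:
        cpu: "250m"       # sized from observed usage plus a little headroom
        memory: "320Mi"
      limits:
        cpu: "500m"
        memory: "512Mi"

A workload that originally requested, say, 2 CPUs and 4Gi of memory but rarely uses a quarter of that is exactly the kind of candidate such a recommendation targets.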

An essential suggestion by the Guidance Framework when looking to reduce the cost of a deployment is to understand the utilization patterns of applications and services. Users should schedule cloud services and applications based on the utilization patterns found in the historical data collected. Federator.ai not only provides a resource utilization heatmap based on historical usage metrics, but it also provides a utilization heatmap that sheds light on how resources will be consumed in the future. The utilization patterns, whether daily, weekly, or monthly, give users a clear view of when and how many resources were used and will be used. This information helps users decide which compute instances could be scaled down or shut down during off-hours without performance impact.

The Guidance Framework also recommends right-sizing allocation-based services as an effective way of reducing cost. It is well known that many applications end up using only a small percentage of their allocated resources. When this happens, one must right-size the resources to reduce the cost. As suggested by the Guidance Framework, one must monitor resource utilization over a defined period to implement right-sizing. Federator.ai allows users to set a utilization goal and gain visibility into the potential savings from continuously right-sizing resource allocation. A good practice for controlling cost is to set a specific resource utilization goal that leaves headroom for unexpected workload increases while minimizing waste. With workload prediction, Federator.ai can facilitate continuous right-sizing of resources to meet the utilization goal.

As stated in the Guidance Framework, right-sizing is one of the most effective cost optimization best practices. In addition, Federator.ai’s automated continuous resource optimization makes matching the users’ utilization goal an easy task.

Optimizing Performance and Cost

Optimizing cloud spending goes beyond the tactical cost reduction techniques mentioned in the Reduce component of the Guidance Framework. Most public cloud service providers offer compute instances at a much lower price than the standard pay-as-you-go model. Some of these discounts come from committing to compute instances for a period of time, as with reserved instances. If one can analyze the utilization of compute instances over time, it is possible to significantly reduce the cloud bill with the use of reserved instances. Another type of cost savings can be achieved with a different type of instance: spot instances. These compute instances are preemptible, meaning they might be reclaimed by the service provider when the availability of compute capacity is low. Spot instances are offered at a much lower price than the standard pay-as-you-go model. However, applications that run on spot instances need to tolerate the possible interruption when the service provider reclaims them.

The Guidance Framework recommends that organizations look into leveraging preemptible instances to gain significant cost benefits if application workloads can adapt to their limitations and the risk of unavailability can be mitigated. Federator.ai analyzes cluster usage, forecasts future resource needs, and recommends the best combination of reserved, spot, and on-demand instances that can handle the cluster workload at an optimized cloud cost. Users can further apply additional search criteria such as the country/region of the instances and/or specific public cloud service providers.

Another valuable mechanism for optimizing cloud cost without sacrificing performance recommended by the Guidance Framework is horizontal autoscaling. Modern cloud native applications are implemented in a microservice architecture, where an application scales to handle large workloads by running enough microservice replicas (or pods). The cost of running an application, of course, depends on the total number of replicas (pods) instantiated for that application. Since application workload is dynamic, running a large number of replicas when the workload is low wastes resources. Horizontal Pod Autoscaling (HPA) in Kubernetes is a great way to adjust the number of replicas dynamically for different workloads. When HPA is done right, one can achieve significant savings when running applications. Federator.ai’s intelligent HPA takes this concept further and uses workload predictions to increase or decrease the number of replicas just in time for workload changes. This transforms HPA from a reactive approach to a proactive one.
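
For reference, a plain Kubernetes HPA reacts to observed metrics such as CPU utilization. The manifest below is a minimal, hypothetical sketch of that built-in reactive mechanism (names and thresholds are illustrative, and it does not show Federator.ai’s predictive extension):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-api          # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add replicas once average CPU passes 70%

Because this controller only acts after utilization has already crossed the threshold, there is an inherent lag; workload prediction aims to remove that lag by scaling ahead of the demand.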

Federator.ai and the SUSE Rancher Apps and Marketplace

Managing cloud costs is an essential and challenging task for organizations using cloud services to drive their business with greater efficiency. In partnership with SUSE, ProphetStor’s Federator.ai provides an effective cloud cost management solution for customers running applications on SUSE Rancher-managed clusters. Federator.ai’s ML-based cost management implements some of the most valuable recommendations from the Guidance Framework and brings tremendous value to users adopting this framework. ProphetStor Federator.ai is currently available on the SUSE Rancher Apps and Marketplace and is fully supported on both SUSE Rancher instances and Rancher open source project deployments.

For more information on Federator.ai, please visit prophetstor.com.

Ming Sheu is EVP of Products at ProphetStor with more than 25 years of experience in networking, WiFi systems, and cloud native applications. Prior to joining ProphetStor, he spent 13 years with Ruckus/CommScope developing large-scale WiFi controllers and a cloud-based network management service.

 

Harvester is now production-ready and generally available  

Tuesday, 21 December, 2021

2021 has been a memorable year for the Harvester team. In May, SUSE hosted the first virtual SUSECON, where we announced the beta release of Harvester, alongside a cast of new innovative open source projects from the SUSE Rancher engineering team. In October, for the first time in two years, we were able to meet our industry peers and the community face-to-face at KubeCon North America where we announced Harvester’s plans to integrate with our leading Kubernetes management platform SUSE Rancher.

Today, we’re closing out the year with one more major announcement – that Harvester is now production-ready and generally available for our customers and the open source community! Harvester’s highly anticipated release marks a major milestone for SUSE as it is the first brand new product release since SUSE’s acquisition of Rancher Labs and expands SUSE’s portfolio capabilities into the hyperconverged infrastructure space.

Why did SUSE build an HCI product?

This year, SUSE made a commitment to our customers and the community to help them ‘Choose Open’ and innovate across their business using open source solutions. Harvester plays an integral role in SUSE’s portfolio as it showcases our commitment to enriching the open source landscape while providing our customers and the community with valuable solutions to help them solve their infrastructure challenges.

Harvester is a natural extension to our existing strong background in container management. It takes an open, interoperable approach to hyperconverged infrastructure and addresses common challenges, including managing sprawl, siloing of teams and resource limitations faced by IT operators who need to manage modern environments comprised of both virtualized and containerized workloads.

What’s Harvester?

Harvester is a 100% free-to-use, open source modern hyperconverged infrastructure solution that is built on a foundation of cloud native solutions including Kubernetes, Longhorn and KubeVirt. It has been designed as an enterprise-ready turnkey solution that gives operators a familiar operating experience like other proprietary HCI solutions in the market.

Though built on Kubernetes, it does not require any pre-existing knowledge to operate. Its integration with SUSE Rancher gives users the ability to operate their virtualized and container workloads all within the same platform while also creating an easy, low-risk pathway for organizations looking to adopt cloud native solutions into their infrastructure modernization strategy. Learn more about the technical capabilities of Harvester in this blog by Sheng Yang, Engineering Lead for Harvester.

Image 1. Harvester as part of SUSE Rancher Console

Harvester integrates with SUSE Rancher

With today’s GA, one of the biggest milestones the Harvester engineering team has achieved this year is the integration of Harvester into the SUSE Rancher console.

As organizations look to accelerate their IT modernization journey, complexity grows rapidly as teams adopt multiple different solutions to help them manage their ever-expanding environments. Organizations now need tools that help them confidently scale their environments while efficiently managing and governing their stack. Together, Harvester and SUSE Rancher address these needs by consolidating the management of operations for virtualized and containerized workloads – all accessible in a single Rancher platform instance.

This means both Harvester and Rancher clusters can be managed side by side within Rancher’s instance, reducing operators’ need to use separate solutions between the two workloads. Users can access the Harvester UI directly from within the Rancher console. In addition, Harvester clusters also have the ability to access the same features available to Rancher clusters, including authentication, role-based access control and cluster provisioning.

Another opportunity with Harvester and Rancher is that organizations early in their modernization journey can use both open source solutions together as a low-risk pathway to adopting cloud native technology across their stack. Both solutions promote innovation by encouraging organizations to build their confidence in integrating modern technology to develop cloud native applications. For extra peace of mind, customers who need an additional helping hand can access SUSE’s support subscription available for Harvester.

Harvester’s general availability extends further than its integration with SUSE Rancher and its ability to consolidate VM and container workloads. Learn more from Robert Sirchia, Senior Technical Evangelist at SUSE, as he explores how Harvester’s cloud-native lightweight nature can be applied at the edge and also used as a platform to modernize applications.

Don’t miss the SUSE and Rancher community’s Global Online Meetup introducing Harvester on the 19th of January 2022 at 10am Pacific Time – alternatively, find a local Harvester meetup near you. Learn more about Harvester here or get started today.

Migrating SAP workloads to Google Cloud requires ‘the Power of Many’

Wednesday, 8 December, 2021

Migrating essential SAP workloads to Google Cloud requires ‘the power of many’ if you want to stay ahead of rising customer expectations and fierce competition.

The Bory Castle in historic Székesfehérvár is a fantasy-like, curious work of art, well worth exploring. What’s especially unusual is how it was built in the early 20th century. Hungarian architect and sculptor Jenő Bory created the structure as a tribute to his wife and his artistic dreams. And he did it almost single-handedly. In other words, he was the architect, the project manager, the mason, the landscaper and more.

It only took him 40 summers!

There’s much to admire about self-reliance in the artistic world. Still, it’s a luxury that enterprises and their partners can rarely afford with landmark tech projects and harsh business realities. Time is critical. So are costs, mistakes, and outcomes. And suppose you’re a Managed Service Provider (MSP). In that case, you also need repeatable, frictionless processes and reliable partnerships — so you can scale quickly and capitalize on lucrative opportunities.

Competitive landscape

Self-reliance sounds noble. But it could be your downfall if it becomes your mantra or creeps in by default. For example, let’s look at the increasing business opportunities around moving SAP workloads to Google Cloud (GCP).

Migrations occur within a fiercely competitive landscape — where MSPs struggle to compete. Going it alone with an SAP migration to GCP can be a minefield. These IT environments are complex. You need expertise in SAP Basis support, SAP infrastructure, and supported combinations of operating systems, databases, and SAP tools. Then there’s the management of the multiple SAP environments – dev, test, and production – and the need to ensure these all match. It doesn’t stop there. You’ll have to manage updates within these environments and systematically create and manage highly-available systems.

Put simply, there’s almost zero margin for error from start to finish. Any delays, costs, or poor outcomes will impact your business — further eating up internal resources and throttling your ability to grow.

But it doesn’t have to be this way.

Collaboration at the core

Self-reliance is great for artistic one-off projects and flights of fancy. But it takes collaboration and ‘the power of many’ to successfully move to cloud platforms, especially when taking SAP workloads to GCP. At SUSE, we grew up with open-source, and we have collaboration at our core — because we know it’s key to dealing with complexity and creating transformation.

Our co-innovation partnership with SAP is a great example. For more than 20 years, we’ve developed and delivered SUSE and SAP solutions together, fine-tuning them to accommodate and enhance each other’s offerings. In fact, SAP uses SUSE solutions to create SAP HANA and for use in its production environments. For instance, this partnership developed innovations like live kernel patching, high availability, and deployment automation, to name a few. Our tools help partners to deploy SAP with minimal manual effort, manage mixed Linux estates from a single dashboard, and deliver strong SLAs that deepen their relationships with customers — so they can stand out from the crowd.

SUSE is the first partner chosen by Google to offer Committed Use Discounts with GCP for SAP environments. Most importantly, we build the SUSE images for GCP, certifying every release for SAP on GCP — which helps to eliminate risks to our partners and customers.

Power of Many

‘The power of many’ may not have worked for Jenő Bory and his quirky castle, but it’s proving key to helping a wide range of partners migrate SAP customer solutions quickly and simply to Google Cloud.

Discover more about SUSE and how you can get help with migrating SAP workloads to GCP. Contact us at google@suse.com or visit https://www.suse.com/google/

(I originally published this piece on LinkedIn)

Kubewarden: Deep Dive into Policy Logging    

Monday, 22 November, 2021
Policies are regular programs. As such, they often need to log information. In general, we are used to making our programs log to standard output (stdout) and standard error (stderr).

However, policies run in a confined WebAssembly environment. For this mechanism to work as usual, Kubewarden would need to set up the runtime environment so the policy can write to the stdout and stderr file descriptors. Upon completion, Kubewarden could check them – or stream log messages as they pop up.

Given that Kubewarden uses waPC to allow intercommunication between the guest (the policy) and the host (Kubewarden – the policy-server, or kwctl if we are running policies manually), we have extended our language SDKs so that they can log messages by using waPC internally.

Kubewarden has defined a contract between policies (guests) and the host (Kubewarden) for performing policy settings validation, policy validation, policy mutation, and logging.

The waPC interface used for logging is a contract because once you have built a policy, it should be possible to run it in future Kubewarden versions. In this sense, Kubewarden keeps this contract behind the SDK of your preferred language, so you don’t have to deal with the details of how logging is implemented in Kubewarden. You simply use the logging library of choice for the language you are working with.

Let’s look at how to take advantage of logging with Kubewarden in specific languages!

For Policy Authors

Go

We are going to use the Go policy template as a starting point.

Our Go SDK provides integration with the onelog library. When our policy is built for the WebAssembly target, it will send the logs to the host through waPC. Otherwise, it will just print them on stderr – but this is only relevant if you run your policy outside a Kubewarden runtime environment.

One of the first things our policy does on its main.go file is to initialize the logger:

var (
    logWriter = kubewarden.KubewardenLogWriter{}
    logger    = onelog.New(
        &logWriter,
        onelog.ALL, // shortcut for onelog.DEBUG|onelog.INFO|onelog.WARN|onelog.ERROR|onelog.FATAL
    )
)

We are then able to use onelog API to produce log messages. We could, for example, perform structured logging with debugging level:

logger.DebugWithFields("validating object", func(e onelog.Entry) {
    e.String("name", gjson.GetBytes(payload, "request.object.metadata.name").String())
    e.String("namespace", gjson.GetBytes(payload, "request.object.metadata.namespace").String())
})

Or, with info level:

logger.InfoWithFields("validating object", func(e onelog.Entry) {
    e.String("name", gjson.GetBytes(payload, "request.object.metadata.name").String())
    e.String("namespace", gjson.GetBytes(payload, "request.object.metadata.namespace").String())
})

What happens under the covers is that our Go SDK sends every log event to the Kubewarden host through waPC.

Rust

Let’s use the Rust policy template as our guide.

Our Rust SDK implements an integration with the slog crate. This crate exposes the concept of drains, so we have to define a global drain that we will use throughout our policy code:

use kubewarden::logging;
use slog::{o, Logger};
lazy_static! {
    static ref LOG_DRAIN: Logger = Logger::root(
        logging::KubewardenDrain::new(),
        o!("some-key" => "some-value") // This key value will be shared by all logging events that use
                                       // this logger
    );
}

Then, we can use the macros provided by slog to log at different levels:

use slog::{crit, debug, error, info, trace, warn};

Let’s log an info-level message:

info!(
    LOG_DRAIN,
    "rejecting resource";
    "resource_name" => &resource_name
);

As with the Go SDK implementation, our Rust implementation of the slog drain sends these logging events to the host by using waPC.

You can read more about slog here.

Swift

We will be looking at the Swift policy template for this example.

As with the Go and Rust SDKs, the Swift SDK is instrumented to use Swift’s LogHandler from the swift-log project, so our policy only has to initialize it. In our Sources/Policy/main.swift file:

import kubewardenSdk
import Logging

LoggingSystem.bootstrap(PolicyLogHandler.init)

Then, in our policy business logic, under Sources/BusinessLogic/validate.swift we can log with different levels:

import Logging

public func validate(payload: String) -> String {
    // ...

    logger.info("validating object",
        metadata: [
            "some-key": "some-value",
        ])

    // ...
}

Following the same strategy as the Go and Rust SDKs, the Swift SDK can push log events to the host through waPC.

For Cluster Administrators

Being able to log from within a policy is half of the story. Then, we have to be able to read and potentially collect these logs.

As we have seen, Kubewarden policies support structured logging that is then forwarded to the component running the policy. Usually, this is kwctl if you are executing the policy in a manual fashion, or policy-server if the policy is running in a Kubernetes environment.

Both kwctl and policy-server use the tracing crate to produce log events, either the events produced by the application itself or by policies running in WebAssembly runtime environments.

kwctl

The kwctl CLI tool takes a very straightforward approach to logging from policies: it will print them to the standard error file descriptor.

policy-server

The policy-server supports different log formats: json, text, and otlp.

otlp? I hear you ask. It stands for OpenTelemetry Protocol. We will look into that in a bit.

If the policy-server is run with the --log-fmt argument set to json or text, the output will be printed to the standard error file descriptor in JSON or plain text formats. These messages can be read using kubectl logs <policy-server-pod>.

If --log-fmt is set to otlp, the policy-server will use OpenTelemetry to report logs and traces.

OpenTelemetry

Kubewarden is instrumented with OpenTelemetry, so it’s possible for the policy-server to send trace events to an OpenTelemetry collector by using the OpenTelemetry Protocol (otlp).

Our official Kubewarden Helm Chart has certain values that allow you to deploy Kubewarden with OpenTelemetry support, reporting logs and traces to, for example, a Jaeger instance:

telemetry:
  enabled: True
  tracing:
    jaeger:
      endpoint: "all-in-one-collector.jaeger.svc.cluster.local:14250"

This functionality closes the gap on logging and tracing, given the flexibility the OpenTelemetry collector gives us in deciding what to do with these logs and traces.

You can read more about Kubewarden’s integration with OpenTelemetry in our documentation.

But this is a big enough topic on its own and worth a future blog post. Stay logged!


Is Cloud Native Development Worth It?    

Thursday, 18 November, 2021
The ‘digital transformation’ revolution across industries enables businesses to develop and deploy applications faster and simplify the management of such applications in a cloud environment. These applications are designed to embrace new technological changes with flexibility.

The idea behind cloud native app development is to design applications that leverage the power of the cloud, take advantage of its ability to scale, and quickly recover in the event of infrastructure failure. Developers and architects are increasingly using a set of tools and design principles to support the development of modern applications that run on public, private, and hybrid cloud environments.

Cloud native applications are developed based on microservices architecture. At the core of the application’s architecture, small software modules, often known as microservices, are designed to execute different functions independently. This enables developers to make changes to a single microservice without affecting the entire application. Ultimately, this leads to a more flexible and faster application delivery adaptable to the cloud architecture.

Frequent changes and updates to the infrastructure are possible thanks to containerization, virtualization, and several other aspects that make the entire application development process cloud native. But the real question is: is cloud native application development worth it? Are there actual benefits achieved when enterprises adopt cloud native development strategies over the legacy technology infrastructure approach? In this article, we’ll dive deeper to compare the two.

Should You Adopt a Cloud Native over a Legacy Application Development Approach?

Cloud computing is becoming more popular among enterprises offering their technology solutions online. More tech-savvy enterprises are deploying game-changing technology solutions, and cloud native applications are helping them stay ahead of the competition. Here are some of the major feature comparisons of the two.

Speed

While customers operate in a fast-paced, innovative environment, frequent changes and improvements to the infrastructure are necessary to keep up with their expectations. To keep up with these developments, enterprises must have the proper structure and policies to conveniently improve or bring new products to market without compromising security and quality.

Applications built to embrace cloud native technology enjoy the speed at which their improvements are implemented in the production environment, thanks to the following features.

Microservices

Cloud native applications are built on a microservices architecture. The application is broken down into a series of independent modules or services, with each service using an appropriate technology stack and its own data. Communication between modules is often done over APIs and message brokers.

Microservices make it possible to frequently improve the code to add new features and functionality without interfering with the rest of the application infrastructure. Their isolated nature makes it easier for new developers on the team to comprehend the code base and contribute faster. This approach increases the speed and flexibility with which improvements are made to the infrastructure. In comparison, an application built on a monolithic architecture sees new features and enhancements pushed to production far more slowly. Monolithic applications are complex and tightly coupled, meaning slight code changes must be harmonized to avoid failures. As a result, this slows down the deployment process.

CI/CD Automation Concepts

The speed at which applications are developed, deployed, and managed has primarily been attributed to adopting Continuous Integration and Continuous Delivery (CI/CD).

New code changes move through an automated checklist in a CI/CD pipeline, where tests verify that application standards are met before the changes are pushed to a production environment.

When implemented on cloud native applications architecture, CI/CD streamlines the entire development and deployment phases, shortening the time in which the new features are delivered to production.

Implementing CI/CD greatly improves productivity in organizations to everyone’s benefit. Automated CI/CD pipelines make deployments predictable, freeing developers from repetitive tasks to focus on higher-value work.
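
As a purely illustrative sketch (assuming GitHub Actions as the CI system and a hypothetical image name; any CI/CD tool follows the same pattern), an automated pipeline typically tests, builds, and publishes a container image on every push:

# .github/workflows/ci.yaml -- hypothetical pipeline
name: ci
on:
  push:
    branches: [main]
jobs:
  build-test-publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run unit tests
        run: make test                     # placeholder test command
      - name: Build container image
        run: docker build -t registry.example.com/web-api:${{ github.sha }} .
      - name: Push container image
        env:
          REGISTRY_USER: ${{ secrets.REGISTRY_USER }}
          REGISTRY_PASS: ${{ secrets.REGISTRY_PASS }}
        run: |
          docker login registry.example.com -u "$REGISTRY_USER" -p "$REGISTRY_PASS"
          docker push registry.example.com/web-api:${{ github.sha }}

Every change that reaches the main branch runs through the same checklist, so a failing test stops the change before it can reach production.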

On-demand infrastructure Scaling

Enterprises should opt for cloud native architecture over traditional application development approaches to easily provision computing resources to their infrastructure on demand.

Rather than having IT support applications based on estimates of what infrastructure resources are needed, the cloud native approach promotes automated provisioning of computing resources on demand.

This approach helps applications run smoothly by continuously monitoring the health of your infrastructure for workloads that would otherwise fail.

The cloud native development approach is based on orchestration technology that provides developers insights and control to scale the infrastructure to the organization’s liking. Let’s look at how the following features help achieve infrastructure scaling.

Containerization

Cloud native applications are built based on container technology where microservices, operating system libraries, and dependencies are bundled together to create single lightweight executables called container images.

These container images are stored in an online registry catalog for easy access by the runtime environment and developers making updates on them.

Microservices deployed as containers should be able to scale in and out, depending on the load spikes.

Containerization promotes portability by ensuring the executable packaging is uniform and runs consistently across the developer’s local and deployment environments.

Orchestration

Let’s talk orchestration in cloud native application development. Orchestration automates deploying, managing, and scaling microservice-based applications in containers.

Container orchestration tools work from user-created specifications (YAML or JSON files) that describe the desired state of your application. Once your application is deployed, the orchestration tool uses the defined specification to manage the containers throughout their lifecycle.
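
To illustrate what such a desired-state file looks like, here is a minimal, hypothetical Kubernetes Deployment (the service name and image are placeholders): the orchestrator continuously compares the running containers against this specification and starts or replaces pods until they match.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: billing-service            # hypothetical microservice
spec:
  replicas: 4                      # desired state: keep four pods running
  selector:
    matchLabels:
      app: billing-service
  template:
    metadata:
      labels:
        app: billing-service
    spec:
      containers:
      - name: billing-service
        image: registry.example.com/billing-service:2.0.1   # placeholder image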

Auto-Scaling

Automating cloud native workflows ensures that the infrastructure provisions itself automatically when it needs more resources. Health checks and auto-healing features are built into the infrastructure during development to ensure that it runs smoothly without manual intervention.

Because of this, you are less likely to encounter service downtime. The infrastructure automatically detects an increase in workload that would otherwise result in failure and scales out to healthy machines.
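
As a hedged illustration of those health checks (the endpoints and timings below are hypothetical), Kubernetes expresses auto-healing through liveness and readiness probes on each container; a container that fails its liveness probe is restarted automatically, and one that fails its readiness probe stops receiving traffic. In practice these probes would sit in the pod template of a Deployment like the one sketched earlier:

apiVersion: v1
kind: Pod
metadata:
  name: billing-service
spec:
  containers:
  - name: billing-service
    image: registry.example.com/billing-service:2.0.1   # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz          # hypothetical health endpoint
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 15
    readinessProbe:
      httpGet:
        path: /ready            # hypothetical readiness endpoint
        port: 8080
      periodSeconds: 5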

Optimized Cost of Operation

Developing cloud native applications eliminates the need for hardware data centers that would otherwise sit idle at any given point. The cloud native architecture enables a pay-per-use service model where organizations only pay for the services they need to support their infrastructure.

Opting for a cloud native approach over a traditional legacy system optimizes the cost incurred that would otherwise go toward maintenance. These costs appear in areas such as scheduled security improvements, database maintenance, and managing frequent downtimes. This usually becomes a burden for the IT department and can be partially solved by migrating to the cloud.

Applications developed to leverage the cloud result in optimized costs allocated to infrastructure management while maximizing efficiency.

Ease of Management

Cloud native service providers have built-in features to manage and monitor your infrastructure effortlessly. A good example, in this case, is serverless platforms like AWS Lambda and  Azure Functions. These platforms help developers manage their workflows by providing an execution environment and managing the infrastructure’s dependencies.

This removes uncertainty about the dependency versions and configuration settings required to run the infrastructure. Developing applications that run on legacy systems requires developers to update and maintain the dependencies manually. Eventually, this becomes a complicated practice with no consistency. Instead, the cloud native approach makes collaborating easier without having the “this application works on my system but fails on another machine” discussion.

Also, since the application is divided into smaller, manageable microservices, developers can easily focus on specific units without worrying about interactions between them.

Challenges

Unfortunately, there are challenges to ramping up users to adopt the new technology, especially for enterprises with long-standing legacy applications. This is often a result of infrastructure differences and complexities faced when trying to implement cloud solutions.

A perfect example to visualize this challenge would be assigning admin roles in Azure VMware Solution. The CloudAdmin role would typically create and manage workloads in your cloud, while in Azure VMware Solution, the CloudAdmin role has privileges that conflict with those in the VMware cloud solution and on-premises environments.

It is important to note that in the Azure VMware solution, the cloud admin does not have access to the administrator user account. This revokes the permission roles to add identity sources like on-premises servers to vCenter, making infrastructure role management complex.

Conclusion

Legacy vs. Cloud Native Application Development: What’s Best?

While legacy application development has always been the standard baseline for how applications are developed and maintained, the surge in computing demands has pushed for the disruption of platforms to handle this better.

More enterprises are now adopting the cloud native structure that focuses on infrastructure improvement to maximize its full potential. Cloud native at scale is a growing trend that strives to reshape the core structure of how applications should be developed.

Cloud native application development should be adopted over the legacy structure to embrace growing technology trends.

Are you struggling with building applications for the cloud?  Watch our 4-week On Demand Academy class, Accelerate Dev Workloads. You’ll learn how to develop cloud native applications easier and faster.

Introduction to Cloud Native Application Architecture    

Wednesday, 17 November, 2021
Today, it is crucial that the scalability of an organization’s applications matches its growth tempo. If you want your client’s app to be robust and easy to scale, you have to make the right architectural decisions.

Cloud native applications have proven to be more efficient than their traditional counterparts and are much easier to scale thanks to containerization and running in the cloud.

In this blog, we’ll talk about what cloud native applications are and what benefits this architecture brings to real projects.

What is Cloud Native Application Architecture?

Cloud native is an approach to building and running apps that use the cloud. In layman’s terms, companies that use cloud native architecture are more likely to create new ideas, understand market trends and respond faster to their customers’ requests.

Cloud native applications are tied to the underlying infrastructure needed to support them. Today, this means deploying microservices through containers to dynamically provision resources according to user needs.

Each microservice can independently receive and transmit data through service-level APIs. Although not strictly required for an application to be considered “cloud native,” microservices are a natural fit for running applications in the cloud thanks to their modularity, portability, and granular resource management.

Scheme of Cloud Native Application

Cloud native application architecture consists of a frontend and a backend. 

  • The client-side or frontend is the application interface available for the end-user. It has protocols and ports configured for user-database access and interaction. An example of this is a web browser. 
  • The server-side or backend refers to the cloud itself. It consists of resources providing cloud computing services. It includes everything you need, like data storage, security, and virtual machines.

All applications hosted on the backend cloud server are protected by built-in security, traffic management, and protocols. These protocols act as intermediaries, or middleware, that establish successful communication between the two sides.

What Are the Core Design Principles of Cloud Native Architecture?

To create and use cloud native applications, organizations need to rethink the approach to the development system and implement the fundamental principles of cloud native.

DevOps

DevOps is a cultural framework and environment in which software is created, tested, and released faster, more frequently, and consistently. DevOps practices allow developers to shorten software development cycles without compromising on quality.

CI/CD

Continuous integration (CI) is the automation of code change integration when numerous contributions are made to the same project. CI is considered one of the main best practices of DevOps culture because it allows developers to merge code more frequently into the central repository, where changes are subject to builds and tests.

Continuous delivery (CD) is the process of constantly releasing updates, often through automated delivery. Continuous delivery makes the software release process reliable, and organizations can quickly deliver individual updates, features, or entire products.

Microservices

Microservices are an architectural approach to developing an application as a collection of small services; each service implements a business capability, runs in its own process, and communicates through its own API.

Each microservice can be deployed, upgraded, scaled, and restarted independently of other services in the same application, usually as part of an automated system, allowing frequent updates to live applications without impacting customers.

Containerization

Containerization is a software virtualization technique conducted at the operating system level and ensures the minimum use of resources required for the application’s launch.

Using virtualization at the operating system level, a single OS instance is dynamically partitioned into one or more isolated containers, each with a unique writeable file system and resource quota.

The low overhead of creating and deleting containers and the high packing density in a single VM make containers an ideal computational tool for deploying individual microservices.

Benefits of Cloud Native Architecture

Cloud native applications are built and deployed quickly by small teams of experts on platforms that provide easy scalability and hardware decoupling. This approach provides organizations greater flexibility, resiliency, and portability in cloud environments.

Strong Competitive Advantage

Cloud-based development is a transition to a new competitive environment with many convenient tools, no capital investment, and the ability to manage resources in minutes. Companies that can quickly create and deliver software to meet customer needs are more successful in the software age.

Increased Resilience

Cloud native development allows you to focus on resilience tools. The rapidly evolving cloud landscape helps developers and architects design systems that remain interactive regardless of environment freezes.

Improved Flexibility

Cloud systems allow you to quickly and efficiently manage the resources required to develop applications. Implementing a hybrid or multi-cloud environment will enable developers to use different infrastructures to meet business needs.

Streamlined Automation and Transformation

The automation of IT management inside the enterprise is a springboard for the effective transformation of other departments and teams.

In addition, it eliminates the risk of disruption due to human error as employees focus on controlling routine tasks rather than performing them directly.

Automated real-time patches and updates across all stack levels eliminate downtime and the need for operational experts with “manual management” expertise.

Comparison: Cloud Native Architecture vs. Legacy Architecture

The capabilities of the cloud allow both traditional monolithic applications and data operations to be transferred to it. However, many enterprises prefer to invest in a cloud native architecture from the start. Here is why:

Separation of Computation and Data Storage Improves Scalability

Datacenter servers are usually connected to direct-attached storage (DAS), which an enterprise can use to store temporary files, images, documents, or data for other purposes.

Relying on this model is dangerous because processing power needs can rise and fall very differently than storage needs. The cloud enables object storage such as AWS S3 or ADLS, which can be purchased, optimized, and managed separately from computing requirements.

This way, you can easily add thousands of new users or expand the app’s functionality.

Cloud Object Storage Gives Better Adaptability

Cloud providers are under competitive pressure to improve and innovate in their storage services. Application architects who monitor closely and quickly adapt to these innovations will have an edge over competitors who have taken a wait-and-see attitude.

Alongside proprietary solutions, there are also many open source, cloud computing software projects like Rancher.

This container management platform provides users with a complete software stack that facilitates Kubernetes cluster management in a private or public cloud.

Cloud Native Architecture is More Reliable

The obvious advantage for companies that have adopted a cloud native approach is the focus on agility, automation, and simplification.

For complex IT or business functions, survival depends on how well their services are engineered. You also need error protection to improve user productivity, with increased levels of automation, built-in predictive intelligence, or machine learning helping to keep your environment running optimally.

Cloud Native Architecture Makes Inter-Cloud Migration Easy

Every cloud provider has its cloud services (e.g., data warehousing, ETL, messaging) and provides a rich set of ready-made open source tools such as Spark, Kafka, MySQL, and many others.

While it sounds bold to say that using open source solutions makes it easy to move from one cloud to another, if cloud providers offer migration options, you won’t have to rewrite a significant part of the existing functionality.

Moreover, many IT architects see the future in the multi-cloud model, as many companies already deal with two or more cloud providers.

If your organization can skillfully use cloud services from different vendors, then the ability to determine the advantage of one cloud over another is good groundwork for the future justification of your decision.

Conclusion

Cloud native application architecture provides many benefits. This approach automates and integrates the concepts of continuous delivery, microservices, and containers for enhanced quality and streamlined delivery.

Applications that are built as cloud native can offer virtually unlimited computing power on demand. That’s why more and more developers today are choosing to build and run their applications as cloud native.

Want to make sure you don’t miss any of the action? Join the SUSE & Rancher Community to get updates on new content coming your way!

To survive, the energy sector needs the right kind of transformation

Tuesday, 16 November, 2021

With the shift to a low-carbon future, the energy sector cannot carry on with business as usual. Climate change demands a radical transformation, in which digitalization will play an important role. But how should companies approach this transformation when they have seen no benefit from it so far?

Energy markets are becoming less and less predictable. With fiercer competition, increased complexity and tighter regulatory oversight, technological change is now essential to improve efficiency and meet tomorrow’s greater capacity needs.

The UK’s Department for Business, Energy and Industrial Strategy estimates that by 2050, improving flexibility through digitalization could reduce overall costs by up to $14 billion per year.

Profits are under pressure across the sector, so the margin for error is shrinking fast. A company’s survival likely depends on its ability to embrace digitalization.

McKinsey explains: “Energy companies have failed to generate much business value from digital because their approaches do not account for the unique challenges they face, which creates extraordinary inertia.”

“Breaking this inertia requires much bolder action than energy companies have taken to date. They must commit to transformation.”

Commitment is one thing, but knowing what to transform is just as important. So what should energy companies do to reduce operating costs and get ahead in the global race to electrification?

Data analytics is fundamental. Understanding customers makes it possible to create products and services that meet demand. Accelerated adoption requires partnerships around new ecosystems that facilitate the exchange of services, data and information through the optimized use of open source technologies.

Energy companies also need to acquire new skills and know-how. That means developing a talent pipeline built around exciting new applications of technology.

Once these elements are in place, the inertia McKinsey describes can be replaced by the digitalization of production, procurement, maintenance and customer interaction. Suppliers can become truly customer-centric, easily retaining and building loyal customer bases with opportunities for personalization. The energy sector is less and less defined by moving fuels around, which frees up investment for new operational use cases that drive optimization and efficiency.

Artificial intelligence solutions are the final piece of the puzzle. They help navigate the growing complexity of a market flooded with alternative fuels, distributed energy sources and unpredictable demand, and they ensure a good level of disaster preparedness through capabilities such as weather event modeling. According to PwC, integrating digital technologies such as AI into the energy sector could add $5.2 trillion to global GDP and cut global carbon emissions by up to 4%.

Yet much of this depends on solutions delivered by specialist open source software partners to gain greater agility, build tooling and meet the demands of a ‘prosumer’ future. By managing all edge and embedded devices from a single location, energy companies will be able to deliver greater consistency, performance, reliability and security.

That means integrating cloud capabilities to support hybrid working environments. Customers benefit from an improved customer experience and better emergency preparedness. AI capabilities at the edge enable faster reporting, automated workflows and a shorter time to value for critical issues.

As a leader in intelligent edge computing solutions, SUSE is ideally placed to enable hybrid cloud computing, cloud native transformation and the use of SAP.

SUSE offers the most adaptable Linux operating system and the only open Kubernetes management platform. With our edge solutions, you can drive your transformation according to your own priorities, in any type of environment: multi-cloud, on-premises or hybrid cloud.

If you would like to talk to one of our experts or find out how SUSE can help you succeed in your digital transformation over the long term, please get in touch.

Refactoring Isn’t the Same for All    

Tuesday, 9 November, 2021

Cloud Native: it’s been an industry buzzword for a few years now. It holds different meanings for different people and in different contexts. While we have overused this word, it does have a place when it comes to modernizing applications.

To set the context here, we are talking about apps you would build in the cloud rather than for it. This means these apps, if modernized, would run in a cloud platform. In this post, we will discuss how “refactoring,” as Gartner puts it, isn’t the same for every app.

When we look at legacy applications sitting in data centers across the globe, some are traditional mainframes; others are “Custom off the Shelf Software” (CotS). We care about the business-critical apps we can leverage for the cloud. Some of these are CotS, and many of these applications are custom.

When it comes to the CotS, companies should rely on the vendor to modernize their CotS to a cloud platform. This is the vendor’s role, and there is little business value in a company doing it for them.

Gartner came up with the five R’s: Rehost, Refactor, Revise, Rebuild and Replace. But when we look at refactoring, it shouldn’t be the same for every app because not all apps are the same. Some are mission-critical; most of your company’s revenue is made with those apps. Some apps are used once a month to make accounting’s life easier. Both might need to be refactored, but not to the same level. When you refactor, you change the structure, architecture, and business logic. All to leverage core concepts and features of a cloud. This is why we break down refactoring into Scale of Cloud Native.

Custom apps are perfect candidates for modernization. With every custom app, modernization brings risks and rewards. Most systems depend on other technologies like libraries, subsystems, and even frameworks. Some of these dependencies are easy to modernize into a cloud platform, but not all are like this. Some pose considerable challenges that limit how much you can modernize.

If we look at what makes an app cloud native, we first have to acknowledge that this term means something different depending on who you ask; however, most of these concepts are at least somewhat universal. Some of these concepts are:

  • Configuration
  • Disposability
  • Isolation
  • Scalability
  • Logs

Outside of technical limitations, there’s the question of how much an application should be modernized. Do you go all in and rewrite an app to be fully cloud native? Or do you do the bare minimum to get the app to run in the cloud?

We delineate these levels of cloud native as Suitable, Compatible, Durable, and Native. These concepts build upon one another so that an app can be Compatible and, with some refactoring, can go to Durable.

What does all this actually mean? Well, let’s break them down based on a scale:

  • Suitable – First on the scale and the bare minimum you need to get your app running in your cloud platform. This could just be the containerization of the application, or that and a little more.
  • Compatible – Leveraging a few of the core concepts of the cloud. An app that is cloud-compatible leverages things like environmental configs and disposability. This is a step further than Suitable.
  • Durable – At this point, apps should be able to handle a failure in the system and not let it cascade, meaning the app can handle it when some underlying services are unavailable. Being Durable also means the app can start up fast and shut down gracefully. These apps are well beyond Suitable and Compatible.
  • Native – These apps leverage most, if not all, of the cloud native core concepts. Generally, this is done with brand-new apps being written in the cloud. It might not make sense to modernize an existing app to this level.

This scale isn’t absolute; as such, different organizations may use different scales. A scale is important to ensure you are not over or under-modernizing an app.

When starting any modernization effort, collectively set the scale. This should be done organizationally rather than team-by-team. When it comes to budget and timing, making sure that all teams use the same scale is critical.

Learn more about this in our Webinar, App Modernization: When and How Far to Modernize. Watch the replay, Register here. 

Want to make sure you don’t miss any of the action? Join the SUSE & Rancher Community to get updates on new content coming your way!

A Complete Guide to Integrating SUSE Rancher with vSphere using Terraform on phoenixNAP

Monday, 1 November, 2021

SUSE One Partner, phoenixNAP, is a global Infrastructure as a Service (IaaS) provider with 15+ data centers and PoPs across six continents. With a goal to commoditize enterprise-grade technology and make it accessible to organizations of different sizes, phoenixNAP supports deployment of SUSE Rancher.

As phoenixNAP customer Glimpse had chosen a containerized infrastructure path, they created a container-ready infrastructure by integrating various tools like VMware vSphere, HAProxy and more with SUSE Rancher on the phoenixNAP Managed Private Cloud platform. Glimpse developed a deployment guide based on their methodology and we’ve invited phoenixNAP to author a guest blog so you can benefit from their learnings. Cool stuff!  ~Bret

SUSE GUEST BLOG AUTHORED BY:
Bojana Dobran, Product Marketing Manager at phoenixNAP

A Complete Guide to Integrating Rancher with vSphere using Terraform

Container enablement has become a priority for teams looking to accelerate software delivery timeframes. By ensuring environment consistency and increased portability, containers enable developers to move applications faster, spend less time managing infrastructure, and save on production environment costs.

The trend of massive container adoption is also present in enterprise, where most business-critical workloads run on VMware vSphere. One way to containerize such applications is to use SUSE Rancher, the only container management platform allowing for Kubernetes deployment on any infrastructure. SUSE Rancher eliminates the need for building a custom container services platform and provides organizations with a critical capability to modernize their IT.

A detailed use case for deploying Kubernetes clusters on a vSphere-based environment using SUSE Rancher is provided by developers from Glimpse, an online membership platform, running on phoenixNAP’s Managed Private Cloud (MPC).

Background on Glimpse and phoenixNAP

As a fast-growing online subscription business platform, Glimpse has used phoenixNAP’s VMware-based Managed Private Cloud (MPC) solution for several years. In its effort to adopt DevOps tools and methodologies, Glimpse was looking to deploy Kubernetes on MPC and containerize its production workloads. To achieve that, they used SUSE Rancher in combination with Terraform tools.

The full process of integrating SUSE Rancher and vSphere is documented in this guide and on phoenixNAP’s GitHub. Below is a summary of what the guide includes.

Integrating SUSE Rancher and vSphere

Enabling integration between VMware vSphere and SUSE Rancher is a multi-step process. The Glimpse team used the following tools to enable the integration:

  • VMware vSphere for infrastructure and network management
  • HAProxy for load balancing
  • Hashicorp Packer for golden image creation
  • Hashicorp Terraform for SUSE Rancher integration
  • SUSE Rancher for Kubernetes deployment

While some integration steps were relatively straightforward, others required a certain degree of customization. The first step was to allow SUSE Rancher and Terraform to access the existing vSphere environment, which required creation of dedicated users and network. DHCP in vSphere needed to be temporarily enabled to allow Hashicorp Packer builds to get an IP address, but the template should be shut down as soon as the build is completed. A recommended best practice for this step is to create a separate folder in vSphere for Rancher and Kubernetes files. In addition to this, Network Policy Profiles also need to be specified for security reasons.

Below is a table with initial requirements for the SUSE Rancher and vSphere integration on MPC.

Building a Golden Image with Hashicorp Packer

The next step in enabling SUSE Rancher and vSphere integration is the creation of a golden image. The Glimpse team chose to do this using Hashicorp Packer, as it enables programmatic creation of golden images and continuous deployment.

The image can be created using the available template builders and installed using provisioners. Resource configuration details can be defined through cloud-init, so that the image can be deployed with pre-defined dependencies automatically.
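
For orientation only, cloud-init definitions are plain YAML. A minimal, hypothetical sketch of the kind of user data that could be baked into a golden image might look like the following (the package, user, and key are placeholders, not values from the guide):

#cloud-config
# Hypothetical cloud-init user data for a golden image
package_update: true
packages:
  - docker.io                      # placeholder container runtime package
users:
  - name: rancher                  # placeholder service user
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-rsa AAAA...            # placeholder public key
runcmd:
  - systemctl enable --now docker

Because these dependencies are defined in the image itself, every VM cloned from the template comes up with the same configuration without manual steps.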

The complete guide also contains the script.sh file with different workarounds that need to be implemented for Rancher.

Provisioning Rancher Using Terraform

Glimpse used HashiCorp Terraform to provision a highly available Rancher cluster. The main.tf file in their example contains Rancher configuration such as information about providers, templates, provisioners, and vSphere environment, so it can be bootstrapped automatically.

The steps needed for provisioning include collecting essential vSphere data sources, generating templates, and creating a load balancer.

Creating a Kubernetes Cluster using Rancher

Once Rancher has been provisioned, the next step is to spin up a Kubernetes cluster using a Rancher machine. This can be done programmatically via HashiCorp Terraform, as demonstrated by the Glimpse team on MPC.

In this example, they used the backend.tf file to store the IP address of Consul. As an alternative to Consul, local storage can be used. The specific steps involve:

  • Creating API access keys for the Rancher module
  • Defining paths and templates in variables.tf file
  • Deploying master and worker nodes
  • Creating node templates

Once all the steps are completed, a Kubernetes cluster will be deployed and ready to be managed. The cluster can be easily managed through Rancher UI, which offers intuitive options for node management. In the Cluster Explorer option in SUSE Rancher, all the cluster information including time of creation, the number of resources, namespaces, etc., will be immediately visible.

With all your clusters in a single-pane-of-glass view, SUSE Rancher makes it possible to offload complex infrastructure management tasks, regardless of the platform you are using. This example of integration with vSphere is intended to help you save time on containerizing your own vSphere-based applications.

For the full guide, visit the phoenixNAP website and download your free copy.

The code for this integration is available on phoenixNAP’s GitHub.

phoenixNAP also provides a solution for simplified deployment of physical servers with pre-installed SUSE Rancher software within its Bare Metal Cloud platform. Enabling automated provisioning of dedicated servers through API, CLI, or Infrastructure as Code tools, Bare Metal Cloud helps automation-driven organizations and DevOps teams simplify their infrastructure management tasks.

As a Product Marketing Manager at phoenixNAP, Bojana helps develop and document use cases for the company’s infrastructure solutions. Her extensive experience in technical and marketing writing helps her present complex concepts in a simple way.