Speedscale for SUSE Rancher: accelerate cloud native application testing

Wednesday, 23 March, 2022

SUSE GUEST BLOG ARTICLE AUTHORED BY:

Ken Ahrens, CEO and Founder, Speedscale

Speedscale for SUSE Rancher helps you modernize the way you develop, test and deploy your cloud native application landscape so you can accelerate your application development life cycle and gain confidence in your software releases. ~ Terry

 

SUSE Rancher | Speedscale cloud native testing for accelerated DevOps

 

Continuous Kubernetes testing with traffic replay

Running Kubernetes at scale is really hard. Developing microservices apps that run well in a Kubernetes environment takes complexity to the next level. SUSE Rancher makes Kubernetes easier to use with a point-and-click web interface that simplifies the process of scaling out and managing workloads across all of your clusters – from core to cloud to edge. Combining Speedscale with SUSE Rancher gives development teams visibility into microservices to help them improve service performance and quality. By implementing traffic replay as part of continuous integration, development teams can release with confidence.

Traditional approaches to software testing are not keeping up with the trend of “continuous everything.”  According to a recent GitLab survey of developers, testing was the slowest phase of application development. This causes a gap where code is ready to be delivered to production, but teams must slow these releases with canary deployments and feature flags to ensure new changes don’t break production. 

Testing in production is a great capability, but not applicable for every release. Teams must maximize the benefits of the quality feedback from production without negatively impacting users.

 

Pros of testing in production:

  • Testing is based on real user patterns, so teams don’t have to “guess” how the app will behave in production.
  • It delivers high quality signals from the production environment, like the SRE golden signals of latency, throughput, error rate, etc.
  • It reduces the need to stand up large non-production environments for testing.

 

Cons of treating users as test subjects:

  • Code bugs can impact user experience, or in some cases cause a cascading outage.
  • Rolling back a change might corrupt data in downstream systems that will also need to be rolled back.
  • Large scale testing in production often must be limited to a small number of variations, due to the significant effort to manage and retire feature flags.

 

Continuous testing within the CI/CD pipeline enables “shifting left,” which lets teams understand the quality of new code before it impacts customers. The combination of SUSE Rancher and Speedscale lets teams use a GitOps workflow to validate new code before it ever reaches the production environment. 

Speedscale provides traffic replay capabilities that help developers discover API performance and contract issues earlier in their release cycle. Users can collect, sanitize and replay API traffic, simulate load or chaos, and measure latency, throughput, saturation and errors before the code is released. 

 

The Speedscale Operator is easily installed from the SUSE Rancher Apps & Marketplace. 

SUSE Rancher Apps & Marketplace: Speedscale Operator for a modern approach to cloud native testing

Use the operator to deploy the Speedscale control plane to any workload in your SUSE Rancher landscape to capture traffic into and out of your microservices. Then, use this data to run an isolated test environment for your application. These traffic replays can easily be created for each microservice workload in the cluster, enabling robust testing across numerous scenarios.
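
As a rough sketch of what this looks like in practice (the annotation key below is based on Speedscale’s documentation at the time of writing and should be treated as an assumption to verify; my-service is a placeholder workload name), capture is enabled by annotating a workload’s pod template so the operator injects its sidecar:

# Hedged sketch: ask the Speedscale Operator to attach its capture
# sidecar to the pods of an existing Deployment.
kubectl patch deployment my-service --type merge \
  -p '{"spec":{"template":{"metadata":{"annotations":{"sidecar.speedscale.com/inject":"true"}}}}}'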

 

Use Speedscale’s traffic viewer to understand all the inbound and outbound calls for a given microservice. This helps you see how an API is actually called and automatically discovers all back-end dependencies.

Speedscale: discover and test with back-end dependencies

 

Drill down even further to get the full-fidelity details for a particular transaction including the headers, payload and complete request and response. This kind of data is tremendously valuable to debug exactly how a specific call is being made.

Speedscale: transaction details for robust testing

 

After creating a snapshot of the desired traffic, you can easily replay it in another cluster in your SUSE Rancher landscape. The result of the replay is a report that helps you understand how each microservice behaves under realistic load conditions. View the test results report to identify and isolate latency, throughput, memory and other issues.
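
As a hedged illustration of the replay workflow (the annotation keys are again based on Speedscale’s documentation at the time of writing and should be treated as assumptions; the snapshot ID and workload name are placeholders), a replay is also driven by annotating the workload:

# Hedged sketch: replay a captured snapshot against a workload in a
# test cluster, using a standard test configuration.
kubectl annotate deployment my-service \
  replay.speedscale.com/snapshot-id=<SNAPSHOT_ID> \
  replay.speedscale.com/testconfig-id=standard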

Speedscale: testing replay report

 

Speedscale for SUSE Rancher provides you with the tools you need to rapidly stress test your cloud native applications with real-world scenarios, gain confidence in your application releases and accelerate innovation.

 

 

Missed our SUSE One Partner Solutions Showcase, “Accelerate and simplify cloud native application testing in your SUSE Rancher landscape with Speedscale”?  Catch it on-demand.

 

 

SUSE One Partner Solution Stacks

SUSE One Partner Solution Stacks are featured co-innovations by SUSE and partners that are designed to arm organizations with capabilities and agility to overcome challenges and accelerate success. 

The SUSE One Partner Solution Stacks framework is open to partners of all specializations and tiers, and streamlines how we work together to empower our customers. 

Partners can learn more in the SUSE One Partner Portal or by contacting their SUSE alliance team to explore how to create, adopt and leverage SUSE One Partner Solution Stacks to grow their business.

 

 

 

Top 3 Things To Know in 2022 About What’s New for SAP Customers on AWS

Tuesday, 8 March, 2022

This actually should have been one of those beginning-of-the-New-Year blogs, or even better, published in December following re:Invent. The topic summarizes all the oodles of goodness AWS announced that are highly relevant for any SAP customer. But for various reasons it got delayed until now. It’s still chock-full of really good information that will definitely enrich your Tuesday, or even the whole year! So read on, 🙂

First of all, this is actually a guest blog, written by our close colleagues at AWS. Let me introduce David Rocha – Sr. Partner Solutions Architect and Soumya Das – Partner Solutions Architect. David’s specialty is SUSE and everything Linux. Soumya’s specialty is everything SAP.

So without further ado, here’s their blog:

New AWS features and services for SAP on AWS customers

In the words of Amazon CEO Andy Jassy, ‘there is no compression algorithm for experience’.

Since 2010, customers have benefited from both AWS and SUSE providing products and services that give enterprise customers fast, flexible access to the cloud for a variety of workloads. SUSE first published SUSE Linux Enterprise Server (SLES) on Amazon Web Services (AWS) in 2010 and SUSE Linux Enterprise Server for SAP Applications (SLES for SAP) on AWS in 2016.

Workloads have evolved over the years and now include mission-critical applications like SAP. SAP applications represent the financial system of record and business process backbone for most of the world’s enterprises. SAP states that 77% of the world’s transaction revenue touches an SAP system and 91% of the Forbes Global 2000 are SAP customers. AWS and SUSE supported applications include SAP ERP Central Component (SAP ECC), SAP HANA Database, SAP S/4HANA and all its associated platforms like SAP S/4 Central Finance, SAP MDG (Master Data Governance), or SAP CAR (Customer Activity Repository).

AWS is the choice for 5000+ SAP customers and hundreds of partners. Whether you want to lift and shift your existing SAP workloads to reduce costs, re-factor your SAP ECC to SAP S/4HANA, or innovate and modernize with AWS services, you can count on AWS’s unmatched experience, infrastructure, and platform breadth to get more value out of your SAP investments.

Let’s dive into some of the new announcements for 2021 – 2022:

New Instances

A number of new instances were released for SAP workloads including recent additions that were announced during re:Invent 2021 [1]:

  • Amazon Elastic Compute Cloud R6i instances and Amazon EC2 M6i instances are powered by the latest generation Intel Xeon Scalable processors (code-named Ice Lake). The newer instance families provide up to 15% better compute price performance, up to 20% higher memory bandwidth per vCPU, up to 50 Gbps of networking speed and up to 40 Gbps of bandwidth to Amazon EBS compared to the previous generation instances. [2]
  • AWS recently announced Amazon EC2 M6a instances, which are powered by 3rd generation AMD EPYC processors (code-named Milan), the latest AMD processor certified to run SAP workloads. [3]
  • Amazon EC2 X2idn and X2iedn instances are SAP-certified and are a great fit for workloads such as small- to large-scale traditional and in-memory databases, and analytics. These instances are powered by 3rd generation Intel Xeon Scalable processors with an all-core turbo frequency of up to 3.5 GHz and deliver up to 50% higher compute price performance than comparable Amazon EC2 X1 instances. Additionally, they provide 45% more SAPS than comparable Amazon EC2 X1 instances. [4]
  • Four new EC2 High Memory instances were launched in May 2021. The new instances with 6TB, 9TB, and 12TB of memory (u-6tb1.56xlarge, u-6tb1.112xlarge, u-9tb1.112xlarge, and u-12tb1.112xlarge) are available with On-Demand and Savings Plan purchase options, giving customers greater flexibility for instance usage and procurement. [5]

New Tooling

AWS Launch Wizard now supports SAP S/4HANA 2021, SAP BW/4HANA 2021, and SAP HANA SPS06 deployments running on SUSE Linux Enterprise Server (SLES) and SUSE Linux Enterprise Server for SAP Applications 15 SP3 (SLES for SAP). AWS Launch Wizard offers customers and partners a way of sizing, configuring, and deploying AWS resources for SAP S/4HANA or SAP BW/4HANA. This launch makes it easy for customers to deploy and scale these applications on the latest SUSE Linux versions in accordance with AWS, SAP, and SUSE best practices. [6]

AWS announced CloudWatch Application Insights for SAP HANA installed on SLES and SLES for SAP 15 or later. This allows CloudWatch Application Insights to analyze metric patterns using historical data to detect anomalies, and to continuously track errors and exceptions from SAP HANA, operating system, and infrastructure logs. The service creates dashboards that show the observations and problem severity information to help you prioritize your actions. For common problems in the SAP HANA database, it provides additional insights to determine the root cause and steps toward resolution. It also sets up dynamic alarms on monitored metrics, which are automatically updated based on anomalies detected in historical data. [7]

The new version of Amazon Inspector automates vulnerability management at the Amazon EC2 instance level. For SAP workloads, Amazon Inspector supports operating systems like SLES and SLES for SAP. It delivers real-time findings by using the AWS Systems Manager (SSM) agent. [8]

Other Cool Stuff

Amazon Elastic File System (EFS) native replication can now be leveraged to automatically maintain copies of your Amazon EFS file systems for business continuity. It helps you to meet compliance requirements as part of your disaster recovery strategy. You can set this up in minutes for new or existing Amazon EFS file systems, with replication either within a single AWS region or between two AWS regions. [9]
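
For illustration, here is a hedged sketch using the AWS CLI (the command shape follows the AWS documentation at the time of writing; the file system ID is a placeholder, and the flags should be verified against the current CLI reference):

# Hedged sketch: replicate an existing EFS file system to a new
# read-only replica in another AWS region.
aws efs create-replication-configuration \
    --source-file-system-id fs-0123456789abcdef0 \
    --destinations Region=eu-west-1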

The SAP Lens for the AWS Well-Architected Framework can help customers make design decisions when architecting SAP on AWS. The SAP Lens is a collection of design principles and best practices; by using it, customers and partners can evaluate their SAP workloads against these best practices and principles, making sure their designs align with the recommendations provided by SAP and AWS. The SAP Lens is organized around five AWS Well-Architected pillars – Security, Reliability, Performance Efficiency, Cost Optimization and Operational Excellence. [10]

AWS Training & Certification will launch an AWS Certified SAP on AWS Specialty (PAS-C01) in April 2022. This is a new AWS specialty certification that validates advanced technical skills and experience to design, implement, migrate, and operate SAP workloads optimally on AWS. [11]

Conclusion:

AWS and SUSE continue to provide products and services that give enterprise customers fast, flexible access to AWS for SAP applications. The announcements listed in this blog are not an all-encompassing list. We recommend that you visit the links for each of the announcements to learn more about the features and benefits. Additionally, AWS has a blog category [12] that focuses on SAP applications that you can follow to learn more about recent launches, as well as posts that cover new design recommendations and service integrations.

We’re looking forward to 2022 and the new product and services that will continue to focus on your success!

 

ElectronicPartner collaborates with SUSE to optimize cost and performance, whilst meeting customer needs

Monday, 7 March, 2022

“SUSE Linux simply runs without any issues and keeps our business going from strength to strength.” Matthias Assmann, CIO, ElectronicPartner Handel SE.

As one of Europe’s leading consumer electronics buying groups, ElectronicPartner Handel SE relies on efficiency, agility and smooth logistics to win and retain price-sensitive customers.

Against the backdrop of the Covid-19 pandemic, ElectronicPartner’s business imperative was to compete on price while also protecting its profit margins. Doing this required a focus on efficiency and supply chain management optimization, which prompted a move to SAP S/4HANA.

SUSE Linux Enterprise Server (SLES) for SAP Applications was selected as the platform to support the move from SAP ERP to SAP S/4HANA. This decision was based on it offering the stability and performance needed to support finance, logistics, supply chain and partner management processes and its ability to use and reassign server resources dynamically during and after the migration. It was also a cost-effective option that provided the perfect foundation for mission-critical SAP applications on IBM Power. The fast, high-quality support provided by SUSE and IBM clinched the deal.

Optimizing cost and performance

Thanks to SAP S/4HANA on SLES for SAP Applications and IBM Power Servers, ElectronicPartner has benefited from improved staff productivity, with key tasks being completed in less than half the time that they were previously.

The new solution unlocks insights through instant analytics on 90 million data records, allowing favorable negotiation of prices and volumes with suppliers and creating competitive advantage.

Post-deployment the infrastructure has delivered cost efficiency through its ability to better balance workload and compute resources as well as ensuring reliability, given that SLES for SAP Applications is optimized for SAP software.

The improved scalability of the solution helps ElectronicPartner to continuously optimize supply chains, manage partner relationships effectively, and streamline logistics to offer consumers low prices across a wide range of products.

Building on the solid foundation of SLES for SAP Applications, IBM Power and SAP S/4HANA, ElectronicPartner already has plans to transform more core business processes to unlock further efficiencies, increase availability, enhance flexibility and improve security.

Click here to find out more about how ElectronicPartner optimizes cost and performance, whilst meeting customer needs in a competitive trading environment.

The Public Cloud Is Ideal for Your SAP S/4HANA Environment. Here’s Why.

Friday, 4 March, 2022

The global pandemic has accelerated digital transformation, leading to increasing numbers of organizations viewing the public cloud as the ideal platform to digitize their business processes. In fact, in its April 2021 forecast, research firm Gartner® projected end-user spending on public cloud services to grow 23.1 percent to a total of US$332.3 billion worldwide.

That growth is no surprise. The public cloud is an ideal environment for many enterprise applications and platforms, including SAP S/4HANA. It provides an attractive range of benefits including flexibility, scalability, ROI, and more.

  • No capital expenditures

    There’s no need to purchase and upgrade servers, switches, power supplies, and other gear, or pay for real estate to house your equipment when you deploy SAP S/4HANA to public cloud services. You pay your cloud provider based only on the resources you consume. Another advantage is the ability to run your platforms on the latest hardware and software without having to pay for regular upgrades.

  • No ongoing maintenance

    On-premises infrastructure requires you to maintain your own hardware. If a server or switch goes down, you’re responsible for fixing it, which takes time and operating expenses. In a public cloud environment, since your staff won’t be as busy with ongoing maintenance and troubleshooting, they’ll have more time to focus on strategic projects that deliver real value to the organization.

  • Scalability

    The public cloud provides access to on-demand resources. If you need more storage or processing power, you can spin it up with literally the click of a button. In a more traditional environment, if you need to scale up your infrastructure, you have to invest in more hardware and software. Dealing with procurement, setting up servers, and installing new software takes time and money. And if you buy too much hardware, it will sit unused, wasting valuable budget. Buying too little hardware is even worse – potentially leading to slowdowns or outages. With public cloud, you can ramp your resources up or down based solely on your needs. Instantly. No need to wait.

  • Flexibility

    Deploying SAP S/4HANA in the public cloud enables employees, suppliers, and customers to get the data they need easily, from any location. And there’s no need to be on-site to maintain your infrastructure. Access controls backed by continuous monitoring ensure only the right people have permission to get to the right resources.

  • Reliability

    Unlike an on-premises environment, you don’t have to worry about having a secure off-site data center in case your primary infrastructure goes down. The public cloud has built-in fault tolerance. If one system fails, redundant resources kick in automatically to ensure your environment is always available. Large public cloud providers operate dozens of interconnected, highly available data centers around the world, so even if one data center goes offline, your operations won’t be impacted.

Just as important, many public cloud providers enable you to automate security tasks and encrypt your data, ensuring your important business information remains safe. Public cloud providers also have large teams with the latest security certifications to support their offerings – a huge benefit, given the effort and high cost associated with hiring trained IT security experts.

Transitioning to SAP S/4HANA in the public cloud helps foster growth and innovation, regardless of which hyperscaler you choose. But when making the move, be sure to choose a provider that enables you to do it quickly and reliably, with little disruption to your business processes. One that provides real scalability, and leverages automation to reduce administrative effort. And one that minimizes risks, while ensuring you continue to meet your business goals.

If you’re looking for tips and insights on how to make a smooth and seamless move to the public cloud, this informative whitepaper is a great place to start:

4-Step Roadmap: Transition to SAP S/4HANA in the Public Cloud

How SUSE and Intel® Help Enterprises Achieve High Availability When Migrating to SAP S/4HANA

Wednesday, 2 March, 2022

Migrating to SAP S/4HANA – Background

The migration from your SAP legacy system to SAP S/4HANA brings many benefits thanks to its tight integration with the HANA database. However, it also brings challenges, particularly as you implement your availability requirements with the new application infrastructure stack.

Certainly, the SAP system features built-in intelligent technologies including AI, machine learning, and advanced analytics. And yes, it helps enterprises adopt new business models, manage business change at speed, orchestrate internal and external resources, and use the predictive power of AI.

But a challenge of migrating to SAP S/4HANA is maintaining high availability—during the move and afterwards. Achieving it is well within reach, however, thanks to the many ways SUSE and Intel® collaborate to help enterprises along their migration paths.

SUSE provides many ways to avoid downtime and achieve non-stop IT

Automated system failover and recovery is one way SUSE helps enterprises achieve high availability. In any SAP S/4HANA migration, the more steps that can be automated, the more likely it is that downtime will be minimized.

SUSE Linux Enterprise Live Patching helps organizations apply multiple SUSE Linux Enterprise Server kernel fixes on the fly — without interruption, without a reboot for up to a year, and with zero downtime. Both of these solutions are included with SUSE Linux Enterprise Server for SAP Applications subscriptions.
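
As a rough sketch of how Live Patching is enabled on a registered system (the module string varies by service pack and architecture, so treat it as an assumption and confirm with SUSEConnect --list-extensions; the registration code is a placeholder):

# Hedged sketch: activate the Live Patching module on SLES 15 SP3.
sudo SUSEConnect --list-extensions
sudo SUSEConnect -p sle-module-live-patching/15.3/x86_64 -r <REGISTRATION_CODE>
# Live kernel patches then arrive through the normal update workflow:
sudo zypper patch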

SUSE Linux Enterprise Server High Availability Extension improves service uptime. This integrated suite of open-source clustering technologies helps organizations maintain business continuity, protect data integrity, and reduce unplanned downtime for mission-critical Linux workloads. It enables enterprises to implement highly available physical and virtual Linux clusters, and eliminate single points of failure, ensuring the high availability and manageability of critical network resources including data, applications, and services.

SUSE Linux Enterprise Server High Availability Extension ships with essential monitoring, messaging, and cluster resource management functionality, supporting failover, failback, and migration (load balancing) of individually managed cluster resources. It is included with SUSE Linux Enterprise Server for SAP Applications subscriptions.
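
To make this concrete, here is a hedged sketch of how an SAP HANA system-replication resource is typically defined in the cluster’s crm shell (the SID, instance number and option values are placeholders; check the SAPHanaSR documentation shipped with the product for the exact agent parameters):

# Hedged sketch: a Pacemaker (crm shell) resource definition using the
# ocf:suse:SAPHana agent from SLES for SAP Applications.
primitive rsc_SAPHana_HA1_HDB00 ocf:suse:SAPHana \
  params SID="HA1" InstanceNumber="00" \
         PREFER_SITE_TAKEOVER="true" AUTOMATED_REGISTER="false" \
  op monitor interval="60" role="Master" timeout="700" \
  op monitor interval="61" role="Slave" timeout="700"
ms msl_SAPHana_HA1_HDB00 rsc_SAPHana_HA1_HDB00 \
  meta clone-max="2" clone-node-max="1" interleave="true"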

High availability takes the right mix of hardware and software technologies

Intel® Xeon® Scalable processors and Intel® Optane™ persistent memory (PMem) support high availability for SAP S/4HANA in physical and virtual Linux clusters. They enable customers to achieve larger scale-up configurations, since PMem carries a much lower price per gigabyte. The scale-up setup reduces the complexity of the environment and improves reliability. SAP HANA also uses Intel® Optane™ PMem to persist data in memory, so in the event of planned or unplanned downtime, the restart time of an SAP HANA system is significantly reduced. Combined with SUSE Linux Enterprise Live Patching, your availability and resiliency are greatly improved.

In general, Intel® Optane™ PMem accelerates any reboot, delivering recovery up to 75% faster than on an all-DRAM system. For a 7TB database, that means a reboot takes 35 minutes with Optane™ PMem vs. 150 minutes with an all-DRAM system.

SUSE Linux Enterprise utilizes Intel® Reliability, Availability, and Serviceability (RAS) technologies for increased resiliency, such as:

  • Intel® Run Sure Technology: New enhancements deliver advanced Reliability, Availability, and Serviceability (RAS) and server uptime for a company’s most critical workloads like SAP HANA. Hardware-assisted capabilities, including enhanced MCA recovery and adaptive multi-device error correction, diagnose and recover from previously fatal errors, and they help ensure data integrity within the memory subsystem.
  • Intel® Key Protection Technology (Intel® KPT) and Intel® Platform Trust Technology (Intel® PTT): Deliver hardware-enhanced platform security by providing efficient key and data protection at rest, in-use, and in-flight.
  • Intel® Trusted Execution Technology (Intel® TXT): Enhances platform security while providing simplified and scalable deployment.

SUSE and Intel® collaborate to deliver high availability for SAP S/4HANA

Migrating to SAP S/4HANA, on-premises or in the cloud, is one thing. Keeping SAP services running optimally once migrated is another. Enterprises require a broad set of high availability scenarios to avoid downtime and achieve non-stop IT for their SAP services.

That’s one of the reasons both SUSE and Intel® have engineers located on-site at SAP headquarters in Walldorf, Germany. They work with SAP engineers to ensure that both SUSE and Intel® technologies are optimized and integrated into SAP S/4HANA on all supported Intel® Xeon® based server and cloud instances.

SUSE and Intel® are collaborating to ensure enterprises maintain high availability during and after their migrations to SAP S/4HANA.

For more information visit: www.suse.com/intel

Germany’s Federal Employment Agency partners with SUSE to drive digital innovation in the new world of work

Monday, 28 February, 2022

“Even though the migration has only just been completed, it is already clear that we have chosen the right platform for the future. SUSE Rancher and SUSE Linux Enterprise Server are the perfect combination for our requirements.” Frank Bayer, Senior Architect for Operating Systems and Container Services, IT System House, Federal Employment Agency.

Bundesagentur für Arbeit (BA) – Germany’s Federal Employment Agency – consists of 95,000 employees who facilitate jobs and training across Germany via 800 offices. At the height of the global pandemic BA was able to withstand intense pressure and deliver vital services that preserved jobs and secured livelihoods, dealing with upwards of 1 million telephone calls a day.

As BA’s IT goals shifted from efficiency to agility, the team’s focus turned to container technologies. Monolithic applications were to be replaced by flexibly deployable microservices and new application architecture intended to drastically shorten update cycles.

BA decided to make a strategic switch from its previous container solution to Kubernetes and a suitable management platform. Six platforms were assessed in detail, evaluating functionality, cost, market relevance, data protection, security and complexity.

The final choice was SUSE Rancher, which triumphed as the most advanced and comprehensive tool for managing multiple Kubernetes clusters, especially in an environment with high security requirements. BA’s decision was sealed by the positive experience it had already had with SUSE Linux Enterprise Server, which had been a vital part of its digital transformation efforts to date.

Managing a multicluster architecture securely and efficiently

In BA’s large and complex container landscape, SUSE Rancher was able to play to its strengths right from the start with a quick and easy deployment phase. The solution presented entirely new possibilities for BA in terms of setting up and managing a multicluster architecture.

BA is now able to centrally monitor and manage all clusters. This reduces operational effort by up to 70%, and patches can be applied automatically to all clusters in the environment.

Troubleshooting is also simplified, as are user authentication and access control, which can be managed centrally across all clusters with SUSE Rancher.

The new environment also met BA’s increased security requirements, allowing the IT team to control access to sensitive systems.

Time to market for updates and new services has improved by a factor of 8. This includes new applications, such as chatbot tools that help website users search for information and apply for benefits.

All of this comes with 60% lower costs of ownership and the added benefit of a single source of enterprise grade support.

Click here to find out more about how BA has delivered business critical digital transformation at scale, with a strategic operating system, highly available IT services and agile microservices.

Stupid Simple Kubernetes: Service Mesh

Wednesday, 16 February, 2022

We covered the what, when and why of service meshes in a previous post. Now I’d like to talk about why they are critical in Kubernetes.

To understand the importance of using service meshes when working with microservices-based applications, let’s start with a story.  

Suppose that you are working on a big microservices-based banking application, where any mistake can have serious impacts. One day the development team receives a feature request to add a rating functionality to the application. The solution is obvious: create a new microservice that can handle user ratings. Now comes the hard part. The team must come up with a reasonable time estimate to add this new service.  

The team estimates that the rating system can be finished in 4 sprints. The manager is angry. He cannot understand why it is so hard to add a simple rating functionality to the app.  

To understand the estimate, let’s understand what we need to do in order to have a functional rating microservice. The CRUD (Create, Read, Update, Delete) part is easy — just simple coding. But adding this new project to our microservices-based application is not trivial. First, we have to implement authentication and authorization, then we need some kind of tracing to understand what is happening in our application. Because the network is not reliable (unstable connections can result in data loss), we have to think about solutions for retries, circuit breakers, timeouts, etc.  

We also need to think about deployment strategies. Maybe we want to use shadow deployments to test our code in production without impacting the users. Maybe we want to add A/B testing capabilities or canary deployments. So even if we create just a simple microservice, there are lots of cross-cutting concerns that we have to keep in mind.  

Sometimes it is much easier to add new functionality to an existing service than create a new service and add it to our infrastructure. It can take a lot of time to deploy a new service, add authentication and authorization, configure tracing, create CI/CD pipelines, implement retry mechanisms and more. But adding the new feature to an existing service will make the service too big. It will also break the rule of single responsibility, and like many existing microservices projects, it will be transformed into a set of connected macroservices or monoliths. 

We call this the cross-cutting concerns burden — the fact that in each microservice you must reimplement the cross-cutting concerns, such as authentication, authorization, retry mechanisms and rate limiting. 

What is the solution to this burden? Is there a way to implement all these concerns once and inject them into every microservice, so the development team can focus on producing business value? The answer is Istio.  

Set Up a Service Mesh in Kubernetes Using Istio  

Istio solves these issues using sidecars, which it automatically injects into your pods. Your services won’t communicate directly with each other — they’ll communicate through sidecars. The sidecars will handle all the cross-cutting concerns. You define the rules once, and these rules will be injected automatically into all of your pods.   

Sample Application 

Let’s put this idea into practice. We’ll build a sample application to explain the basic functionalities and structure of Istio.  

In the previous post, we created a service mesh by hand, using envoy proxies. In this tutorial, we will use the same services, but we will configure our Service Mesh using Istio and Kubernetes.  

The image below depicts the application architecture.

 

To follow this tutorial, you will need:

  1. Kubernetes (we used version 1.21.3 in this tutorial)
  2. Helm (we used v2)
  3. Istio (we used 1.1.17) - setup tutorial
  4. Minikube, K3s or a Kubernetes cluster enabled in Docker

Git Repository 

My Stupid Simple Service Mesh in Kubernetes repository contains all the scripts for this tutorial. Based on these scripts you can configure any project. 

Running Our Microservices-Based Project Using Istio and Kubernetes 

As I mentioned above, step one is to configure Istio to inject the sidecars into each of your pods from a namespace. We will use the default namespace. This can be done using the following command: 

kubectl label namespace default istio-injection=enabled 
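
To verify that the label is in place (a quick check using standard kubectl output columns), list the namespaces together with their istio-injection label:

kubectl get namespace -L istio-injection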

In the second step, we navigate into the /kubernetes folder from the downloaded repository, and we apply the configuration files for our services: 

kubectl apply -f service1.yaml 
kubectl apply -f service2.yaml 
kubectl apply -f service3.yaml 

After these steps, we will have the green part up and running: 

 

For now, we can’t access our services from the browser. In the next step, we will configure the Istio Ingress and Gateway, allowing traffic from the exterior. 

The gateway configuration is as follows: 

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"

Using the selector istio: ingressgateway, we specify that we would like to use the default ingress gateway controller, which was automatically added when we installed Istio. As you can see, the gateway allows traffic on port 80, but it doesn’t know where to route the requests. To define the routes, we need a so-called VirtualService, which is another custom Kubernetes resource defined by Istio. 

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sssm-virtual-services
spec:
  hosts:
    - "*"
  gateways:
    - http-gateway
  http:
    - match:
        - uri:
            prefix: /service1
      route:
        - destination:
            host: service1
            port:
              number: 80
    - match:
        - uri:
            prefix: /service2
      route:
        - destination:
            host: service2
            port:
              number: 80

The code above shows an example configuration for the VirtualService. In the gateways field, we specify that the virtual service applies to requests coming from the gateway called http-gateway, and in the http section we define the rules for matching the services to which requests should be routed. Every request with the /service1 prefix will be routed to the service1 container, while every request with the /service2 prefix will be routed to the service2 container.

At this step, we have a working application. Until now there is nothing special about Istio — you can get the same architecture with a simple Kubernetes Ingress controller, without the burden of sidecars and gateway configuration.  

Now let’s see what we can do using Istio rules. 

Security in Istio 

Without Istio, every microservice must implement authentication and authorization. Istio removes the responsibility of adding authentication and authorization from the main container (so developers can focus on providing business value) and moves these responsibilities into its sidecars. The sidecars can be configured to request the access token at each call, making sure that only authenticated requests can reach our services. 

apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: auth-policy
spec:
  targets:
    - name: service1
    - name: service2
    - name: service3
    - name: service4
    - name: service5
  origins:
    - jwt:
        issuer: "{YOUR_DOMAIN}"
        jwksUri: "{YOUR_JWT_URI}"
  principalBinding: USE_ORIGIN

As an identity and access management server, you can use Auth0, Okta or other OAuth providers. You can learn more about authentication and authorization using Auth0 with Istio in this article. 

Traffic Management Using Destination Rules 

Istio’s official documentation says that the DestinationRule “defines policies that apply to traffic intended for a service after routing has occurred.” This means that the DestinationRule resource is situated somewhere between the Ingress controller and our services. Using DestinationRules, we can define policies for load balancing, rate limiting or even outlier detection to detect unhealthy hosts.
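
To make this concrete, here is an illustrative sketch (not part of the sample repository; the field names follow the Istio 1.1-era API, so check the reference for your version) of a DestinationRule that sets a load-balancing policy and uses outlier detection to eject unhealthy hosts:

# Illustrative sketch: prefer the least-busy pod and temporarily eject
# hosts that return five consecutive errors.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: service1
spec:
  host: service1
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN
    outlierDetection:
      consecutiveErrors: 5
      interval: 30s
      baseEjectionTime: 60s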

Shadowing 

Shadowing, also called Mirroring, is useful when you want to test your changes in production silently, without affecting end users. All the requests sent to the main service are mirrored (a copy of the request) to the secondary service that you want to test. 

Shadowing is easily achieved by defining a destination rule using subsets and a virtual service defining the mirroring route.  

The destination rule will be defined as follows: 

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: service2
spec:
  host: service2
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2

As we can see above, we defined two subsets for the two versions.  

Now we define the virtual service with mirroring configuration, like in the script below: 

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service2
spec:
  hosts:
    - service2
  http:
    - route:
        - destination:
            host: service2
            subset: v1
      mirror:
        host: service2
        subset: v2

In this virtual service, we defined the main destination route for service2 version v1. The mirroring service will be the same service, but with the v2 version tag. This way the end user will interact with the v1 service, while the request will also be sent to the v2 service for testing.

Traffic Splitting 

Traffic splitting is a technique used to test a new version of a service by letting only a small subset of users interact with it. This way, if there is a bug in the new service, only a small subset of end users will be affected.

This can be achieved by modifying our virtual service as follows: 

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service2
spec:
  hosts:
    - service2
  http:
    - route:
        - destination:
            host: service2
            subset: v1
          weight: 90
        - destination:
            host: service2
            subset: v2
          weight: 10

The most important part of the script is the weight tag, which defines the percentage of requests that will reach a specific service instance. In our case, 90 percent of the requests will go to the v1 service, while only 10 percent will go to the v2 service.

Canary Deployments 

In canary deployments, newer versions of services are incrementally rolled out to users to minimize the risk and impact of any bugs introduced by the newer version. 

This can be achieved by gradually decreasing the weight of the old version while increasing the weight of the new version. 
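
For example, the midpoint of such a rollout is simply the traffic-splitting virtual service from the previous section re-applied with shifted weights; only the weight values change:

# Canary midpoint: v1 and v2 each receive half of the traffic.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service2
spec:
  hosts:
    - service2
  http:
    - route:
        - destination:
            host: service2
            subset: v1
          weight: 50
        - destination:
            host: service2
            subset: v2
          weight: 50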

A/B Testing 

This technique is used when we have two or more different user interfaces and we would like to test which one offers a better user experience. We deploy all the different versions and we collect metrics about the user interaction. A/B testing can be configured using a load balancer based on consistent hashing or by using subsets. 

In the first approach, we define the load balancer like in the following script: 

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: service2
spec:
  host: service2
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpHeaderName: version

As you can see, the consistent hashing is based on the version tag, so this tag must be added to the deployments behind our “service2” service, like this (in the repository you will find two files called service2_v1 and service2_v2 for the two different versions that we use):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: service2-v2
  labels:
    app: service2
spec:
  selector:
    matchLabels:
      app: service2
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: service2
        version: v2
    spec:
      containers:
        - image: zoliczako/sssm-service2:1.0.0
          imagePullPolicy: Always
          name: service2
          ports:
            - containerPort: 5002
          resources:
            limits:
              memory: "256Mi"
              cpu: "500m"

The most important part to notice is spec -> template -> metadata -> labels, where we set version: v2. The other service has the version: v1 label.

The other solution is based on subsets. 
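
As a sketch of that subset-based approach (the header name and value below are illustrative choices, not taken from the sample repository), a virtual service can route a chosen group of users to v2 while everyone else stays on v1:

# Illustrative sketch: requests carrying the "end-user: test-group"
# header go to subset v2; all other requests fall through to v1.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service2
spec:
  hosts:
    - service2
  http:
    - match:
        - headers:
            end-user:
              exact: test-group
      route:
        - destination:
            host: service2
            subset: v2
    - route:
        - destination:
            host: service2
            subset: v1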

Retry Management 

Using Istio, we can easily define the maximum number of attempts to connect to a service if the initial attempt fails (for example, in case of overloaded service or network error). 

The retry strategy can be defined by adding the following lines to the end of our virtual service: 

retries:
  attempts: 5
  perTryTimeout: 10s

With this configuration, our service2 will have five retry attempts in case of failure, and each attempt will time out after 10 seconds.
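
In context, the retry policy hangs off the http route of the virtual service; here is a sketch of the complete resource:

# Sketch: the virtual service for service2 with the retry policy
# attached to its route.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service2
spec:
  hosts:
    - service2
  http:
    - route:
        - destination:
            host: service2
            subset: v1
      retries:
        attempts: 5
        perTryTimeout: 10s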

Learn more about traffic management in this article. You’ll find a great workshop to configure an end-to-end service mesh using Istio here. 

Conclusion 

In this chapter, we learned how to set up and configure a service mesh in Kubernetes using Istio. First, we configured an ingress controller and gateway and then we learned about traffic management using destination rules and virtual services.  

Want to Learn More from our Stupid Simple Series?

Read our eBook: Stupid Simple Kubernetes. Download it here!

Scale Your Infrastructure with Cloud Native Technology

Wednesday, 16 February, 2022

When business is growing rapidly, the necessity to scale the processes is obvious. If your initial infrastructure hasn’t been thought through with scalability in mind, growing your infrastructure may be quite painful. The common tactic, in this case, is to transition to cloud native architecture.

In this post, we will talk about what you need to know when you’re scaling up with the cloud so that you can weigh the pros and cons and make an informed decision. 

What is Cloud Technology?

Cloud computing is the on-demand delivery of IT resources—applications, storage, databases, networking and more—over the Internet (“the cloud”). It has quickly become popular because it allows enterprises to expand without extra work to manage their resources. Cloud service providers can offer you as much storage space as you need, regardless of how big the organization is. Cloud native computing is a programming approach designed to take advantage of the cloud computing model. It uses open source software that supports its three key elements: containerization, orchestration of the containers, and microservices.

Why Do You Need the Cloud in Your Organization? 

In 2021, 94% of companies used cloud technology in some capacity. This huge popularity can be attributed to several reasons:

Convenience

As we’ve already mentioned, scalability is one of the main advantages that make businesses transition to this model. With on-premise storage, you have to purchase new equipment, set up servers and even expand your team in the case of drastic growth. But with the cloud, you only need to click a couple of buttons to expand your cloud storage size and make a payment, which is, of course, much simpler.

Flexibility

Cloud native architecture makes your company more flexible and responsive to the needs of both clients and employees. Your employees can enjoy the freedom of working from any place on their own devices. Having a collaborative space is rated by both business owners and employees as very important. 

Being able to access and edit files in the cloud easily is also crucial when working with clients. Your company and clients can build an efficient working relationship regardless of the geographic location.

Cost

Data that companies need to store accumulates quickly, fueled by new types of workloads. However, your costs can’t grow at the same pace.

Cloud services allow you to spend more responsibly; necessary IT resources can be rented for as much time as you need and easily canceled. Companies that work in industries facing sharp seasonal increases in the load on information systems especially benefit from the cloud.

Types of Cloud Native Solutions

“Cloud native solutions” is an umbrella term for a range of services. You can choose the model that works best for you.

Platform as a Service (PaaS)

Platform as a service is a cloud environment that contains everything you need to support the full lifecycle of cloud applications. You avoid the complexities and costs associated with hardware and software setup.

Infrastructure as a Service (IaaS)

Infrastructure as a service enables companies to rent servers and data centers instead of building up their own from zero. You get an all-inclusive solution so that you can start scaling your business processes in no time. However, the implementation of IaaS can result in a large overhead.

Software as a Service (SaaS)

In this model, your applications run on remote computers “in the cloud.” These servers are owned and maintained by other companies. The connection between these computers and users’ computers happens via the internet, usually using a Web browser.

Cloud Deployment Models: Public vs. Private

Cloud comes in many types that you can use based on your business needs: public cloud, private cloud, hybrid cloud, and multi-cloud. Let’s find out which one fits your organization.

Public Cloud

Public clouds are run by companies that offer fast access to low-cost computing resources over the public network. With public cloud services, users do not need to purchase hardware, software, and underlying infrastructure—in other words, the service provider supplies and manages these.

Private Cloud

A private cloud is an infrastructure dedicated to one organization, managed internally or by third parties, and located on or off the organization’s premises. Private clouds can take advantage of public cloud technologies while ensuring greater control over resources and avoiding the problems associated with multi-tenancy.

Hybrid Cloud

In a hybrid cloud, a private cloud is used as the foundation, combined with strategic integration and public cloud services. Most companies with private clouds will eventually move to workload management across multiple data centers, private clouds, and public clouds — that is, they will move to hybrid clouds.

Multi-Cloud

Many organizations adopt various cloud services to drive innovation and increase business agility, including generating new revenue streams, adding products and services, and increasing profits. With their wide range of potential benefits, multi-cloud environments are essential to survival and success in the digital era.

Cloud Services as Business Tools

Some companies need the cloud more than others. Industries that can greatly benefit from cloud adoption are retail, insurance, and hospitality. 

Using cloud resources, companies in these industries organize backup data processing centers (RDCs) and ensure the necessary infrastructure for creating and debugging applications, storing archives, etc.

However, any company can benefit from cloud adoption, especially if your employees work collaboratively with documents, files, and other types of content. Small and medium-sized businesses are increasingly interested in platform services, such as cloud database management systems, and large companies organize information storage from disparate sources in the cloud.

How to Make Transformation Painless

Before you transform your processes:

  • Start with the education of your team.
  • Talk to your teammates about how moving to the cloud will help them perform daily tasks more easily. Your colleagues might not immediately understand that cloud solutions provide better collaboration or stronger security options.
  • Ensure that they have the necessary resources to explore and learn about new tools.

Major cloud service providers, such as Amazon, offer coaching. Depending on your resources, you can also hire new team members who already have the necessary competencies to facilitate the transition. Just remember that to be painless, cloud migration should happen in an organized, step-by-step way.

There can be quite a few options for cloud migration. At first, you can migrate only part of your workload to the cloud while combining it with the on-premises approach. 

Cloud Transformation Stages

Now let’s talk a bit more about cloud transformation stages. They may differ based on the company’s needs and can be carried out independently or with the involvement of external experts for consultations. 

Developing a Migration Strategy

The first step to a successful migration to the cloud is to develop a business plan where you define the needs of your business, set up goals, and agree on technical aspects. Usually, you perform one or more brainstorming sessions with your internal team and then perfect the model you have with your third-party consultants or service provider. You need to decide which type of cloud product you prefer and choose your deployment method.

Auditing the Company’s Existing IT Infrastructure

To add details to your cloud adoption strategy, you need to audit the company’s infrastructure. Application rationalization is the process of going through all the applications used in the company to determine which to keep and which to let go of. Most companies are doing just that before any efforts to move to the cloud. During this stage, you identify the current bottlenecks that should be solved with the adoption of cloud native architecture. 

Drawing a Migration Roadmap

Together with your team or service provider, you develop a migration roadmap. It should contain the main milestones; for example, it can describe by what time different departments of your company should migrate to the cloud. You might connect with several cloud services providers to negotiate the best conditions for yourself at this stage. 

Migration

Migration to the cloud can take up to several months. After migration, you and your employees will go through a transition period as you adapt to the new work environment.

Optimization

Difficulties (including technical ones) can arise at every stage. Any migration involves some downtime, which needs to be planned so that the business is not harmed. Often there are problems associated with non-standard infrastructure, or a need to implement additional solutions. During the optimization stage, you identify the problems that need to be fixed and develop a strategy to address them.

Cloud migration can seem like a tedious process at first, but the benefits it provides to businesses are worth it. If you choose a cloud product based on your business needs, prepare a long-lasting implementation strategy, and dedicate enough time to auditing and optimization, you will be pleasantly surprised by the transformation of your processes.

Summing up

Many companies are now transitioning to cloud native technology to scale their infrastructure because it’s more flexible and convenient and allows them to reduce costs. Your team can choose from different types of cloud depending on your priorities, whether that’s an on-premises private cloud or IaaS.

Cloud native technology transformation will help you scale your infrastructure and expand your business globally. If you are searching for ways to make your company more flexible to meet both the needs of your employees and your clients, cloud migration might be the best choice for you. 

Join the Conversation!

What’s your cloud transformation story? Join the SUSE & Rancher Community, where you’ll find resources to support you in your cloud native journey — from introductory and advanced courses to like-minded peers who can offer support.

Kubernetes cost management with Kubecost and SUSE Rancher

Wednesday, 16 February, 2022

SUSE GUEST BLOG ARTICLE AUTHORED BY: Alex Thilen, Head of Business Development, Kubecost

 

Kubernetes and containerized workloads have become a de facto standard of the modern IT landscape, delivering unprecedented agility – on-premises, in the cloud, and across clouds. Managing resource costs in this dynamic environment can be challenging for organizations of any size. We’ve invited Kubecost, a SUSE One partner, to share some highlights of its approach and capabilities that enable SUSE Rancher customers to better manage their Kubernetes infrastructure costs. ~ Terry

 

A modern approach to infrastructure cost governance with Kubecost and SUSE Rancher

Over the last five years, we have witnessed an enormous number of companies across all industries migrate to Kubernetes and to the cloud native toolchain.  Adoption of the platform continues to accelerate.  According to a 2021 SlashData™ report with the Cloud Native Computing Foundation, 5.6 million developers use Kubernetes today, representing a 63% increase over the previous year.  More than 70% of Fortune 100 companies now use Kubernetes, and every day new companies are running Kubernetes in production. 

Along with advantages of scale and resilience, this ecosystem brings new layers of complexity. Managing infrastructure costs in this dynamic environment can be challenging due to a lack of visibility into Kubernetes project costs. Many organizations struggle to understand where their cloud spend is going and how to improve it. Engineering and infrastructure teams all too often end up with a big bill at the end of the month—and occasionally a bit of heat from their colleagues in finance, too.

Kubecost delivers unified, real-time, and accurate cost visibility and governance to SUSE Rancher users. 

 

SUSE | Kubecost Solution Stack for Kubernetes Cost Management

Kubecost enables organizations with granular, real-time cost reporting across resources, departments, teams, and projects, making Kubernetes spend observable and trackable. Kubecost capabilities include: 

  • Unified Cost Monitoring
    Gain unified cost visibility through the Kubecost UI or API endpoint to learn how much each team, application, and environment has consumed. 
  • Cost Allocation
    Break down costs by any Kubernetes concept, including deployment, service, namespace, label, and more, for accurate “showbacks” and “chargebacks.”
  • Optimization Insights
    Get dynamic recommendations for reducing spend across your SUSE Rancher landscape without sacrificing performance.  
  • Alerting & Reporting
    Get detailed reports and real-time alerting to quickly catch cost overruns and outage risks and to accelerate troubleshooting using granular data across clusters and clouds.  
  • Governance & Compliance
    Continuously manage cloud posture and compliance with out-of-the-box policies for PCI, NIST, SOC2, etc. 

 

Kubecost and SUSE Rancher are both open source and cross-platform, leveraging the innovative power of many and maximizing flexibility. And workload data never needs to leave the Kubernetes cluster environment, enabling organizations to minimize security and data governance risks.

 

You can get started for free.  Easily install Kubecost from the SUSE Rancher Apps & Marketplace or install manually with a few simple steps. 
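
For a manual install, here is a hedged sketch using Helm (the repository URL, chart name and deployment name follow Kubecost’s public documentation at the time of writing; verify them against the current docs):

# Hedged sketch: install Kubecost's cost-analyzer chart, then reach the
# dashboard locally at http://localhost:9090.
helm repo add kubecost https://kubecost.github.io/cost-analyzer/
helm upgrade --install kubecost kubecost/cost-analyzer \
  --namespace kubecost --create-namespace
kubectl port-forward --namespace kubecost deployment/kubecost-cost-analyzer 9090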

 

Kubecost - SUSE Rancher architecture for cost management 

Take control of your cloud native environment with SUSE Rancher and Kubecost. 

 

Did you miss our SUSE One Partner Solutions Showcase webinar, “Monitor and Reduce K8s Costs with Kubecost and SUSE Rancher”?  You can access it on-demand. 

 


Alex Thilen, Head of Business Development, Kubecost

Alex Thilen is the Head of Business Development for Kubecost. He is a former Category Lead at AWS, where he led the Containers & Kubernetes Category for AWS Marketplace. He lives in Seattle.

 

 

 


SUSE One Partner Solution Stacks, like Kubecost with SUSE Rancher, are featured innovations with SUSE and partner components.  We developed the SUSE One Partner Solution Stacks framework to make it even easier for SUSE and partners to collaborate on solutions that are designed to arm organizations with capabilities and agility to overcome challenges and accelerate success. 

The framework is open to all SUSE One Partner specializations and tiers.  We welcome opportunities to collaborate on new solutions or to enhance existing solutions with new capabilities.  Log into the SUSE One Partner Portal or speak with your SUSE One alliance manager for more information.