Stupid Simple Service Mesh: What, When, Why

Monday, 18 April, 2022

Recently, microservices-based applications have become very popular, and with the rise of microservices the concept of a Service Mesh has also become a hot topic. Unfortunately, there are only a few articles about this concept and most of them are hard to digest.

In this blog, we will try to demystify the concept of Service Mesh using “Stupid Simple” explanations, diagrams, and examples to make this concept more transparent and accessible for everyone. In the first article, we will talk about the basic building blocks of a Service Mesh and implement a sample application to have a practical example of each theoretical concept. In the next articles, based on this sample app, we will touch on more advanced topics, like Service Mesh in Kubernetes, and we will talk about some more advanced Service Mesh implementations like Istio, Linkerd, etc.

To understand the concept of Service Mesh, the first step is to understand what problems it solves and how it solves them.

Software architecture has evolved a lot in a short time, from classical monolithic architecture to microservices. Although many praise microservice architecture as the holy grail of software development, it introduces some serious challenges.

Overview of the sample application

For one, a microservices-based architecture means that we have a distributed system. Every distributed system has challenges such as transparency, security, scalability, troubleshooting, and identifying the root cause of issues. In a monolithic system, we can find the root cause of a failure by tracing. But in a microservices-based system, each service can be written in a different language, so tracing is no trivial task. Another challenge is service-to-service communication. Instead of focusing on business logic, developers need to take care of service discovery, handle connection errors, detect latency, implement retry logic, etc. Applying SOLID principles at the architecture level means that these kinds of network problems should be abstracted away and not mixed with the business logic. This is why we need a Service Mesh.
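To make this concrete, here is a stupid simple Python sketch of the retry-with-backoff boilerplate every service would otherwise have to hand-roll; the `flaky_call` function and its failure pattern are purely illustrative assumptions, not part of any real mesh API:

```python
import time

def call_with_retries(fn, max_attempts=3, base_delay=0.1):
    """Retry a flaky service call with exponential backoff.

    This is exactly the kind of networking concern a Service Mesh
    moves out of the business logic and into the proxy.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))

# Illustrative flaky upstream: fails twice, then succeeds.
attempts = {"count": 0}

def flaky_call():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ConnectionError("upstream unavailable")
    return "payload"

result = call_with_retries(flaky_call)
print(result)  # "payload", returned on the third attempt
```

A Service Mesh moves this kind of logic out of every application and into the proxy, where it is configured declaratively instead of re-implemented by each team.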

Ingress Controller vs. API Gateway vs. Service Mesh

As I mentioned above, we need to apply SOLID principles on an architectural level. For this, it is important to set the boundaries between Ingress Controller, API Gateway, and Service Mesh and understand each one’s role and responsibility.

On a stupid simple and oversimplified level, these are the responsibilities of each concept:

  1. Ingress Controller: allows a single IP and port to access all services in the cluster, so its main responsibilities are path mapping, routing, and simple load balancing, like a reverse proxy
  2. API Gateway: aggregates and abstracts away APIs; other responsibilities are rate limiting, authentication, security, tracing, etc. In a microservices-based application, you need a way to distribute the requests to different services, gather the responses from multiple/all microservices, and then prepare the final response to be sent to the caller. This is what an API Gateway is meant to do. It is responsible for client-to-service communication: north-south traffic.
  3. Service Mesh: responsible for service-to-service communication, east-west traffic. We’ll dig more into the concept of Service Mesh in the next section.

Service Mesh and API Gateway have overlapping functionalities, such as rate-limiting, security, service discovery, tracing, etc. but they work on different levels and solve different problems. Service Mesh is responsible for the flow of requests between services. API Gateway is responsible for the flow of requests between the client and the services, aggregating multiple services and creating and sending the final response to the client.

The main responsibility of an API gateway is to accept traffic from outside your network and distribute it internally, while the main responsibility of a service mesh is to route and manage traffic within your network. They are complementary concepts and a well-defined microservices-based system should combine them to ensure application uptime and resiliency while ensuring that your applications are easily consumable.

What Does a Service Mesh Solve?

As an oversimplified and stupid simple definition, a Service Mesh is an abstraction layer that hides away and separates networking-related logic from business logic, so developers can focus only on implementing business logic. We implement this abstraction using a proxy that sits in front of the service and takes care of all the network-related problems. This allows the service to focus on what is really important: the business logic. In a microservices-based architecture, we have multiple services, and each service has a proxy. Together, these proxies form the Service Mesh.

As best practices suggest, the proxy and the service should be in separate containers, so each container has a single responsibility. In the world of Kubernetes, the proxy container is implemented as a sidecar. This means that each service has a sidecar containing the proxy, so a single Pod will contain two containers: the service and the sidecar. Another implementation is to use one proxy for multiple pods; in this case, the proxy can be implemented as a DaemonSet. The most common solution is using sidecars. Personally, I prefer sidecars over DaemonSets, because they keep the logic of the proxy as simple as possible.

There are multiple Service Mesh solutions, including Istio, Linkerd, Consul, Kong, and Cilium. (We will talk about these solutions in a later article.) Let’s focus on the basics and understand the concept of Service Mesh, starting with Envoy. This is a high-performance proxy and not a complete framework or solution for Service Meshes (in this tutorial, we will build our own Service Mesh solution). Some of the Service Mesh solutions use Envoy in the background (like Istio), so before starting with these higher-level solutions, it’s a good idea to understand the low-level functioning.

Understanding Envoy

Ingress and Egress

Simple definitions:

  • Any traffic sent to the server (service) is called ingress.
  • Any traffic sent from the server (service) is called egress.

The Ingress and the Egress rules should be added to the configuration of the Envoy proxy, so the sidecar will take care of these. This means that any traffic to the service will first go to the Envoy sidecar. Then the Envoy proxy redirects the traffic to the real service. Vice-versa, any traffic from this service will go to the Envoy proxy first and Envoy resolves the destination service using Service Discovery. By intercepting the inbound and outbound traffic, Envoy can implement service discovery, circuit breaker, rate limiting, etc.
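As a hedged illustration of this interception pattern (the class and registry names below are invented for this sketch, not Envoy APIs): every inbound call passes through the proxy before it reaches the service, and every outbound call is resolved through a service-discovery table.

```python
# Hypothetical service-discovery table: logical name -> concrete address.
REGISTRY = {"payments": "10.0.0.7:8786"}

class SidecarProxy:
    """Sits in front of a service and intercepts ingress/egress traffic."""

    def __init__(self, service):
        self.service = service
        self.requests_seen = 0  # a hook for metrics, rate limiting, etc.

    def handle_ingress(self, request):
        # All inbound traffic goes through the proxy before the service.
        self.requests_seen += 1
        return self.service.handle(request)

    def resolve_egress(self, logical_name):
        # Outbound traffic: the proxy resolves the destination
        # via service discovery, so the service never needs to.
        return REGISTRY[logical_name]

class EchoService:
    def handle(self, request):
        return f"handled: {request}"

proxy = SidecarProxy(EchoService())
print(proxy.handle_ingress("GET /"))    # handled: GET /
print(proxy.resolve_egress("payments")) # 10.0.0.7:8786
```

Because everything flows through the proxy, features like circuit breaking and rate limiting can be added in one place without touching the service.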

The Structure of an Envoy Proxy Configuration File

Every Envoy configuration file has the following components:

  1. Listeners: where we configure the IP and the Port number that the Envoy proxy listens to
  2. Routes: the received request will be routed to a cluster based on rules. For example, we can have path matching rules and prefix rewrite rules to select the service that should handle a request for a specific path/subdomain. Actually, the route is just another type of filter, which is mandatory. Otherwise, the proxy doesn’t know where to route our request.
  3. Filters: Filters can be chained and are used to enforce different rules, such as rate-limiting, route mutation, manipulation of the requests, etc.
  4. Clusters: act as a manager for a group of logically similar services (the cluster has similar responsibility as a service in Kubernetes; it defines the way a service can be accessed), and acts as a load balancer between the services.
  5. Service/Host: the concrete service that handles and responds to the request
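Before reading the real configuration file, it may help to see the routing semantics (prefix match, prefix rewrite, fallback cluster) as a few lines of Python. This is only a sketch of the behavior, not Envoy's implementation:

```python
# Illustrative route table, in match order.
ROUTES = [
    {"prefix": "/nestjs", "rewrite": "/", "cluster": "nestjs"},
    {"prefix": "/nodejs", "rewrite": "/", "cluster": "nodejs"},
    {"path": "/", "cluster": "base"},
]

def route(path):
    """Pick a cluster and rewrite the path, Envoy-route-style."""
    for rule in ROUTES:
        if "prefix" in rule and path.startswith(rule["prefix"]):
            # Strip the matched prefix and rewrite it.
            rewritten = rule["rewrite"] + path[len(rule["prefix"]):].lstrip("/")
            return rule["cluster"], rewritten
        if rule.get("path") == path:
            return rule["cluster"], path
    raise LookupError(f"no route for {path}")

print(route("/nodejs/users"))  # ('nodejs', '/users')
print(route("/"))              # ('base', '/')
```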

Here is an example of an Envoy configuration file:

---
admin:
  access_log_path: "/tmp/admin_access.log"
  address:
    socket_address:
      address: "127.0.0.1"
      port_value: 9901
static_resources:
  listeners:
    -
      name: "http_listener"
      address:
        socket_address:
          address: "0.0.0.0"
          port_value: 80
      filter_chains:
        filters:
          -
            name: "envoy.http_connection_manager"
            config:
              stat_prefix: "ingress"
              codec_type: "AUTO"
              generate_request_id: true
              route_config:
                name: "local_route"
                virtual_hosts:
                  -
                    name: "http-route"
                    domains:
                      - "*"
                    routes:
                      -
                        match:
                          prefix: "/nestjs"
                        route:
                          prefix_rewrite: "/"
                          cluster: "nestjs"
                      -
                        match:
                          prefix: "/nodejs"
                        route:
                          prefix_rewrite: "/"
                          cluster: "nodejs"
                      -
                        match:
                          path: "/"
                        route:
                          cluster: "base"
              http_filters:
                -
                  name: "envoy.router"
                  config: {}

  clusters:
    -
      name: "base"
      connect_timeout: "0.25s"
      type: "strict_dns"
      lb_policy: "ROUND_ROBIN"
      hosts:
        -
          socket_address:
            address: "service_1_envoy"
            port_value: 8786
        -
          socket_address:
            address: "service_2_envoy"
            port_value: 8789
    -
      name: "nodejs"
      connect_timeout: "0.25s"
      type: "strict_dns"
      lb_policy: "ROUND_ROBIN"
      hosts:
        -
          socket_address:
            address: "service_4_envoy"
            port_value: 8792
    -
      name: "nestjs"
      connect_timeout: "0.25s"
      type: "strict_dns"
      lb_policy: "ROUND_ROBIN"
      hosts:
        -
          socket_address:
            address: "service_5_envoy"
            port_value: 8793

The configuration file above translates into the following diagram:

The diagram does not include the configuration for every service, but it is enough to understand the basics. You can find this code in my Stupid Simple Service Mesh repository.

As you can see, between lines 10-15 we defined the Listener for our Envoy proxy. Because we are working in Docker, the host is 0.0.0.0.

After configuring the listener, between lines 15-52 we define the Filters. For simplicity, we used only the basic filters to match the routes and rewrite the target routes. In this case, if the path starts with “host:port/nodejs”, the router will choose the nodejs cluster and the URL will be rewritten to “host:port/” (this way, the request for the concrete service won’t contain the /nodejs part). The same logic applies to “host:port/nestjs”. If there is no prefix in the request, it will be routed to the cluster called base, without a prefix rewrite.

Between lines 53-89 we defined the clusters. The base cluster will have two services and the chosen load balancing strategy is round-robin. Other available strategies can be found here. The other two clusters (nodejs and nestjs) are simple, with only a single service.
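To illustrate what the ROUND_ROBIN policy means for the base cluster, here is a minimal Python sketch, assuming the two host addresses from the configuration above:

```python
from itertools import cycle

# The two hosts of the "base" cluster from the configuration above.
base_cluster = cycle(["service_1_envoy:8786", "service_2_envoy:8789"])

# Four consecutive requests alternate between the two hosts.
picks = [next(base_cluster) for _ in range(4)]
print(picks)
# ['service_1_envoy:8786', 'service_2_envoy:8789',
#  'service_1_envoy:8786', 'service_2_envoy:8789']
```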

The complete code for this tutorial can be found in my Stupid Simple Service Mesh git repository.

Conclusion

In this article, we learned about the basic concepts of Service Mesh. In the first part, we understood the responsibilities and differences between the Ingress Controller, API Gateway, and Service Mesh. Then we talked about what Service Mesh is and what problems it solves. In the second part, we introduced Envoy, a performant and popular proxy, which we used to build our Service Mesh example. We learned about the different parts of the Envoy configuration files and created a Service Mesh with five example services and a front-facing edge proxy.

In the next article, we will look at how to use Service Mesh with Kubernetes and will create an example project that can be used as a starting point in any project using microservices.

There is another ongoing “Stupid Simple AI” series. The first two articles can be found here: SVM and Kernel SVM and KNN in Python.

Want to Learn More from our Stupid Simple Series?

Read our eBook: Stupid Simple Kubernetes. Download it here!

How to migrate to SAP applications in the cloud

Thursday, 14 April, 2022

Increasing numbers of organizations are turning to the cloud to support their SAP applications. The cloud appeals to them for several reasons — it has built-in fault tolerance; it scales easily with demand; and it reduces capital expenditures by offloading the responsibility for hardware support onto the cloud provider.

However, moving to the cloud isn’t as simple as just pressing a button. Your SAP HANA and SAP S/4HANA application suites are sophisticated tools that require seamless support. SUSE Linux for SAP Applications is the leading Linux for SAP HANA, SAP S/4HANA, and SAP NetWeaver applications in the cloud. In fact, it is endorsed by SAP and used by more than 30,000 customers. And with over 20 years of experience working very closely with SAP, SUSE delivers solutions that enable you to shift your workloads to the cloud with ease and reliability.

Here are five ways SUSE simplifies your path to SAP in the cloud:

  • Automated deployment
  • Flexible management
  • High availability
  • Comprehensive monitoring
  • Completely open source

Automated Deployment

The long history of co-innovation between SUSE and SAP has resulted in the development of tools and methods that ease your journey to the cloud, cutting deployment time from days or weeks down to hours.

SUSE provides pre-built templates to deploy your SAP infrastructure to any of the hyperscale cloud providers – AWS, Google Cloud, or Microsoft Azure. SUSE supports the Salt and Terraform open-source automation frameworks, which enable error-free deployment of the full SAP system stack. SUSE also collaborates with Alibaba, IBM, Oracle and other cloud service providers to build and support their own templates using SUSE Linux Enterprise Server (SLES) for SAP Applications, giving you even more options.

All the template processes follow SAP deployment best practices, and are repeatable. If deployment complications happen to arise after running a template, you can easily tweak a value in the template and run it over again.

Flexible Management

A flexible, versatile management system is critical to keeping your SAP systems running smoothly in the cloud. That’s what makes SUSE Manager so important. It enables you to manage the complete lifecycle – configuration, automation, and updates – all from a single interface. Just like the other components of the SUSE portfolio, SUSE Manager has SAP knowledge built in, allowing it to work flawlessly with any SAP platform.

If you’re planning to transition from on-premises to the cloud, SUSE Manager is designed to work in a hybrid environment, so you can manage your on-premises and cloud resources from a single interface.

High Availability

High Availability (HA) is a crucial element of the SAP environment for virtually any organization to ensure continuous uptime. SUSE is an innovator and thought leader in High Availability, with years of experience solving real-life customer challenges.

SAP HA solutions developed by SUSE include HANA System Replication Scale-Up and HANA System Replication Scale-Out. The choice of these and many other scenarios enables you to customize your SAP environment to meet your organization’s unique needs.

SUSE has also developed HA solutions for SAP NetWeaver and SAP S/4HANA, including the sap_suse_cluster_connector, which simplifies SAP maintenance processes in an HA cluster. This cluster connector software serves as a certification reference for any Unix/Linux cluster vendor.

Comprehensive Monitoring

The SUSE monitoring framework provides comprehensive dashboards that offer deep insight into your SAP environment, addressing all three layers of the core SAP infrastructure – SAP applications, the cluster stack, and the operating systems.

Through a single console, you can monitor the SAP HANA database, SAP HANA System Replication, SAP S/4HANA applications, and SAP NetWeaver applications, as well as the underlying infrastructure. Graphical warnings and reporting make it easy to spot potential issues before they escalate. Plus, the monitoring tools can be used to check Pacemaker cluster resources and generate integrated insights.

Completely Open Source

Some vendors say they support open source, but at the same time they adopt strategies that can lead to vendor lock-in. SUSE is an independent vendor with no competing priorities, committed to a fully open-source posture. SUSE supports multiple cloud platforms, hypervisors, container technologies, and management solutions. SUSE knows the cloud is all about flexibility, and wants to ensure its customers benefit from having a full range of choices.

This commitment to open source and flexibility ensures customers can maximize bargaining power. And if you need to switch environments, you can bring your SUSE solutions with you.

Other vendors may focus on their IT skills and offer to support SAP in the cloud. But SUSE truly knows how critical your SAP applications are to your business. So, when it comes to migrating to the cloud, it simply makes sense to rely on an expert like SUSE that is well-known for its close, long-term co-innovation relationship with SAP, and strong Cloud Service Provider partnerships.

To learn more about SUSE solutions for SAP infrastructures in the public cloud, visit www.suse.com/cloud-for-sap.

SUSE Linux Enterprise Micro 5.2 is Generally Available

Thursday, 14 April, 2022

Today, we are proud to announce the release of SUSE Linux Enterprise Micro 5.2 – a lightweight and secure operating system built for containerized and virtualized workloads.

SUSE Linux Enterprise Micro

 

What’s new?

This latest release of SLE Micro is a stabilization and consolidation release geared towards improving usability and reliability.

  • Introducing a self-install image to further reduce deployment time. The self-install image is a bootable, pre-configured image that installs itself onto the target system and then uses the same configuration methods as the existing pre-configured images. It improves the deployment process by removing a manual step when deploying SLE Micro.
  • Additional cockpit modules for web-based administration enhance the usability for web-based configuration and administration.

What are the use cases of SLE Micro?

SLE Micro can be used as a single-node container host, a Kubernetes cluster node, a single-node KVM virtualization host, or in the public cloud. Since it is built to scale, customers can incorporate SLE Micro into their digital transformation plans – whether at the edge or supporting edge deployments with mainframes – in a way that allows them to transition workload designs from monolithic to microservices at their own pace. They can start with container workloads or virtualize their current legacy workloads, then move to containerized workloads when they are ready, with no change in the underlying system platform.

Our customers highly appreciate the low maintenance aspect of SLE Micro as it helps them reduce costs while modernizing their infrastructure. SLE Micro provides an ultra-reliable infrastructure platform that is also simple to use and comes out-of-the-box with best-in-class compliance.

SLE Micro is also an integral part of our Edge solution for customers. Here is a summary of benefits derived by some of our customers and partners.

Manufacturing

By transforming physical servers on the factory floor into edge devices connected to the cloud, Krones reduced its number of servers by 50%, using SLE Micro as the OS and K3s for Kubernetes.

“With a decentralized approach, we are reducing operational expenses and modernizing application infrastructure by moving applications running on bare metal to a fully managed containerized stack using K3S and SLE Micro.” Ottmar Amann, Software Systems, Corporate R&D, KRONES AG

Telecom

By diversifying its supply chain, one of the world’s largest telecom operators reduced TCO (both hardware and software) for its Edge cloud project – using K3s for Kubernetes, SLE Micro at the OS layer, and SUSE Manager and Rancher Management for full lifecycle management.

Embedded Systems

A large U.S.-based systems integrator is using SLE Micro to reduce maintenance costs and modernize their embedded systems by supporting container workloads on an immutable infrastructure that is easy to maintain and update.

Mainframe Servers

SLE Micro, with its small footprint, built-in security framework and near-zero administrative overhead, provides an excellent container and virtualization host for IBM Z & LinuxONE.

“We expect our joint customers will appreciate being able to take advantage of this immutable Linux distribution as a KVM host in their secure execution stack, taking advantage of the security and reliability the IBM Z platform provides.” Kara Todd, director of Linux, IBM Z and LinuxONE, IBM.

Arm based systems

“With the combination of SLE Micro and K3s, SUSE is providing an excellent platform for Arm-based embedded devices, edge use cases and industrial IoT applications.” Bhumik Patel, director of server ecosystem development, Infrastructure Line of Business, Arm.

 

SLE Micro, teamed with other SUSE technologies, aims to be the foundation of container workloads deployed in all areas of production – edge environments, embedded, industrial IoT, and a variety of compute environments inside or outside the data center.

Explore and try SLE Micro from here.

 Learn more

Roadmap to the Cloud. Join SUSE at SAP Sapphire The Hague

Wednesday, 13 April, 2022

SUSE is a proud gold sponsor of SAP Sapphire in The Hague on May 17, 2022.

For over 20 years, SAP and SUSE have delivered innovative business-critical solutions on open source platforms, enabling organizations to improve operations, anticipate requirements, and become industry leaders. Our ability to deliver both innovation and stability guarantees a strong bond of trust between SAP, SUSE, and our joint customers going forward. Today, many SAP customers run their SAP and SAP S/4HANA environments on SUSE. SUSE is an SAP platinum partner offering the following Endorsed App to SAP software: SUSE Linux Enterprise Server for SAP applications.

Meet the SUSE team

One of the main topics of this SAPPHIRE will be Public Cloud. Organizations are migrating SAP S/4HANA to the public cloud to enable faster business growth, higher productivity, and new avenues for innovation.

Take advantage of one-on-one time with SUSE SAP expert Alan Clarke and subject matter experts to share your needs and learn how we can help.

Don’t hesitate to visit the SUSE booth.

SUSE enables you to rapidly deploy and scale mission-critical SAP applications on your choice of hyperscalers with high availability and reduced complexity. If you want to learn more about how you can accelerate your cloud vision, visit www.suse.com/cloud-for-sap.

SUSE Linux Enterprise Server for SAP applications is endorsed by SAP

The idea behind Endorsed Apps is to make it super easy for SAP customers to get up and running with SAP. It helps to easily identify the top-rated partners and apps that are verified to deliver outstanding value. These solutions are tested and premium certified by SAP with added security, in-depth testing, and measurements against benchmark results.
Find more information on the SAP Store

Contact Us

If you have any additional questions, please don’t hesitate to contact us at sapalliance@suse.com

We look forward to seeing you at SAP Sapphire The Hague (Netherlands) on May 17, 2022.

Accelerate Your Cloud Vision. JOIN SUSE at SAP Sapphire Orlando

Monday, 11 April, 2022

SUSE is a proud platinum sponsor of SAP Sapphire & ASUG Accelerate Orlando: May 10–12, 2022 (Booth PA215)

For over 20 years, SAP and SUSE have delivered innovative business-critical solutions on open source platforms, enabling organizations to improve operations, anticipate requirements, and become industry leaders. Our ability to deliver both innovation and stability guarantees a strong bond of trust between SAP, SUSE, and our joint customers going forward.  Today, many SAP customers run their SAP and SAP S/4HANA environments on SUSE. SUSE is an SAP platinum partner offering the following Endorsed App to SAP software: SUSE Linux Enterprise Server for SAP applications.

 

Efficiently transition to SAP S/4HANA in the public cloud

One of the main topics of this SAPPHIRE will be Public Cloud. Organizations are migrating SAP S/4HANA to the public cloud to enable faster business growth, higher productivity, and new avenues for innovation. Don’t miss the SUSE customer presentation on this topic.

Session ERP211: Hear About Walgreens Boots Alliance’s Journey to the Cloud

Customer Success Story (20 mins)

Tue 02:30 p.m. – 02:50 p.m.

SUSE enables you to rapidly deploy and scale mission-critical SAP applications on your choice of hyperscalers with high availability and reduced complexity. If you want to learn more about how you can accelerate your cloud vision, visit www.suse.com/cloud-for-sap.

Meet the SUSE team

Take advantage of one-on-one time with SUSE experts and subject matter experts to share your needs and learn how we can help. Don’t hesitate to visit the SUSE booth (PA219).

 

We are also very pleased to have our executives at Sapphire Orlando:

Markus Noga, General Manager of Business-critical Linux

Thomas Di Giacomo, Chief Technology & Product Officer

Jochen Glaser, General Manager SAP

If you would like to meet them, please do not hesitate to request a meeting: sapalliance@suse.com

Meetings are first-come, first-served, so please book early to secure your time slot.

If you do not have a ticket for Sapphire Orlando yet, you can register here

SUSE Linux Enterprise Server for SAP applications is endorsed by SAP

The idea behind Endorsed Apps is to make it super easy for SAP customers to get up and running with SAP. It helps to easily identify the top-rated partners and apps that are verified to deliver outstanding value. These solutions are tested and premium certified by SAP with added security, in-depth testing, and measurements against benchmark results.

Find more information on the SAP Store

Contact Us

If you have any additional questions, please don’t hesitate to contact us at sapalliance@suse.com.

We look forward to seeing you at SAP Sapphire & ASUG Accelerate Orlando, May 10–12, 2022.

 

Solidify your containerisation strategy with SoftIron and SUSE Rancher

Thursday, 7 April, 2022

SoftIron recently announced that it has partnered with SUSE to provide integration support for SoftIron’s HyperDrive storage appliances (purpose-built to deliver optimal Ceph performance) using the HyperDrive Storage Plugin for SUSE Rancher. Read the guest blog authored by Craig to find out why this is a big win for those working with containers and Ceph! ~Vince

 

SUSE guest blog authored by:

Craig Chadwell, VP of Product at SoftIron

Use the power of open source to deliver enterprise-class container management with reliable unified storage

We’re all for great collaborations at SoftIron, and this latest one is a big win for those working with containers and Ceph! We’ve worked closely with SUSE Rancher to mobilise integration support for HyperDrive storage appliances, by creating the HyperDrive Storage Plugin for SUSE Rancher. 

Download our new SoftIron + SUSE Rancher solution brief to learn how a fully integrated, highly scalable data storage and container management system can help your organisation deliver excellent enterprise class solutions. 

Accessibility is key

The SoftIron plugin is easily installed via SUSE Rancher Apps & Marketplace, allowing developers to integrate and manage HyperDrive capabilities via the SUSE Rancher container management platform. This way, you enjoy the flexibility of Ceph’s object, block, and file storage protocols in a single unified storage system. This can be achieved without being an expert in Ceph – HyperDrive makes daily cluster administration tasks simpler, with HyperDrive Storage Manager. 

Streamline your Kubernetes cluster management with SUSE Rancher

If your Kubernetes cluster is in desperate need of some housekeeping, let SUSE Rancher lead the way with its centralised management interface. Leverage the platform’s unification capabilities and you’ll see improvements in operational efficiency, workload management and security. The secure container environment also simplifies interactions between Development and IT Ops teams, making collaboration quicker and easier.

With Gartner projecting that 70% of organisations globally will have more than two containerised applications running in their environments, a seamless experience makes all the difference for efficient daily cluster operations. And SUSE Rancher offers increased flexibility to support hybrid or multi-cloud environments, broadening the potential for complete customisation and digital innovation.

Achieve interoperability with HyperDrive

Our HyperDrive storage appliances don’t just play nicely with SUSE Rancher – they’re engineered from the ground up for full interoperability with Ceph. This interoperability doesn’t just make daily Ceph cluster administration tasks easier; HyperDrive’s task-specific nature also minimises I/O bottlenecks and power requirements, which can’t be matched by storage solutions using commercial off-the-shelf (COTS) options for Ceph. 

With SoftIron’s approach, hardware is optimised according to the needs of the software, delivering unparalleled levels of control, efficiency and transparency – all without vendor lock-in.

 

For an in-depth look into what the SoftIron and SUSE Rancher partnership entails, read the solution brief.

And to discover how an open source, task-specific storage solution can take your enterprise storage to the next level, take HyperDrive for a test drive.

To learn more about how SoftIron and SUSE can help you use the power of open source to deliver enterprise-class container management with reliable unified storage, visit our website for more information or get in touch with the SoftIron team. You could also contact your SUSE Rancher sales representative. We look forward to talking to you!

 

Author:

Craig Chadwell is the VP of Product at SoftIron. He has spent over a decade engineering, marketing, and leading product management of cloud and software-defined data center solutions. Craig has held positions at Lenovo, NetApp, and High Point University where he gained first-hand buyer and administration experience across the lifecycle of data center operations. Craig has degrees in computer science, history, political science, and business administration.

Reinforcing Open Source Security with SUSE and the new IBM z16

Tuesday, 5 April, 2022


If the last two years have taught us anything, they've taught CIOs how to be resilient. Resiliency means being agile and adaptable, having the right security in place, and being able to thrive in unforeseen circumstances.

As CIOs dive deeper into resiliency, they must also answer these questions:

  • Are my systems secure?
  • Am I able to support and manage a hybrid cloud environment?
  • Can I quickly adapt to new technologies to be competitive?

Fortunately, if you are running SUSE Software Solutions in your environment the answer to these questions is a resounding yes.

SUSE Software Solutions runs on a variety of hardware, including the new IBM z16. As a proud IBM Business Partner, we believe that the latest IBM z16 and SUSE Linux Enterprise Server provide the perfect combination of security and future-proofing.

“The new IBM z16 is a quantum-safe system using lattice-based cryptography, which offers on-chip AI inferencing at scale and provides a scalable, reliable, security-rich infrastructure for hybrid cloud,” said Kara Todd, Director of Linux, IBM zSystems and LinuxONE. “It’s great to see SUSE’s continued support and exploitation of the strengths of the IBM zSystems platform.”

Let’s take a closer look.

Security at the Core

The new IBM z16 platform proactively protects your business against current and future threats.  But that is at the hardware layer.  Is your software undermining or supporting your hardware?

SUSE Linux Enterprise is secure by design.  Starting with our world-class Secure Software Supply Chain, SUSE Linux Enterprise Solutions have security built in.  It is the only Linux OS to achieve the highest level of security and cryptographic certification. And, our “certify once, use many” approach means that the SLE market-leading security posture is inherited by all its derivatives, including SLE Micro and SLE BCI (Base Container Images).  These certifications matter.  This level of security ensures that platforms adhere to criteria and standards set forth by governments and regulated industries.

With quantum-safe cryptography supported on IBM z16 combined with the market-leading security of SUSE Linux Enterprise Server, you can check security off your worry list.

The Future is Hybrid

Not everything can be moved to the cloud. That’s why IBM built the z16 platform to accelerate modernization: you can easily integrate it into your hybrid cloud environment. But what about your software? Don’t you want a common code base so your admins are not running one solution on premises and something different in the cloud?

Fortunately, we have you covered.  SUSE Linux Enterprise Server is an adaptable OS and our “common code base” platform bridges traditional and software-defined infrastructure.  SUSE simplifies workload migration, protects your traditional infrastructure, and eases the adoption of containers – regardless of where your workloads are running.

The combination of IBM z16 with SUSE Linux Enterprise Server means that you can seamlessly transition when, where and how you need.

Future Proofing by Design

It seems like AI is prevalent everywhere – from retail stores to machine floors. That is why the IBM z16’s new on-chip Integrated Accelerator for AI supports insights at speed and scale to accelerate decision velocity across the business.

SUSE knows that as enterprises find more ways to put AI to work, the need for the right infrastructure grows. In addition to the right hardware, these enterprises need a strong Linux platform to speed up AI applications as they turn high volumes of data into business value. That is, we design SUSE Linux Enterprise Server so enterprises can continually align data as AI/ML apps become smarter and more sophisticated at solving critical problems. This is how businesses get a competitive edge.

SUSE is excited about the launch of this new IBM z16.  We know running SUSE Linux Enterprise Server on the new IBM z16 can be key to helping CIOs be resilient.

Learn More!

We invite you to learn more about SUSE Linux Enterprise Solutions here and the new IBM z16 here.

 

BTP Sextant and SUSE Rancher Deliver Enterprise-grade Blockchain Solution

Wednesday, 30 March, 2022

When distributed ledger technology burst onto the scene with blockchain implementations for cryptocurrency, the technological breakthrough of an immutable, multiparty ledger – a ledger that’s permanent and tamperproof – was quickly proven.

However, aiming distributed ledger technology at business transformation and digitizing multiparty workflows and agreements needed a wider ecosystem of technologists, developers and tools to mature in order to simplify implementations, unlock the inherent value and fully realize the potential of the technology.

SUSE One Silver Partner, BTP, is an enterprise blockchain company with a mission to bring the benefits of distributed ledgers, smart contracts, and information security to enterprises. We’ve invited BTP to author a guest blog so you can discover how they’re unleashing the technology for building multiparty, efficient, trustworthy, distributed enterprise applications. ~Bret

SUSE guest blog authored by:
Csilla Zsigri, VP Strategy at BTP

BTP Sextant and SUSE Rancher

Marketplaces are transforming the way businesses provide and consume technology. They have become an integral part of a broader go-to-market strategy for technology vendors and a go-to place for technology buyers. The SUSE Rancher Marketplace delivers this experience for cloud-native and open-source enterprise software.

We – at BTP – have made the community edition of our Kubernetes-native, blockchain management and operations platform, Sextant, available via the SUSE Rancher Marketplace. 

Our flagship Sextant offering radically simplifies the deployment and management of distributed ledgers as well as smart contract runtime environments, which in turn allows businesses to focus on building multiparty applications, as well as capturing business value, rather than having to worry about the underlying technology infrastructure.

Installing the Sextant Community Edition on a SUSE Rancher managed Kubernetes cluster is very straightforward. One of our software engineers, Alex Marshall, has created a cookbook recipe to walk you through all the steps, so you can try this for yourself.

Getting Started

This is our cookbook recipe for installing the Sextant Community Edition on a SUSE Rancher managed Kubernetes cluster, to deploy and manage blockchain networks.

License

Use of the Sextant Community Edition is governed by our Marketplace EULA, with the exception of Daml support, which is subject to our Evaluation EULA.

Prerequisites

To install the Sextant Community Edition, you will need to obtain user credentials from BTP. If you don’t have these already, you can request them here.

You will also need the following:

  • SUSE Rancher v2.6 or later with a Kubernetes cluster v1.19 or later
  • kubectl configured to access your cluster

Install Sextant

Log in to Rancher and select the cluster you want to install Sextant on. In our example, this will be the local Rancher cluster.

From the left menu, select Apps & Marketplace and then Charts. Choose the Sextant chart from the list of partner charts; this will take you to the chart’s installation screen.

Here, you will need to specify the namespace and name for your Sextant installation. In our example, we will use sextant in both cases. Note that if the namespace doesn’t exist, the installation process will automatically create this for you.

Make sure you have your BTP supplied credentials ready. As noted above, you can request these here. Then, click the Next button on the bottom right of the page.

On this screen you can configure your Sextant installation. On the left hand side, you will find three options:

  • User Credentials – The only required fields are the Username and Password credentials that you obtained from BTP. These are entered here.
  • Ingress Settings – If you’d like to enable an ingress for Sextant, you can specify this here. This is optional.
  • Database Settings – If you’d like to use an external Postgres database, you can specify this here. This is also optional.

Enter your user credentials in the form, and then click the Install button on the bottom right of the page.

Rancher will now install Sextant on your local cluster. It may take a few minutes for the Sextant images to be pulled down to your cluster from our private repo.

Once the installation has completed, you will see the NOTES from the installation. In our example, these are:

NOTES:
 1. Get the initial Sextant application username and password by running this command:

      kubectl describe pod/sextant-0 --namespace sextant | grep INITIAL_

 2. Get the application URL by running these commands:

      export POD_NAME=$(kubectl get pods -l "app.kubernetes.io/name=sextant" -o jsonpath="{.items[0].metadata.name}")
      echo "Visit http://127.0.0.1:8080 to use your application"
      kubectl port-forward $POD_NAME 8080:80

Make a note of these instructions, as we will now switch to a local terminal window to finish setting up Sextant.

Once you’ve opened a local terminal, start by confirming that you can connect to your Kubernetes cluster using kubectl by running this command:
kubectl get pods
Then, run the first command from the installation NOTES. In our example, this is:
kubectl describe pod/sextant-0 --namespace sextant | grep INITIAL_
This will display the initial username/password combination for your Sextant installation.

Make sure you save this combination, as it will not be possible to retrieve it if the Sextant deployment is restarted.

Now, run the second command from the installation NOTES. In our example, this is:

export POD_NAME=$(kubectl get pods -l "app.kubernetes.io/name=sextant" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80

This will set up a port forward to your Sextant installation and make it accessible on your local machine.

Switch back to your browser and open the URL shown in the terminal output. In our example, this is http://127.0.0.1:8080. This will load the Sextant UI, where you can log in using the initial username/password retrieved earlier.

At this point, you are all set to start using Sextant to deploy and manage blockchain networks. The first thing you will need to do is add a cluster to Sextant. Detailed instructions on how to do this can be found here.
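For convenience, the two post-install steps from the NOTES can be bundled into one small helper. This is only a sketch, not part of the official Sextant documentation: it assumes kubectl is already pointed at your cluster, and that both the release name and namespace are `sextant` (the hypothetical `ns` argument lets you override this).

```shell
#!/usr/bin/env bash
# Sketch of a post-install helper for Sextant (hypothetical, not from BTP's
# docs). Assumes kubectl is configured for the cluster where Sextant was
# installed; "sextant" is the default namespace/release name used above.
set -euo pipefail

sextant_post_install() {
  local ns="${1:-sextant}"

  # Step 1: show the initial username/password. Save these -- they cannot
  # be retrieved again once the Sextant deployment restarts.
  kubectl describe "pod/${ns}-0" --namespace "${ns}" | grep INITIAL_

  # Step 2: locate the Sextant pod and forward local port 8080 to it.
  local pod
  pod=$(kubectl get pods --namespace "${ns}" \
          -l "app.kubernetes.io/name=sextant" \
          -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward --namespace "${ns}" "${pod}" 8080:80
}

# Only a definition here; call sextant_post_install [namespace] against a
# live cluster to run both steps.
echo "helper defined: sextant_post_install"
```

Against a live cluster, `sextant_post_install sextant` prints the initial credentials and then blocks while the port forward is active (stop it with Ctrl-C).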

Notes

  • These instructions are also available online here.
  • Assuming that your local cluster has at least four nodes, you can add this to Sextant and use it to deploy a four node distributed ledger network.
  • To access all Sextant’s features, you can also apply for a Sextant Enterprise Edition evaluation here.

Conclusion

Overcoming shortages in IT skills and resources is a key challenge associated with digital transformation, and distributed ledger technology (DLT) is no exception. The DLT space is a complex one.

BTP Sextant paired with SUSE Rancher provides a solid and easy-to-use technology foundation for building multiparty applications.

Csilla is VP Strategy at enterprise blockchain company BTP. Previously, Csilla was a technology industry analyst at market research and advisory firm 451 Research, part of S&P Global Market Intelligence, where she covered the distributed ledger technology market, among other areas.

Introducing SUSE Premium Technical Advisory Services

Wednesday, 30 March, 2022

Technical Advice, Counsel and Guidance to Keep You Competitive

The skills gap is real and hiring is expensive and time consuming. You need access to a specialist to keep your business running smoothly and stay on top of technology trends. Premium Technical Advisory Services is just that. With an assigned coordinator, you can schedule time with the right specialist at the right time. So whether you need technical expertise, mentorship or guidance, Premium Technical Advisory Services provides just the right amount of service. Premium Technical Advisory Services can be the difference between surviving and thriving in today’s digital world.

Premium Technical Advisory Services at a Glance:

Premium Technical Advisory Services is a 12-month, fixed-cost, tiered offering. Having Premium Technical Advisory Services in place means having access to the right professionals at the right time who can help you:

• Maintain your business… By ensuring that your SUSE solutions are running optimally and securely on the most current releases with the latest guidance and advice to meet your current business objectives.
• Grow your business… By giving you guidance on technology trends, security insights and performance best practices to address your ever-changing business needs.
• Innovate your business… By exploring cloud native technologies like Kubernetes and Containers to move you ahead of your competition.

Note: SUSE Premium Technical Advisory Services is an annual subscription. All benefits reset at the end of 12 months.

What does Premium Technical Advisory Services Offer?

With a range of benefits, Premium Technical Advisory Services provides the services you need to keep your business competitive and your team innovating. Benefits include:

Dedicated Coordinator

Premium Technical Advisory Services provides direct access to a SUSE Services Coordinator who will work with you to schedule the right person or team for your exact concerns. They will understand your needs and get to know you and your business. Imagine one call to gain access to the specific skills you need to address your unique concerns.

On-site Days

Sometimes a phone call or a web meeting won’t do; sometimes you need technical expertise at your location. The top two tiers (Professional and Enterprise) provide that access. On-site days provide a unique opportunity for collaboration and knowledge transfer. Working on-site also enables your technical expert to solve immediate and potential future problems, lower IT costs and heighten productivity quickly and efficiently.

Direct Technical Professionals

Infrastructure can be complex and ever changing. What are containers and why do you need them? Is your high availability system set up correctly? How can you securely and simply manage your mixed Linux environment? Get answers to these and more with direct access to SUSE experts who can address your specific needs on your time frame.

Defined Schedules

Work directly with your coordinator to schedule time with your advisor to meet the needs and timescales of you and your team. Depending on the tier chosen, your lead time for a scheduled date will range from almost immediately to no more than 3 days. Your coordinator makes the schedule painless and Premium Technical Advisory Services gives you the freedom to address your needs with the right team.