How to find the SAP S/4HANA service provider that best suits your needs

Wednesday, 5 October, 2022

Increasing numbers of enterprises are making the move to SAP S/4HANA. But the Gartner® report, “Critical Capabilities for SAP S/4HANA Application Services, Worldwide,” notes “The cost of implementation, together with resource attrition and talent shortages, presents challenges for clients as they engage providers for S/4HANA seeking assessment, migration, transformation, and support services.”

As a result, procurement professionals need to carefully assess the capabilities of SAP S/4HANA service providers to make sure they choose one with the skills and expertise to meet the specific needs of the organization. Selecting a provider that has strengths complementing those of your team can go a long way to determining whether implementation will achieve its objectives.

Looking beyond the technical capability

The technical capability of service providers matters, but outcomes matter most. Ideally, Gartner says, you should “consider providers with intellectual property (IP) such as tools, accelerators, and industry solutions that will drive your desired outcomes.”

Solid SAP S/4HANA skills are a key requirement for any service provider, but it is critical to look at other capabilities that can help ensure a successful implementation. These include change management, process optimization, and specializations in relevant industry verticals.

Consideration should also be given to providers who are willing to sign contracts based on outcomes, Gartner says, and you should “potentially leverage commercial models that involve provider revenue at risk.”

Finding the right fit

Each major SAP S/4HANA service provider has a different set of capabilities. Gartner has extensively researched 20 service providers who meet the criteria for Gartner’s “Magic Quadrant for SAP S/4HANA Services, Worldwide”, and assessed how each one performs in relevant use cases. Its Critical Capabilities report covers four key use cases:

  • Assessment and Planning Services – In the Gartner report, assessment and planning services include both business process and technical impact assessments, as well as consulting, proofs of concept, a roadmap, and business case development. These can apply to either a greenfield SAP S/4HANA installation or migration from a legacy platform to SAP S/4HANA.
  • Business Transformation Services – These services rely on SAP S/4HANA to transform business functions and processes, with the goal of creating better business value. Capabilities assessed in the report include technology enablement, industry and process expertise, and organizational change management.
  • Application Management and Evolution Services – This includes ongoing management services for SAP S/4HANA applications as part of a multi-year contract. Services covered include application development, implementation, integration, testing, maintenance and support, application monitoring, job scheduling, backup and restoration, web services, and database activities.
  • Technical Migration Services – This encompasses project-based SAP S/4HANA deployments without a transformation focus. Services involved include database migration, adapting data models to the SAP S/4HANA data model, and replacing program code.

The Gartner report helps guide you to the best-suited service provider

Transitioning to an SAP S/4HANA environment is a complex undertaking. It has the potential to deliver greater efficiency and flexibility, but only if the implementation is handled correctly. Finding a service provider with the right expertise to complement your internal team will help ensure your SAP S/4HANA project goes as smoothly as possible.

Read the full Gartner report for detailed insights to determine the assessment, migration, transformation, and support services capabilities of 20 SAP S/4HANA service providers.

Download: “Critical Capabilities for SAP S/4HANA Application Services, Worldwide”.

Gartner, Critical Capabilities for SAP S/4HANA Application Services, Worldwide,
By Analysts: Peter Adamo, Fabio Di Capua, Luis Pinto, Jaideep Thyagarajan, Allan Wilkins,
27 June 2022

Gartner does not endorse any vendor, product, or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner’s research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Meet Epinio: The Application Development Engine for Kubernetes

Tuesday, 4 October, 2022

Epinio is a Kubernetes-powered application development engine. Adding Epinio to your cluster creates your own platform-as-a-service (PaaS) solution in which you can deploy apps without setting up infrastructure yourself.

Epinio abstracts away the complexity of Kubernetes so you can get back to writing code. Apps are launched by pushing their source directly to the platform, eliminating complex CD pipelines and Kubernetes YAML files. You move directly to a live instance of your system that’s accessible at a URL.

This tutorial will show you how to install Epinio and deploy a simple application.

Prerequisites

You’ll need an existing Kubernetes cluster to use Epinio. You can start a local cluster with a tool like K3s, minikube or Rancher Desktop, or use any managed service such as Azure Kubernetes Service (AKS) or Google Kubernetes Engine (GKE).

You must have the Helm CLI installed to follow along with this guide; the installation commands below depend on it. You don’t need Helm for day-to-day use of Epinio, but it is required for the initial installation procedure.
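
You can quickly confirm the relevant tools are available before continuing. A minimal check, covering Helm (used by the installation commands below) and, optionally, kubectl for inspecting the cluster directly:

```shell
# Report whether each tool used in this guide is on the PATH.
for tool in helm kubectl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```

If anything is reported missing, install it before moving on.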

The steps in this guide have been tested with K3s v1.24 (Kubernetes v1.24) and minikube v1.26 (Kubernetes v1.24) on a Linux host. Additional steps may be required to run Epinio in other environments.

What Is Epinio?

Epinio is an application platform that offers a simplified development experience by using Kubernetes to automatically build and deploy your apps. It’s like having your own PaaS solution that runs in a Kubernetes cluster you can control.

Using Epinio to run your apps lets you focus on the logic of your business functions instead of tediously configuring containers and Kubernetes objects. Epinio will automatically work out which programming languages you use, build an appropriate image with a Paketo Buildpack and launch your containers inside your Kubernetes cluster. You can optionally use your own image if you’ve already got one available.

Developer experience (DX) is a hot topic because good tools reduce stress, improve productivity and encourage engineers to concentrate on their strengths without being distracted by low-level components. A simpler app deployment experience frees up developers to work on impactful changes. It also promotes experimentation by allowing new app instances to be rapidly launched in staging and test environments.

Epinio Tames Developer Workflows

Epinio is purpose-built to enhance development workflows by handling deployment for you. It’s quick to set up, simple to use and suitable for all environments from your own laptop to your production cloud. New apps can be deployed by running a single command, removing the hours of work required if you were to construct container images and deployment pipelines from scratch.

While Epinio does a lot of work for you, it’s also flexible in how apps run. You’re not locked into the platform, unlike other PaaS solutions. Because Epinio runs within your own Kubernetes cluster, operators can interact directly with Kubernetes to monitor running apps, optimize cluster performance and act on problems. Epinio is a developer-oriented layer that imbues Kubernetes with greater ease of use.

The platform is compatible with most Kubernetes environments. It’s edge-friendly and capable of running with 2 vCPUs and 4 GB of RAM. Epinio currently supports Kubernetes versions 1.20 to 1.23 and is tested with K3s, k3d, minikube and Rancher Desktop.

How Does Epinio Work?

Epinio wraps several Kubernetes components in higher-level abstractions that allow you to push code straight to the platform. Your Epinio installation inspects your source, selects an appropriate buildpack and creates Kubernetes objects to deploy your app.

The deployment process is fully automated and handled entirely by Epinio. You don’t need to understand containers or Kubernetes to launch your app. Pushing up new code sets off a sequence of actions that allows you to access the project at a public URL.

Epinio first compresses your source and uploads the archive to a MinIO object storage server that runs in your cluster. It then “stages” your application by matching its components to a Paketo Buildpack. This process produces a container image that can be used with Kubernetes.

Once Epinio is installed in your cluster, you can interact with it using the CLI. Epinio also comes with a web UI for managing your applications.

Installing Epinio

Epinio is usually installed with its official Helm chart. This bundles everything needed to run the system, although there are still a few prerequisites.

Before deploying Epinio, you must have an ingress controller available in your cluster; NGINX and Traefik are two popular options. Ingresses let you expose your applications using URLs instead of raw hostnames and ports. Epinio requires your apps to be deployed with a URL, so it won’t work without an ingress controller. New deployments automatically generate a URL, but you can manually assign one instead. Most popular single-node Kubernetes distributions such as K3s, minikube and Rancher Desktop come with one either built-in or as a bundled add-on.

You can manually install the Traefik ingress controller if you need to by running the following commands:

$ helm repo add traefik https://helm.traefik.io/traefik
$ helm repo update
$ helm install traefik --create-namespace --namespace traefik traefik/traefik

You can skip this step if you’re following along using minikube or K3s.

Preparing K3s

Epinio on K3s doesn’t have any special prerequisites. You’ll need to know your machine’s IP address, though; use it instead of 192.168.49.2 in the following examples.
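
One common way to look up that address on a Linux host (this assumes a GNU/Linux environment; any method of finding the host’s primary IP works):

```shell
# Print the first IP address reported for this host (Linux).
hostname -I | awk '{print $1}'
```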

Preparing minikube

Install the official minikube ingress add-on before you try to run Epinio:

$ minikube addons enable ingress

You should also double-check your minikube IP address with minikube ip:

$ minikube ip
192.168.49.2

Use this IP address instead of 192.168.49.2 in the following examples.

Installing Epinio on K3s or minikube

Epinio needs cert-manager so it can automatically acquire TLS certificates for your apps. You can install cert-manager using its own Helm chart:

$ helm repo add jetstack https://charts.jetstack.io
$ helm repo update
$ helm install cert-manager --create-namespace --namespace cert-manager jetstack/cert-manager --set installCRDs=true

All other components are included with Epinio’s Helm chart. Before you continue, set up a domain to use with Epinio. It needs to be a wildcard where all subdomains resolve back to the IP address of your ingress controller or load balancer. You can use a service such as sslip.io to set up a magic domain that fulfills this requirement while running Epinio locally. sslip.io runs a DNS service that resolves to the IP address given in the hostname used for the query. For instance, any request to *.192.168.49.2.sslip.io will resolve to 192.168.49.2.
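
To make the pattern concrete, here is a small sketch that derives the Epinio endpoints from a cluster IP (the IP shown is the example value used throughout this guide; substitute your own):

```shell
# Build the sslip.io wildcard domain and the resulting Epinio URLs.
CLUSTER_IP="192.168.49.2"
GLOBAL_DOMAIN="${CLUSTER_IP}.sslip.io"

echo "Global domain:      ${GLOBAL_DOMAIN}"
echo "API server:         https://epinio.${GLOBAL_DOMAIN}"
echo "Apps resolve under: *.${GLOBAL_DOMAIN}"
```

Every subdomain of the global domain resolves back to the cluster IP, which is exactly the wildcard behavior Epinio needs.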

Next, run the following commands to add Epinio to your cluster. Change the value of global.domain if you’ve set up a real domain name:

$ helm repo add epinio https://epinio.github.io/helm-charts
$ helm install epinio --create-namespace --namespace epinio epinio/epinio --set global.domain=192.168.49.2.sslip.io

You should get an output similar to the following. It provides information about the Helm chart deployment and some getting started instructions from Epinio.

NAME: epinio
LAST DEPLOYED: Fri Aug 19 17:56:37 2022
NAMESPACE: epinio
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
To interact with your Epinio installation download the latest epinio binary from https://github.com/epinio/epinio/releases/latest.

Login to the cluster with any of these:

    `epinio login -u admin https://epinio.192.168.49.2.sslip.io`
    `epinio login -u epinio https://epinio.192.168.49.2.sslip.io`

or go to the dashboard at: https://epinio.192.168.49.2.sslip.io

If you didn't specify a password, the default one is `password`.

For more information about Epinio, feel free to check out https://epinio.io/ and https://docs.epinio.io/.

Epinio is now installed and ready to use. If you hit a problem and Epinio doesn’t start, refer to the documentation to check any specific steps required for compatibility with your Kubernetes distribution.

Installing the CLI

Install the Epinio CLI from the project’s GitHub releases page. It’s available as a self-contained binary for Linux, Mac and Windows. Download the appropriate binary and move it into a location on your PATH:

$ wget https://github.com/epinio/epinio/releases/download/v1.1.0/epinio-linux-x86_64
$ sudo mv epinio-linux-x86_64 /usr/local/bin/epinio
$ sudo chmod +x /usr/local/bin/epinio

Try running the epinio version command:

$ epinio version
Epinio Version: v1.1.0
Go Version: go1.18.3

Next, you can connect the CLI to the Epinio installation running in your cluster.

Connecting the CLI to Epinio

Login instructions are shown in the Helm output displayed after you install Epinio. The Epinio API server is exposed at epinio.<global.domain>. The default user credentials are admin and password. Run the following command in your terminal to connect your CLI to Epinio, assuming you used 192.168.49.2.sslip.io as your global domain:

$ epinio login -u admin https://epinio.192.168.49.2.sslip.io

You’ll be prompted to trust the self-signed certificate generated by your Kubernetes ingress controller if you’re using a magic domain without setting up SSL. Press the Y key at the prompt to continue:

Logging in to Epinio in the CLI

You should see a green Login successful message that confirms the CLI is ready to use.

Accessing the Web UI

The Epinio web UI is accessed by visiting your global domain in your browser. The login credentials match the CLI, defaulting to admin and password. You’ll see a browser certificate warning and a prompt to continue when you’re using an untrusted SSL certificate.

Epinio web UI

Once logged in, you can view your deployed applications, interactively create a new one using a form and manage templates for quickly launching new app instances. The UI replicates most of the functionality available in the CLI.

Creating a Simple App

Now you’re ready to start your first Epinio app from a directory containing your source. You don’t have to create a container image or run any external tools.

You can use the following Node.js code if you need something simple to deploy. Save it to a file called index.js inside a new directory. It runs an Express web server that responds to incoming HTTP requests with a simple message:

const express = require('express')
const app = express()
const port = 8080

app.get('/', (req, res) => {
  res.send('This application is served by Epinio!')
})

app.listen(port, () => {
  console.log(`Epinio application is listening on port ${port}`)
})

Next, use npm to install Express as a dependency in your project:

$ npm install express
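
It can also help to add a package.json before pushing, since the Paketo Node.js buildpack typically detects a Node app through it. This is a minimal illustrative sketch; the name and start script shown are assumptions, not Epinio requirements:

```shell
# Write a minimal package.json describing the demo app
# (name and start script are illustrative).
cat > package.json <<'EOF'
{
  "name": "epinio-demo",
  "version": "1.0.0",
  "scripts": {
    "start": "node index.js"
  },
  "dependencies": {
    "express": "^4.18.0"
  }
}
EOF
```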

The Epinio CLI has a push command that deploys the contents of your working directory to your Kubernetes cluster. The only required argument is a name for your app.

$ epinio push -n epinio-demo

Press the Enter key at the prompt to confirm your deployment. Your terminal will fill with output as Epinio logs what’s happening behind the scenes. It first uploads your source to its internal MinIO object storage server, then acquires the right Paketo Buildpack to create your application’s container image. The final step adds the Kubernetes deployment, service and ingress resources to run the app.

Deploying an application with Epinio

Wait until the green App is online message appears in your terminal, then visit the displayed URL in your browser to see your live application:

App is online

If everything has worked correctly, you’ll see This application is served by Epinio! when using the source code provided above.

Application running in Epinio

Managing Deployed Apps

App updates are deployed by repeating the epinio push command:

$ epinio push -n epinio-demo

You can retrieve a list of deployed apps with the Epinio CLI:

$ epinio app list
Namespace: workspace

✔️  Epinio Applications:
|        NAME         |            CREATED            | STATUS |                     ROUTES                     | CONFIGURATIONS | STATUS DETAILS |
|---------------------|-------------------------------|--------|------------------------------------------------|----------------|----------------|
| epinio-demo         | 2022-08-23 19:26:38 +0100 BST | 1/1    | epinio-demo-a279f.192.168.49.2.sslip.io         |                |                |

The app logs command provides access to the logs written by your app’s standard output and error streams:

$ epinio app logs epinio-demo

🚢  Streaming application logs
Namespace: workspace
Application: epinio-demo
🕞  [repinio-demo-057d58004dbf05e7fb7516a0c911017766184db8-6d9fflt2w] repinio-demo-057d58004dbf05e7fb7516a0c911017766184db8 Epinio application is listening on port 8080

Scale your application with more instances using the app update command:

$ epinio app update epinio-demo --instances 3

You can delete an app with app delete. This will completely remove the deployment from your cluster, rendering it inaccessible. Epinio won’t touch the local source code on your machine.

$ epinio app delete epinio-demo

You can perform all these operations within the web UI as well.

Conclusion

Epinio makes application development in Kubernetes simple because you can go from code to a live URL in one step. Running a single command gives you a live deployment that runs in your own Kubernetes cluster. It lets developers launch applications without surmounting the Kubernetes learning curve, while operators can continue using their familiar management tools and processes.

Epinio can be used anywhere you’re working, whether on your own workstation or as a production environment in the cloud. Local setup is quick and easy with zero configuration, letting you concentrate on your code. The platform uses Paketo Buildpacks to discover your source, so it’s language and framework-agnostic.

Epinio is one of the many offerings from SUSE, which provides open source technologies for Linux, cloud computing and containers. Epinio is SUSE’s solution to support developers building apps on Kubernetes, sitting alongside products like Rancher Desktop that simplify Kubernetes cluster setup. Install and try Epinio in under five minutes so you can push app deployments straight from your source.

It’s here! SUSE Rancher, SLE Micro and K3s now available with HPE Pointnext support

Monday, 3 October, 2022

  • SUSE and HPE bring simplified management at the distributed edge to HPE GreenLake 

For twenty-five years, SUSE has collaborated and innovated with Hewlett Packard Enterprise (HPE) to create a seamless hardware and software experience. As part of this extended partnership, we are proud to announce the availability of SUSE Rancher, K3s and SUSE Linux Enterprise Micro (SLE Micro) with HPE Pointnext support bundled in, starting October 3rd.

HPE and SUSE’s latest solution 

HPE GreenLake is recognized as one of the leading distributed cloud platforms delivering data from edge to cloud, and as the most secure platform for IT to deliver a distributed cloud experience. SUSE Rancher, K3s and SUSE Linux Enterprise make up SUSE’s industry-leading software stack which, when combined with HPE GreenLake, enables companies to optimize their operations from edge to cloud with workload and data lifecycle management at an enterprise scale. HPE has embraced SUSE Rancher management server and K3s as a valued and supported component of HPE GreenLake.

What are SUSE Rancher, SLE Micro and K3s 

The container management market has seen accelerated growth over the last year, and SUSE Rancher is a market leader in multi-cloud container development platforms. SLE Micro is SUSE’s immutable operating system, purpose-built for containerized workloads. K3s is a powerful but lightweight certified Kubernetes distribution which, when combined with SUSE Rancher and SLE Micro, is ideal for running production workloads across resource-constrained, remote locations or on IoT devices. Together, this stack enables customers to deploy and manage HPE GreenLake workloads at the distributed edge, complementing HPE’s Ezmeral solution.

Innovate everywhere 

At SUSE, we believe in giving customers the power to innovate everywhere – from the data center to the cloud, to the edge and beyond.  SUSE products are 100% open source, giving customers the power to make decisions according to business requirements and innovation needs, rather than contractual obligations.  With SUSE and HPE GreenLake, customers can innovate with speed and agility, focusing on their business-critical edge applications. In addition, HPE customers can enjoy the same level of support for their SUSE solutions as they do for their HPE solutions via HPE Pointnext. 

SUSE Chief Technology and Product Officer, Thomas Di Giacomo, explains the significance of this extended partnership and what it means for our joint customers: “With HPE, we share the vision that the next wave of enterprise computing is distributed. Due to scale, scope and complexity, customers are increasingly opting for ‘as a service’ models to manage their edge-to-cloud computing. We have a more-than-25-year partnership with HPE, and this expansion will enable our customers to be even more successful in edge-to-cloud deployments.” 

If you’d like to find out how SUSE and HPE GreenLake can support your Enterprise Container Management strategy for greater agility, speed and innovation, contact your HPE rep or take a test drive at: https://testdrive.greenlake.hpe.com/ 

TCS Cognix Enterprise CaaS Solution with SUSE Rancher

Friday, 23 September, 2022

Introduction:

The following blog has been written by TCS Agile Computing, Cloud & Edge Centre of Excellence (CoE) in collaboration with the Global SUSE GSI team. It examines the TCS Cognix Enterprise Container-as-a-Service (CaaS) solution with SUSE Rancher.

Background:

Businesses are adapting to the post-pandemic era and demonstrating certain inherent characteristics: agility, and a cloud-first, cloud-native approach to enabling enterprise operations. CIOs are working to break away from monolithic IT design so that dynamic business demands can be met through scalable, modular solutions that are portable and easy to deploy. The adoption of technologies like containers, which aid easy deployment and portability for business applications, is on the rise. A container approach can comprise the production environment, application workloads and their dependencies, including hardware and middleware, all composed in one modular entity. The benefits of portability, improved availability, reduced server footprint, agility (time to market) and innovation clearly outweigh the challenges of technology complexity and skill gaps. According to the CNCF 2021 survey, 96% of organizations are either using or evaluating Kubernetes, which helps manage containers, including deployment and automation, and 55% of participants are using containers in production in North America. The CNCF survey further calls out 2022 as the watershed year in which containers become the deployment platform of choice.

Cloud Native Adoption Challenges:

Organizations still face adoption challenges, which can be broadly categorized as follows:

  • Technical architecture decisions: integration, architecture patterns, and balancing feature priorities, e.g., automation, workload portability, cloud-native DR, configuration drift management, coexistence of declarative and imperative configuration, and degree of commitment to deployment infrastructure (cloud, on-prem)
  • Solution architecture decisions: the choice of products for orchestration, networking, security, observability, etc.
  • Technical debt: existing legacy workloads, third-party products, unsupported products, hosting of non-compliant apps such as SSL 1.0 apps, and older CI/CD technology
  • Evolutionary challenges: new network capabilities, new security paradigms, serverless patterns, and SaaS adoption
  • Technology gaps: lack of support for open source distributions and lack of out-of-box (OOB) capabilities, e.g., multi-cluster management support for open source Kubernetes, multi-control-plane service mesh, support for Windows containers, and a robust edge deployment platform

Business Challenges:

  • Organizational practices: lack of a cost allocation practice in the data center, lack of standardization in tools (observability, WAF), distributed decision making for platforms and tools, ad-hoc request fulfillment, organizational silos, and lack of shift-left practices
  • Skills gap: last but not least, another key challenge is the skills gap and developer onboarding

To illustrate how the above challenges have manifested in business: a multinational insurer in the UK was unable to migrate its Windows workloads; a leading bank in the US is focusing on supporting its multi-protocol workloads with a secure service mesh; a leading retailer in the UK did not have governance in place and had to realign its teams; a leading bank in EMEA had started its cloud journey but found the costs unmanageable; and a multinational healthcare company in Europe experienced over-provisioning due to a lack of chargeback.

Incorrect decision-making can cost an organization tens of millions of dollars; more critically, it can set the transformation journey back by many months or restrict technical capability.

To set up an enterprise-grade container platform, one must provide its characteristic features. The container platform requires integration with multiple tools for observability, storage, backup and restore, disaster recovery, network components, security, multi-cluster management, registry, and the DevSecOps toolchain.

Figure 1 – Functional Layers of a Typical Container Platform

The impact of Platform build challenges manifests as below –

Solution Highlights:

TCS Cognix Enterprise CaaS is a holistic solution that enables organizations to overcome adoption and operational challenges in a typical Kubernetes platform through an automation-first approach, from platform provisioning to operations. Cognix Enterprise CaaS leverages SUSE Rancher to help customers provide their consumers a true Container-as-a-Service consumption model, inclusive of security, DevSecOps and IaC capabilities across on-prem, private and public clouds. It comes with prebuilt integrations and out-of-box (OOB) configurations, and is hybrid-cloud ready.

Cognix Enterprise CaaS, powered by the Machine First Delivery Model, accelerates digital transformation with pre-built digital solutions, assessment frameworks, platform-agnostic architectures and re-usable automation toolkits, focused on industry-specific use cases.

Following are the highlights of the TCS Cognix Enterprise CaaS solution:

a. Consumer Friendly:

  • Pay as you go: project teams can consume containers in seconds and be charged on a per-hour basis.
  • Flexibility: the platform allows resources to be provisioned flexibly, e.g., in more granular units of vCPU than a typical public cloud provider can offer.
  • Developers: an integrated pipeline helps developers focus on functionality development.
  • Testers: test environments can be provisioned on demand and scaled down to zero. Configuration drift is tracked and reconciled to provide a better experience.
  • Business users: user experience can be tracked using purpose-designed SLIs and trackable SLOs.
  • ITOps: a service-catalog-driven approach reduces the skill requirements for operations and helps the team manage a larger estate.

b.  Multiple technology choices to leverage:

c.  Integration with products to enable key features:

  • For storage integration, the platform integrates with NetApp, which provides storage with high availability, Data Protection, Resiliency, Encryption, Backup, Migration and DR capabilities.

d.  Automation:

  • Service-catalogue-driven Day-2 operations provide the shift-left push; ITOps activities are automated.
  • Truly automated: the platform can be terminated and recreated within minutes.
  • Follows GitOps principles across compute, storage and network.
  • Provides continuous optimization hints by tracking historical usage.
  • Provides customized dashboards for managing multiple clusters and workloads.

Key Features of CaaS Platform using Rancher

Benefits:

TCS Cognix Enterprise CaaS helps customers provide their consumers a true Container-as-a-Service consumption model, offering portability of applications across many platforms, business resiliency, technology agility, scalability, security and strategic governance, leading to higher productivity and ease of management.

To summarize, TCS Cognix Enterprise CaaS is a secure, enterprise-production-grade, automation-first, resilient platform that adapts to consumer demand and delivers IT both in the cloud and on-prem.

Contact:

For more information, email: agilecomputing.coe@tcs.com

Authors:

Rajan Pillay

Global Head – Agile Infrastructure CoE
Technology & Innovations
Cognitive Business Operations
Tata Consultancy Services

Rajan Pillay is the Global Head of Agile Infrastructure and PRIME for Cognitive Business Operations at Tata Consultancy Services (TCS). He has 26+ years of IT experience in the space of Consulting, Solution Design & Implementation of Data Center, Cloud & Edge Computing.
Ganesh Kumar Kasiviswanathan

Lead – Agile Infrastructure CoE
Technology & Innovations
Cognitive Business Operations
Tata Consultancy Services

Ganesh Kumar leads Container offerings in Agile Infrastructure CoE for Cognitive Business Operations at Tata Consultancy Services (TCS). He has 12 years of IT experience in Consulting, Solution Design & Implementation of Kubernetes solutions.
Soumitra Mandal

Architect – Container & DevOps Solutions
Banking, Financial Services and Insurance, UK
Tata Consultancy Services

Soumitra has 27+ years of IT experience with a focus on the BFSI domain and technology architecture, and a strong track record of working with business and IT leaders in large transformation programs, focusing on digital technology solutions toward strategic objectives.

Reduce the Barriers to Entry for Blockchain by Leveraging Kubernetes’ Capabilities

Friday, 9 September, 2022

Distributed ledger technologies (DLT) are changing the nature of doing business and helping companies reimagine how to manage tangible and digital assets.

DLT platforms, including blockchain and associated technologies such as smart contracts and digital tokens, have crossed the chasm of hype and are well on their way to driving real productivity. They are fundamentally changing the nature of doing business across organizational boundaries and helping companies reimagine how they create and manage identity, data, brand, provenance, professional certifications, copyrights, and other tangible and digital assets. In fact, while companies canceled purely speculative blockchain projects during the pandemic, many are doubling down on those that they identified as delivering real, measurable value to the organization.

At the Open Source in Finance Forum, which took place in London on July 13, experts across financial services, technology, and open source gathered to discuss how best to leverage open source software to solve industry challenges and gain competitive advantages. As the only conference dedicated to driving collaboration and innovation in financial services through open source software and standards, it offers the financial services community an excellent opportunity to learn from some of the top visionaries in the industry as well as from each other.

Though I did not have the opportunity to attend the Forum, BTP, an enterprise blockchain company and a SUSE One silver partner, attended and presented. During their session, BTP shared their subject matter expertise on how the barriers to adopting blockchain can be reduced by leveraging Kubernetes. You can watch the recorded video of their session, as I did, without leaving the comfort of my home office or worrying about whether my flight would be canceled.

BTP began the discussion with a brief introduction to the distributed ledger technology landscape, an open source initiative modelled on the CNCF landscape. This helped level-set the audience before going deeper. Next, the presenter explained the growing importance of Kubernetes as the bedrock of the increasingly complex world of blockchain, and why it is such a good environment in which to deploy blockchain solutions.

The complexity of blockchain deployments, added to the complexity of Kubernetes deployments, is often a barrier to adoption of this technology. BTP shares their secret sauce on how they address this by automating the deployment and management of DLTs from the Hyperledger Foundation on Kubernetes, overlaying support for the open source smart contract language, Daml, from Digital Asset. The video showcases the benefits of this approach by highlighting their work in the insurance and financial services arena.

What I especially enjoyed about watching their session was the demo which showed the ease of deploying an enterprise blockchain stack on top of SUSE Rancher Kubernetes. Please take a moment to share your success using Kubernetes to deploy an enterprise blockchain stack.

 

30 Fun Facts about SUSE

Friday, 2 September, 2022

Today, SUSE turns 30. From our start in Nuremberg on September 2, 1992 to our record-breaking IPO in 2021 and continued growth, SUSE has always put community first, including our employee community. 

Looking back over my 33 years of employment at SUSE, I have often observed that there are more memories and joys in the journey than in the destination. My journey with SUSE (before it was even SUSE!) has been just that!  

Early in my career, I was inspired to volunteer to manage SUSE’s presence at an upcoming LinuxWorld Conference, a new experience for me. That decision led me down a path of learning and discovery that is indicative of all the opportunities I’ve been given during my time at SUSE.  We are encouraged to embody the spirit of open source, explore new experiences, and grow.  

In celebration of our 30th birthday, we’re sharing fun facts about our journey from 1992 to today. There should be a few surprises for even the most die-hard SUSE fans. Enjoy! 

  1. SUSE is an acronym! S.u.S.E. is a German acronym for Software und System-Entwicklung (software and systems development). 
  2. SUSE loves firsts. We were the world’s first enterprise Linux distribution.  
  3. We went public in 2021. SUSE had the most successful IPO in Europe that year. Watch this video that takes you into the celebration.  
  4. Why is our mascot a chameleon? Check out this video from SUSE’s Emiel Brok to get the answer. 
  5. SUSE believes in education. The SUSE Academic program provides access to training, knowledge, and SUSE open source tools for schools, colleges, universities, academic hospitals, non-profit museums, libraries and more. 
  6. SUSE helps others. SUSEcares, our philanthropic program, enables employees to donate to charities of their choice while engaging in well-being activities.  
  7. SUSE means business. We’re helping our customers improve the world and manage their digital transformation. Learn more at the new SUSE Exchange events launching this month. 
  8. We’re traded on the Frankfurt stock exchange. 
  9. The original SUSE mascot was nicknamed Kroete or Kröte (German for “toad”), which might have been the basis for the Old Toad beer released around 2010. 
  10. SUSE loves to celebrate community. SUSECON, our user conference, debuted in North America in 2012. We marked our 10-year anniversary earlier this year. 
  11. We walk the DEI talk. As part of our DEI initiatives, SUSE CEO Melissa Di Donato started the Women in Technology network, the first employee network at SUSE. We now have four, including Pride, Go Green and Open Source Community Citizens. 
  12. Ever been to a Rodeo? SUSE acquired Rancher Labs in 2020. Register for an upcoming Rancher Rodeo to learn more about cloud native infrastructure and security. 
  13. SUSE has planted 3,709 trees in Madagascar as part of our SUSE Forest climate initiative. 
  14. Where’s our Grammy? The SUSE Band has been making laugh-out-loud music videos since 2016. Download the Greatest Hits album to hear classics like Uptime Funk and more. 
  15. SUSE is generous. We’re working to raise 30,000 meals for Ukrainians through our Share the Meal challenge. Our employees, customers and partners are making it happen. 
  16. SUSE prioritizes wellness. Our employees love Wellness Wednesdays which highlight physical and mental health, fitness challenges to promote healthy competition, and supporting individual efforts such as charity running events.   
  17. Full circle moment. SUSE’s CEO, Melissa Di Donato, started her career as an SAP R/3 developer, writing apps to run on Linux. Fast forward to 2019, when she became CEO of SUSE; today, SAP is our most tenured partner. 
  18. Of mice and windmills. The first SUSECON was held in Orlando, Florida, home of DisneyWorld. The first EMEA SUSECON was in Amsterdam. 
  19. Containers have you feeling insecure? SUSE acquired NeuVector in 2021 to help customers shore up their container and Kubernetes security. Learn more. 
  20. SUSE supports women in tech. SUSE’s Women in Technology group provides mentors, guest speakers and allyship. All employees can participate. 
  21. SUSE supports young careers. SUSE Camp is a newly launched early career program that engages our junior employees and helps them find their career track. To celebrate 30 years, they’re spearheading a Go Green initiative in the month of September. 
  22. Remember when software was on disks? SUSE’s first product shipped in a pretty big box. The internet was in its infancy. 
  23. SUSE loves open source and community.  We’re involved in over 50 open source projects in addition to our own.  Join the SUSE Rancher community  to learn more.  
  24. SUSE is committed to climate action. As a company, we’re setting science-based emissions targets and developing roadmaps to execute and deliver these targets.  
  25. Our executives love to have fun. Our CEO, Melissa Di Donato, and our Chief Communications Officer were in the music video for SUSE Has It. Although the video has been retired, you can download that song and many more from the SUSE Band Greatest Hits album available here. 
  26. We’re not a cat, but we’ve had many lives. SUSE has been independent for much of its history, but operated as part of Novell and Micro Focus at different points in time. Now we’re a public company. 
  27. SUSE and Google share a former CEO. Prior to his stint at Google, Eric Schmidt was the CEO of Novell, which acquired SUSE. He drove the bulldozer that broke ground at the company’s then-new Provo office. This was a great day and now we’re moving into a new office this month. 
  28. We made the most of the lockdown. The entire SUSE acquisition of Rancher was conducted virtually. None of the parties met face-to-face until after the deal closed. 
  29. SUSE people love SUSE. I am one of them! As we celebrate SUSE’s 30th birthday, we have employees who have been on this journey even longer than the company itself, celebrating 36, 33, and 29-year work anniversaries, with many more in the teens. I just celebrated 33 years at SUSE. 
  30. SUSE is the future. Our DNA is Linux, but we are blazing trails in the cloud native world.  

What a ride. Can’t wait for the next 30. 

Celebrating 30 Years of Openness

Thursday, 1 September, 2022

Today is a milestone anniversary for SUSE. We are celebrating 30 years of open source innovation, and the people who made it possible: our employees, community, partners, and shareholders. 

For the past 30 years, SUSE’s impact as an open source trailblazer has been as globally relevant as it is longstanding. We’ve continued to be a driver of innovation in open source – from CAT scans, life-preserving vaccines, and space missions to every-day comforts such as smart retail experiences, remote learning, and autonomous vehicles. I’m so proud of how the foundation we have built continues to shape the technology that the world uses every day.  

None of it would have been possible without each member of our SUSE community. I thank you all. 

To our employees: 

Today is not just a celebration of SUSE’s 30th birthday; it is an opportunity to recognise the remarkable individuals from around the world who make us who we are today, and who have built SUSE over three decades. After three years as SUSE CEO, I remain inspired and humbled by the hard work and dedication of the SUSE team as we continue to transform the open source landscape. 

I’m proud of our employees’ support of SUSE’s global community. Initiatives such as the SUSE Forest, to fight climate change; SUSE Women in Technology, which helps women find connections, support and growth opportunities; Share the Meal, which donates meals to those affected by the war in Ukraine; and SUSECares, which supports various charitable organisations nominated by employees, all demonstrate the heart and compassion of the SUSE family. I am honoured to lead this talented, passionate, and engaged team. 

To our community: 

Open source software relies on a community to flourish, and SUSE is a community-first organisation. Forums such as openSUSE, the SUSE & Rancher Community, and the Kubernetes Security group bring technology practitioners and open source experts together to share information and solve problems. Some of the best ideas for future products emerge from our collaborative and supportive community of thousands of open source advocates. 

Openness is at the centre of everything we do at SUSE, and this open ethos will remain embedded in our culture. Whether you came to us 30 years ago for an enterprise Linux distribution (we were the first!), know us from the Kubernetes world through SUSE Rancher and SUSE NeuVector, or are an edge innovator seeking the next big thing, we thank you for your support and trust. 

To our partners: 

We are grateful to have a network of technology and distribution partners who represent the best brands in the industry. From our first partner, SAP, to Microsoft, HPE, Amazon, Dell, Fujitsu, Google, AMD, and many more, we grow and evolve from our relationships with you. Our customers benefit from your technology leadership and high service levels, enabling them to reach new heights in their businesses. Our achievements over the past 30 years owe much to you and your teams. 

To our shareholders: 

Thank you for trusting SUSE and our vision for the future of open source technology. Your investment led us to be the most successful enterprise software IPO in Europe in 2021, and fuels our growth as we tackle the IT infrastructure challenges of today and the next 30 years. 

The spirit of openness at SUSE continues to thrive, providing a foundation that addresses today’s business challenges, and positions us to be future-ready. We lead the way in Business-Critical Linux, Enterprise Container Management, and Edge technology. Whatever comes next, we are ideally positioned to deliver solutions that enable our customers to change the world. 

 

An Edge Vision for the Metaverse

Wednesday, 31 August, 2022

Having hundreds of interconnected devices in what we now call edge environments is nothing new. In manufacturing and engineering facilities, PLCs (programmable logic controllers) have been actuating and monitoring industrial devices since the invention of the microchip. What’s different now is that the concept of what constitutes a network node is changing rapidly, along with the number of interconnected devices.

Some form(s) of “the metaverse” and Web 3.0 will probably exist in a few years, like or loathe the idea. Web 2.0 primarily allowed users to interact with one another using the keyboard/touchpad, microphone, and webcam/phone cam. Web 3.0 will introduce new dimensions of interaction, such as virtual and decentralized 3D worlds and experiences. This will also trigger an influx of new gadgets, such as haptic gloves, that will allow you to feel objects in the metaverse – all of which will be made possible by edge devices and applications.

As we think about it now, the edge will be ubiquitous and likely be populated with thousands of devices. This raises new questions about managing and operating these devices in a consistent, reliable, and secure manner. After all, you wouldn’t want your haptic glove to misbehave in the metaverse or your autonomous vehicle sensors to be hijacked by malware.

 

Operating Systems for the edge

As more devices come online, their management and security will be front-of-mind for administrators. Immutable Linux operating systems, which separate the system space from the application space, are already gaining followers – even in consumer devices. For small edge devices, similar methodologies will likely leverage containerized architectures. Operating systems optimized for speed, security and immutability should provide the base for interoperable, platform-independent edge devices and applications.

SUSE Linux Enterprise Micro (SLE Micro) is an example of such an OS: lightweight, secure, maintenance-free and tailor-made for container-based edge workloads. It automates mundane but important management tasks for edge devices, such as updates, rollbacks, and recoveries. Its light footprint helps battery-operated devices last longer. Developers can also quickly experiment and code on SLE Micro to build apps ranging from wearables to smart cities, transportation, and many more.
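To make the update-and-rollback model concrete: on a transactional system like SLE Micro, changes are written into a new snapshot and only become active after a reboot, so a bad update can simply be rolled back. A rough sketch of the admin workflow follows – the commands are the standard transactional-update tooling, but the package name is illustrative and details vary by release:

```shell
# Pull all pending updates into a new snapshot; the running system is untouched
transactional-update up

# Install an additional package the same way (package name is illustrative)
transactional-update pkg install htop

# Reboot to activate the new snapshot
reboot

# If the new snapshot misbehaves, revert to the previous known-good one
transactional-update rollback
reboot
```

Because every change lands in its own snapshot, a fleet of edge devices can be updated and, if needed, reverted without hands-on recovery work.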

 

Compute for the edge

Edge devices and applications will generate a significant amount of data that needs to be processed and analyzed in real-time to provide an optimal end-user experience. For example, field engineers can perform remote monitoring with digital twin virtualization by capturing and streaming live sensor data via AR-enabled glasses back to the main office and receiving real-time remote guidance.

Thus, edge computing resources need to reside closer to where the edge devices are in order to meet low-latency requirements. This could mean being close to your decentralized 3D worlds. The edge infrastructure also needs to be elastic, reliable and fault tolerant. Kubernetes is a sensible choice here: it scales well and is self-healing. SUSE Edge is an example of a certified, lightweight and secure Kubernetes solution ideal for running at edge locations.
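As a sketch of what self-healing looks like in practice, the snippet below deploys a small workload with plain kubectl. It assumes kubectl already points at a lightweight edge cluster (for example one based on K3s); the image and names are purely illustrative:

```shell
# Declare the desired state: three replicas of a hypothetical edge service.
# Kubernetes restarts failed pods and reschedules them if a node goes down.
kubectl create deployment sensor-gateway \
  --image=registry.example.com/sensor-gateway:1.0 --replicas=3

# Expose the service inside the cluster for nearby devices to reach
kubectl expose deployment sensor-gateway --port=8080

# Watch Kubernetes converge the actual state toward the desired state
kubectl get pods -l app=sensor-gateway --watch
```

The operator only declares the desired state; the cluster continuously reconciles toward it, which is exactly the property that makes unattended edge sites viable.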

 

Secure code for the edge

One aspect of developing the new generations of software behind any metaverse-like future is ensuring that the code running on edge devices and compute infrastructure is secure. To do that, we must continuously scan our code for vulnerabilities, inspect traffic in real time for suspicious behavior, protect sensitive data, and automate security policies. SUSE NeuVector is one such cybersecurity platform that does all of this, protecting edge applications from development through QA and production environments.

The Edge and the Metaverse have a symbiotic relationship. Each is enabling and strengthening the other to create new and mind-boggling possibilities. Key aspects of user experience, such as low latency, automated maintenance, uninterrupted operation, and security, will underpin their success and mainstream adoption. We will require new approaches to deploying data centers and maintaining, managing, and securing software. Technology partners such as SUSE provide innovative open-source solutions to satisfy such edge computing requirements.

 

About the Author

Vishal Ghariwala is the Chief Technology Officer for the APJ and Greater China regions for SUSE, a global leader in true open source solutions. In this capacity, he engages with customer and partner executives across the region, and is responsible for growing SUSE’s mindshare by being the executive technical voice to the market, press, and analysts. He also has a global charter with the SUSE Office of the CTO to assess relevant industry, market and technology trends and identify opportunities aligned with the company’s strategy.

Prior to joining SUSE, Vishal was the Director for Cloud Native Applications at Red Hat where he led a team of senior technologists responsible for driving the growth and adoption of the Red Hat OpenShift, API Management, Integration and Business Automation portfolios across the Asia Pacific region.

Vishal has over 20 years of experience in the Software industry and holds a Bachelor’s Degree in Electrical and Electronic Engineering from the Nanyang Technological University in Singapore.

Vishal is here on LinkedIn: https://www.linkedin.com/in/vishalghariwala/

HPE and SUSE at the Edge

Thursday, 25 August, 2022

Webinar: HPE and SUSE at the Edge

It is no surprise that businesses are constantly trying to compete through digital services, where their customers increasingly engage. They need to move faster, and with greater flexibility and simplicity. Join this webinar to learn how HPE and SUSE are helping customers across all industries move faster to meet the demands of their customers and keep pace with IT trends. Register now