Introducing SUSE Edge

Tuesday, 18 May, 2021

SUSE is proud to announce ‘SUSE Edge’, a full-stack solution to help organizations build the next generation of cloud-ready and cloud-native intelligent edge products.

Edge applications are driving digital transformation

Adoption of intelligent edge applications is transformative for many industries: autonomous vehicles, industrial robotics, drones/UAV, and utilities (smart grid), to name a few. Fueling this adoption is the tremendous growth in the use of Industrial IoT devices, which are expected to double in number between 2020 and 2024 and are becoming significantly more complex. Both the scale and the complexity of edge devices are key, as the number of globally active IoT connections approaches 24 billion by 2024*.

Challenges at the Edge

However, the market has yet to offer a fully managed edge solution. So it is no surprise that organizations implementing applications at the edge observe some common challenges:

  • Lack of a consistent platform from core, to cloud, to edge
  • Concerns about security, privacy and compliance
  • Breadth and complexity of edge use-cases

Keeping complexity and scale in perspective, SUSE took a customer-centric approach to addressing edge use cases: What do customers have today? What will they need tomorrow?

With two design goals in mind, removing complexity (deployment, administration, operations) and designing for scale, SUSE’s approach was born in the cloud and meets the edge wherever it is: near the data center, at a remote or branch location, or on the IoT/edge devices themselves.

What is SUSE Edge?

SUSE is creating a true open source, cloud-native solution for full-stack edge infrastructure management, built on the following three foundations:

  • Lightweight cloud-native edge stack, which is also Kubernetes-ready
  • Reliable & secure edge infrastructure
  • An aim of maintenance-free infrastructure

Lightweight Kubernetes at the Edge

SUSE Edge utilizes K3s – a CNCF sandbox project that delivers a lightweight Kubernetes distribution fit for resource-constrained environments, remote locations, and IoT devices.

K3s was built by the SUSE Rancher team and donated to the CNCF in August 2020. K3s is production ready and packaged as a single binary optimized for ARM64 and ARMv7 support.
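
For a sense of how lightweight that is, a single-node K3s cluster can be stood up with the standard install script. A minimal sketch (the exact options you pass will depend on your environment):

# Install K3s on the server node (runs as a systemd service)
curl -sfL https://get.k3s.io | sh -

# Verify the node is ready using the bundled kubectl
sudo k3s kubectl get nodes

# On a worker node, join the cluster using the server’s token
# (found at /var/lib/rancher/k3s/server/node-token on the server)
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> sh -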

When used with SUSE Rancher, K3s provides users with an exceptionally reliable, comprehensive Kubernetes experience that confidently manages thousands of clusters across the Edge. Using SUSE Rancher’s GitOps-powered Continuous Delivery features, K3s users can manage up to 1 million edge clusters built on x86 or ARM64-based hardware with maximum consistency and efficiency.

Longhorn, also a CNCF project, is used to deliver a powerful, distributed, software-defined storage platform for Kubernetes that can run anywhere. When combined with SUSE Rancher, Longhorn makes the deployment of highly available persistent block storage for your edge-based Kubernetes clusters easy, fast, and reliable.

By supporting both x86 and ARM64 architectures, Longhorn is the first Kubernetes-native storage solution designed to help teams store data reliably within even the most remote, low-powered environments at the edge.
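
As a sketch of that deployment path, Longhorn installs onto an existing Kubernetes cluster with a single manifest (the version in this URL is illustrative; check the Longhorn releases page for the current one):

# Deploy Longhorn into its own namespace
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.1.0/deploy/longhorn.yaml

# Watch the Longhorn components come up
kubectl -n longhorn-system get pods --watch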

An Operating System Built for the Edge

100% open source and built using open standards, SLE Micro provides a reliable and secure OS platform for the edge. SLE Micro is built from the ground up to support containers and microservices.

SLE Micro leverages the enterprise-hardened technology components of SUSE Linux Enterprise and merges that with what developers want from a modern, immutable OS platform to provide an ultra-reliable infrastructure platform that is also simple to use.

The common SUSE Linux Enterprise code base provides FIPS 140-2 validation, DISA SRG/STIG hardening, CIS integration, and Common Criteria certified configurations. A fully supported security framework (SELinux), with policies, is included.

Both Arm and x86 architectures are supported so you have architectural flexibility in deploying a broad range of edge applications.

Near Zero Maintenance

Our goal is zero maintenance: all routine maintenance functions, such as patches, updates, and configuration changes, are performed seamlessly. When things go wrong, securely signed and verified transactional updates are easy to roll back.
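
On SLE Micro, this is built on transactional updates: each change is applied to a new snapshot that only becomes active after a reboot, so a misbehaving update can be rolled back atomically. A minimal sketch of the workflow:

# Apply pending patches into a new snapshot (the running system is untouched)
sudo transactional-update patch

# Reboot to activate the new snapshot
sudo reboot

# If the new snapshot misbehaves, roll back to the last known-good one
sudo transactional-update rollback
sudo reboot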

SUSE Rancher’s Continuous Delivery utilizes a ‘GitOps’ approach to help users manage and deploy thousands of Kubernetes clusters easily. Driven by project ‘Fleet’, Rancher Continuous Delivery gives users the ability to manage Kubernetes at the Edge across any infrastructure environment.
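
As a sketch of that GitOps flow, Fleet watches a Git repository and deploys its manifests to every downstream cluster matching a selector. A minimal GitRepo resource might look like the following (the repository URL, paths and labels are placeholders):

# Register a Git repository with Fleet; matching clusters receive its manifests
kubectl apply -f - <<EOF
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: edge-apps
  namespace: fleet-default
spec:
  repo: https://github.com/example/edge-apps
  branch: main
  paths:
    - manifests
  targets:
    - clusterSelector:
        matchLabels:
          env: edge
EOF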

More Operational Benefits

In addition to the above benefits that organizations realize from an “open, interoperable” solution, the following are some of the operational benefits from the point of view of IT operations at the edge.

  • Ultra-low Latency
    By bringing advanced workloads as close as possible to where they’re delivering value, edge deployments from SUSE minimize network latency and provide customers with near-instantaneous compute, storage, and container services.
  • Reduced Bandwidth
    SUSE works with some of the world’s largest enterprises to deliver on the promise of 5G. Our solutions enable them to process data faster than ever before, ideal for connected vehicles, robotics, VNF/CNF (Virtual Network Function/Cloud-native Network Function) workloads, and other business-critical scenarios.
  • Enhanced Security & Privacy
    By reducing the amount of data that has to travel over a network, SUSE edge solutions mitigate the risk of data being intercepted by nefarious actors. Similarly, as data collection is distributed across multiple devices rather than stored in one place, our customers benefit from a robust security and privacy posture.

The SUSE Edge solution serves a broad set of use cases, from organizations that are cloud-ready to organizations ready for cloud-native, and it is modular. Combined with SUSE Manager or an open source management tool, it covers edge use cases that are not fully containerized. For edge use cases that are fully containerized and cloud-native, SUSE Rancher can manage the lifecycle of edge-scale Kubernetes deployments.

Learn More

To dig deeper into emerging use cases for running Kubernetes and adaptable Linux at the edge, we welcome you to watch the sessions, case studies, demos and conversations with open source experts at SUSECON Digital 2021: https://susecon.com/.


* BCG LinkedIn post “Open Source powers digital transformation”

Kubernetes for the Edge: Key Developments & Implementations

Tuesday, 11 May, 2021

Kubernetes is the key component in data centers that are modernizing and adopting cloud-native development architectures to deliver applications using containers. Capabilities like orchestrating VMs and containers together make Kubernetes the go-to platform for modern application infrastructure adopters. Telecom operators also use Kubernetes to orchestrate their applications in distributed environments involving many edge nodes.

But due to the large scale of telco networks, which include disparate cloud systems, Kubernetes adoption requires different architectures for different use cases. Specifically, if we look at use cases where Kubernetes orchestrates edge workloads, various frameworks and public cloud-managed Kubernetes solutions are available, each offering different benefits and giving telecom operators choices to select the best fit. At the recent Kubernetes on Edge Day sessions at KubeCon Europe 2021, many new use cases of Kubernetes for the edge were discussed, along with showcases of cross-platform integration that may help enterprises adopting 5G edge, and telecom operators, scale it to a high level.

Here is a high-level overview of some of the key sessions.

The Edge concept

Different communities and technology solution experts have discussed different concepts of the edge. But as Kubernetes comes into the infrastructure, IT operators need to clearly understand the key pillars on which a Kubernetes deployment will seamlessly deliver low-latency performance in telco or private 5G use cases. First, there should be a strong implementation of Kubernetes management at scale. Second, operators need to choose a lightweight Kubernetes distribution for the edge, preferably certified by the CNCF. And third, a lightweight OS should be deployed on every node, from the cloud to the far edge.

Microsoft’s Akri Project: Microsoft’s Akri project is an innovation that will surely break into multiple Kubernetes-based edge implementations. It discovers and monitors far-edge brownfield devices that lack their own compute and so cannot join a Kubernetes cluster on their own. The Akri platform lets these leaf devices be exposed to the Kubernetes cluster as resources.

AI/ML with TensorFlow: TensorFlow is a machine learning platform that takes inputs to generate insights. It can be deployed on the cloud, on-premises, or on edge nodes where ML operations need to run. One session showed that Kubernetes clusters deployed in the cloud and at the edge can host an analytics tool set (Prometheus, EnMasse/MQTT, Apache Camel, AlertManager, Jupyter, etc.) to process ML requests with the lowest latency.

Architectures for Kubernetes on the edge: When deploying Kubernetes at the edge, architecture choices vary per use case, and each architecture poses new challenges. The bottom line is that there is no one-size-fits-all solution: various workloads have different requirements, and IT teams focus on connecting network nodes, so overall architectures need to evolve toward both centralized and distributed control planes.

Robotics: Kubernetes has also been implemented in robotics. Sony engineers showcased how K8s cluster systems can be used for distributed system integration of robots that perform specific tasks collaboratively.

Laser-based Manufacturing: Another interesting use case, discussed by Moritz Kröger, a researcher at the RWTH Chair for Laser Technology, leveraged a Kubernetes-based distributed system. Kubernetes features like automated configuration management and the flexibility to move workloads between clusters give operational benefits to laser manufacturing machines.

OpenYurt + EdgeX Foundry: OpenYurt is yet another open source framework that extends the orchestration features of upstream Kubernetes to the edge. Sessions showcased how it can integrate with EdgeX Foundry in 5G IoT edge use cases, where EdgeX Foundry manages the IoT devices and OpenYurt handles the server environments through its plugin set.

Using GitOps: Kubernetes supports declarative, cloud-native application orchestration. Applying a GitOps approach therefore makes zero-touch provisioning of multiple edge sites from a central data center possible.

Hong Kong-Zhuhai-Macao Bridge: In another use case, Kubernetes is implemented in the edge infrastructure that manages the sensor applications on the Hong Kong-Zhuhai-Macao Bridge. The use case is unique in that the sensor devices on the bridge are defined as CRDs in Kubernetes, each device is associated with CI/CD, and the applications deployed on the edge nodes are managed and operated through the cluster.

Node Feature Discovery: Many end devices can sit behind the thousands of edge nodes connected to data centers. Similar to the Akri project, the Node Feature Discovery (NFD) add-on can detect the hardware features of each node and publish them to the Kubernetes cluster, so workloads can be orchestrated across edge servers and cloud systems.

Kuiper and KubeEdge: EMQ’s Kuiper is open source data analytics/streaming software that runs on edge devices with low resource requirements. It can be integrated with KubeEdge to form a combined solution that leverages KubeEdge’s application orchestration capabilities alongside Kuiper’s streaming analytics. The combined solution delivers low latency, saves bandwidth costs, and eases the implementation of business logic, while operators can manage and deploy Kuiper applications from the cloud.

What Comes After Kubernetes?

Friday, 23 April, 2021

What Comes After Kubernetes?

You probably can’t believe I’m asking that question. It’s like showing up to a party and immediately asking about the afterparty. Is it really time to look for the exit?

No…but yes.

We used to deploy apps on systems in data centers. Then we moved the systems to the cloud. Then we moved the apps to containers. Then we wrapped it all in Kubernetes for orchestration, and here we are.

  • Have we arrived at PEAK IT?
  • Where do we go from here?

Each advance in technology unlocks doors we couldn’t reach before. As we move from room to room, we’re shifting gears, turning our momentum into energy to go faster and further.

Moving faster requires that we pay more attention to the road ahead, and it’s hard to do that while building the vehicle to take us there and building the road itself.

Whether you’re a business working on products for tomorrow’s world, or an individual who wants to know what skills will advance your career, you’re actually seeking leverage. Leverage gives you an edge over your competitors, and in today’s world, everyone is your competitor.

SUSECON Is Your Map to the Future

SUSECON, from May 18-20, 2021, is the first SUSE event that includes the people and products from Rancher. It packs the content of three events into a single digital platform with three worlds: LinuxWorld, KubeWorld and EdgeWorld.

Each world focuses on the solutions and strategies that its inhabitants care most about:

  • How does Kubernetes enable the next frontier of computing? (This information will shape your business decisions and career choices).
  • What are businesses doing to position themselves as trailblazers in the new frontier, and how can you follow in their footsteps?
  • What is adaptable Linux, and how can it drive digital transformation?

Within each world are keynotes delivered by SUSE leadership and customers from both SUSE and Rancher. Dozens of sessions range from introductory-level tutorials to advanced use cases for specific niche applications across Linux, Kubernetes, and Edge.

Every session was hand-picked to meet the needs of our diverse audience, from beginner to advanced, across topics that include:

  • AI/ML
  • Infrastructure and Operations
  • DevOps
  • Edge and IoT
  • Kubernetes
  • Linux
  • Open Source
  • Business Strategy
  • and more…

If you have questions, SUSECON is where you will find strategic answers.

Open Source Matters

Rancher and SUSE are both innovation leaders, and the combined company is a creative powerhouse. In just a few short months, developers have created solutions for real issues that everyone in the industry faces. These are core issues that slow developer and operations teams, and when solved, the entire organization will move faster.

  • How can I implement security policies in Kubernetes without increasing complexity or making my clusters harder to manage?
  • What can I do to protect myself from a supply chain attack on an upstream container base image?
  • What are the new features in Rancher 2.6?
  • How can I deploy hyperconverged infrastructure (HCI) without paying crippling license fees?
  • How can I use AI/ML to detect and respond to events before they become outages?
  • How can I help my developers build and deploy apps on Kubernetes without them having to learn everything about it?

At SUSECON, we’ll introduce you to projects that answer those questions, along with others that solve even more problems. These are all open source, built to help you succeed.

Open source is in our DNA. It’s the key to the democratization of opportunity, the single most effective solution to level the playing field and reward businesses for generating value. At SUSECON, you’ll learn just how important this is to us, with insights on:

  • Why is it important to be both open and interoperable?
  • What does the word “open” mean in “open source” (and how do other companies use the term to trick you)?
  • Why is Linux leadership essential to Kubernetes innovation?
  • How is freedom different from choice, and how does one complement the other?

SUSECON Is Your Event

SUSECON is a conference like none other you’ll attend this year. With actionable information in every session, you’ll leave the event with a plan for your future, and you’ll know the steps to take next on your journey.

I’m excited about it. When SUSE acquired Rancher, there were concerns that Rancher users would lose the freedoms they had. We promised you that wouldn’t happen, and SUSECON is our chance to show you the full power of the combined organization. Not only is Rancher still free and open source, but there is also a non-stop torrent of open source software that we’re adding to the portfolio. Any of those projects could change your world as much as Rancher, K3s, RKE and Longhorn did.

Head over to the event site to browse sessions and sign up for free.


Making Automation a Reality with SUSE Manager

Tuesday, 13 April, 2021

Once upon a time, automating tasks on Linux servers was as simple as crafting a few Bash scripts and cron jobs. For some tasks, that process still works fine. You could create a simple backup script and run it with cron every night. Done and done. But with today’s increasingly challenging software stacks, the old Bash script approach won’t cut it. Instead, you need a real automation tool that can handle increasingly complex tasks and work with multiple platforms, clusters, containers, and more.

SUSE has a solution for that, one that can not only make the IT admin job easier but allow for automation such that your servers can always be up to date and remain in very specific states. With SUSE Manager in place, you can automate system-wide software updates, package and system deployments, image building, reboots, patching, configuration changes, and much more.

If you want to make automation a reality for your business, here is a look at why SUSE Manager might be the best route to success.

A Caveat
One thing you should understand about SUSE Manager is that, although it has an outstanding web-based interface, automation is not necessarily a point-and-click affair. You can certainly automate some basic tasks (such as updates and patch rollouts), but to build in a level of automation that will be truly useful for your enterprise business, you’ll need to work with Salt and even the command-line interface. For that, your admins will need to be properly trained by SUSE.

With that out of the way, let’s take a look at why you might want to add automation into your daily SUSE Manager routine.

More Reliable Security for Your Systems
One of the biggest issues admins regularly face is keeping up with server updates. This is especially true when your company has a large data center or multiple data centers hosted in different locations (even in multiple countries). When your business is supported by hundreds or thousands of server deployments, it’s on your admins’ shoulders to keep them running reliably and securely.

Now, imagine those admins have to manually work through an entire data center filled with servers, updating the software and operating systems one by one. When this is the case, those updates tend to get pushed back (at best) or forgotten (at worst) in favor of more pressing issues.

This becomes a problem when those updates contain crucial security patches. If those patches aren’t applied, your servers remain vulnerable.

That’s where SUSE Manager comes into play. With SUSE Manager your team can administer, deploy, configure, monitor, and audit all of your Linux systems, whether they are running on bare metal or in a virtual environment. With the help of the built-in automation and orchestration features included with SUSE Manager, you can extend and expand the power of a single administrator. Not only does this have the effect of making it exponentially easier for your admins to keep your systems more reliably secure (by scheduling the deployment of updates and patches), it can also minimize staffing costs and reduce the time for system deployment and updates, even in complex DevOps scenarios.

Keeping Systems in Specific States
SUSE Manager includes the tools to ensure your systems always remain in the same state. States are the basic building blocks for Salt (and thereby, much of the automation within SUSE Manager). States are stored in SUSE Manager as “State Configuration Channels.” For example, you could create a template that will ensure your system services are always running and updated to the latest release. With the help of action chains, this process can not only be automated, but it can also prevent catastrophic failures.

What is an action chain (Figure 1)? Simply put, it’s a chain of events that occur sequentially. If one event fails, the remaining events do not happen. For example, you could create an action chain that does the following:

1. Stops the database
2. Applies update to the database
3. Starts the database
4. Extends the schema
5. Starts the application that uses the database

Figure 1: Creating an action chain in SUSE Manager.

Let’s say you’ve created the above action chain and then scheduled it to run automatically. One day the database refuses to stop. When that happens, the next event in the chain (updating the database) doesn’t launch. These types of chains are crucial when your business requires a set of events to either always succeed or stop before a single failure brings your business to a standstill.

SUSE Manager also makes it possible to create, manage, and schedule highstates for your systems, which are numerous states combined in a single “manifest” of states. For example, you could create a state that will install the Apache web server, create a new site called mywebsite.com, enable the site, and make sure the server is running. The Salt manifest for that might look like:

apache2:
  pkg.installed: []
  service.running:
    - watch:
      - file: mywebsite

mywebsite:
  file.managed:
    - name: /etc/apache2/sites-available/mywebsite.com
    - source: salt://mywebsite.com

a2ensite mywebsite.com:
  cmd.wait:
    - unless: test -L /etc/apache2/sites-enabled/mywebsite.com
    - watch:
      - file: mywebsite

Once you have your highstate created, you can then automate it by way of a schedule or an event (such as a system coming online).
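
Outside the SUSE Manager scheduler, the same highstate can also be applied on demand from the Salt master. A minimal sketch (the target ‘web*’ and the state name are placeholders):

# Apply the full highstate to all matching minions
salt 'web*' state.highstate

# Or apply a single named state file (e.g. the manifest above saved as webserver.sls)
salt 'web*' state.apply webserver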

Salt formulas go one step further and allow you to fill in key configuration parameters, and then the states will be built out for you. With the help of Salt Formulas, SUSE Manager is capable of delivering system monitoring with Prometheus and Grafana. SUSE Manager includes several formulas that can be found in the Formulas tab within System details. You’ll find Salt Formulas for the likes of dhcpd, openVPN, Bind, PXE, vsftpd, CPU Mitigations, and more.

Faster System Deployments
Under normal circumstances, rolling out a bare metal server takes time. Although deploying virtual machines is far faster, even that can be time-consuming when you’re looking at configuring and spinning up numerous instances. This is made even more complicated when you have a larger business with multiple locations. With thousands of machines to deploy and manage, doing things the manual way is no longer an option.

With SUSE Manager, automating those deployments can completely redefine the process. With a single tool, you can manage incredibly complex system deployments, no matter their location. Imagine being able to deploy complex heterogeneous environments, with extended target OS support, all from a single point of entry. Even better, with automated Linux server provisioning, patching, and configuration, your staff is capable of faster, consistent, and repeatable server deployments.

With the help of Automated Hardware discovery (using PXE boot) and Autoinstallation (Figure 2), your IT staff is better equipped to more efficiently and effectively onboard new hardware.

Figure 2 – The SUSE Manager Autoinstallation tool can help you create automated deployments.

You’ll reduce operational costs, as well as errors.

That’s what SUSE Manager can do for you.

Content Lifecycle Management
One of the more important features of SUSE Manager is the Content Lifecycle Management (CLM) tool. Although you cannot directly set up automation with this portion of SUSE Manager, you can clone vendor channels and then modify the cloned channels to include only the packages you want to be installed on a client (Figure 3).

Figure 3 – Filtering packages within Content Lifecycle Management makes it easy for you to allow or deny packages from channels.

A channel clone (a project) defines the required software channel sources, the filters used to find packages, and the build environments for the packages. Once you have a channel tailored to the needs of a particular rollout, you can then automate that rollout with the help of Salt.

How can this help to empower your staff? With CLM, your SUSE Manager admins can create projects for automated monthly patch cycles, filtered to meet the exact details of every department, every branch, and every server in your business. With the help of Live Patching, those deployments can be done, even on a kernel level, without incurring downtime during business hours.

By employing the SUSE Manager CLM tool, you can expect consistent rollouts that are perfectly tailored to meet the needs of your business, all the while automating the process.

At the heart of this automation is Salt.

What Is Salt?
To put it simply, Salt is a remote execution engine, configuration management, and orchestration system capable of maintaining remote nodes in defined states. What is a defined state? Let’s say you’ve defined a particular state that requires a specific software package to be installed and its services to be running. To define and deploy these states, SUSE Manager uses Salt Formulas (which are a bit higher level than States): collections of Salt States written by your SUSE Manager team or by other (third-party) users. Salt States contain generic parameter fields that allow you to define reliable, reproducible configurations repeatedly and automatically.

By creating the proper Salt formula, you can automate the installation of those packages and make sure their services are running.

The Salt Formula Catalog can be found in SUSE Manager at Salt > Formula Catalog (or under any system or System Group in the ‘Formulas’ tab). There you can view any of the currently installed Salt formulas (Figure 4).

Figure 4 – The SUSE Manager Salt Formula catalog.

You can use Formulas from within Salt States with the require declaration, like so:

include:
  - epel

python26:
  pkg.installed:
    - require:
      - pkg: epel

By using formulas, you can (with the help of SUSE Manager) automate simple, repetitive, or incredibly complicated tasks. But at its heart, Salt lets you:

● Run commands (at a granular level) on remote systems in parallel (across many systems at once).
● Use secure and encrypted protocols.
● Use small and fast network payloads for more efficient (and reliable) results.
● Provide a simple programming interface.
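
A minimal sketch of that remote execution model from the Salt master (the targets and package name are placeholders):

# Run a command on every minion in parallel
salt '*' cmd.run 'uptime'

# Target minions by grain, e.g. all SLES systems, and install a package
salt -G 'os:SLES' pkg.install vim

# Check which minions respond
salt '*' test.ping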

At this point, you might be thinking this is a bit too challenging for your staff. Do they have the time to learn how to create Salt Formulas? If not, it’s possible to easily install specific Salt Formulas by way of RPM packages. From the command line, you can (using the zypper tool) search for available formulas (such as zypper se --type package formula). When you find one that meets your needs, it’s as easy to install as:

sudo zypper in FORMULA

(where FORMULA is the name of the formula to be installed).

But the truth is, your business isn’t going to get the most out of SUSE Manager without a full understanding of how Salt works and how to create or implement Salt Formulas. The good news is that SUSE is quite adept at creating custom Salt Formulas. So when your business requires a highly customized Salt Formula to keep your company systems in specific states, SUSE can either guide you in writing the formula or create it for you. That’s the SUSE way.

Salt or GUI?
At this point, however, you’re probably wondering, “Will my SUSE Manager admins have to write Salt formulas for everything?” Not with basic SUSE Manager usage. Your admins can get by using just the GUI for a lot of basic tasks. But to get the most out of this powerful tool, Salt will be required, especially when you’re looking to automate more and more complex tasks.

At this point, you’re probably thinking, “But my admins don’t have time to learn Salt.” That’s an understandable position to be in. However, when you realize how powerful Salt can be, you’ll want those admins empowered with the tool best suited to keep your business efficient and scalable.

Think about it this way: Once your admins have a solid grasp on how Salt works, they could spend a single workday crafting a Salt template and use that template to automate or schedule the provisioning or updating of thousands of servers. So a bit of effort and investment upfront is going to save your business serious time and money in the long run.

In the end, it’s not a question of Salt Formulas/templates or a GUI; it’s both. You’ll eventually get to the point with SUSE Manager that you’re creating complex Salt templates that handle increasingly complicated tasks, adding those templates to SUSE Manager, and then automating them by way of scheduling.

With this combination, you can automate tasks like:

● Provisioning
● Patch updates
● Deploying Kubernetes clusters
● Upgrading a server to the next major OS version
● Installing software
● Running any remote command

With the help of action chains, you can create combinations of the above that will run in succession, for more reliable administrative tasks.

How can your company afford to miss out on that kind of power?

Multi-Tier Architecture vs. Serverless Architecture

Monday, 12 April, 2021

You’ve undoubtedly come across some terms like three-tier application, serverless framework and multi-tier architecture in your knowledge-seeking journey. There’s a lot to keep up with regarding application design. In this post, we’ll briefly compare serverless and multi-tier architectures and look at the benefits of serverless over traditional multi-tier architectures and vice versa.

Before diving into our comparison, let’s look at the unique components of each architecture.

What is Serverless?

The best way to define serverless is to look at where we’ve come from in the last five to ten years: multi-tier architecture. Historically, when designing software, we plan for that software to run on a particular runtime architecture. Serverless is an approach to software design for client-server applications, that is, software running on a single computer or a group of computers hosting an application that a remote system such as a browser or mobile phone connects to. Business logic is executed on the server systems to respond to that client, whether it’s a phone, browser, etc.

For example, let’s look at your bank’s website. When you connect to their website, you connect to a software application running on a server somewhere. Odds are it’s running on more than one server, in a complex environment that performs the functions of the bank’s website. But for all intents and purposes, it’s a single application. You gain value from that application because you can conduct your online banking transactions. There is logic built into that application that performs various financial transactions; whatever you need is fulfilled by the software running on their servers.

Serverless offers a way to build applications without considering the runtime environment. You don’t need to worry about how the servers are placed into various subnets or what infrastructure software runs on which server. It’s irrelevant. But that hasn’t always been the case, and that’s where we get multi-tier architecture.

What is Multi-Tier?

Let’s say you work at a bank and you need to write a software application for an online banking service. You don’t want to think about how the bank will store the data for the various banking transactions. The odds are that it exists in a database somewhere, like FaunaDB (a serverless database service). You’re not recreating the bank’s enterprise reporting software; you’re simply looking to leverage that existing data. Traditionally, we design software as a multi-tier architecture: a runtime architecture for client-server applications composed of tiers. There can be several different tiers depending on how you approach a particular problem, but generally speaking, the most common tiers are presentation, application and data. Let’s explore those.

  • Presentation Tier: This is the actual UI of the application. It uses something like RedwoodJS, React or HTML+CSS to provide the visual layout of the data. This part of the application handles displaying that information in some shape or form.
  • Application Tier: This tier passes information to the presentation tier. It contains the business logic that manipulates the data. For example, if we need to show a list of banking transactions by date, the application tier handles the date sort and any other business logic our application requires.
  • Data Tier: This tier handles getting and storing the data that we are manipulating within our application.

Multi-Tier Application Architecture

I’ve outlined the basics of multi-tier,  a common approach for software development. Understanding where we come from makes it easier to understand the benefits of serverless. Historically, if we were writing software, we’d have to think about database servers, application servers and front-end servers and how they handle different tiers of our application. We’d also have to think about the network paths between those servers and how many servers we need to perform the necessary functions. For example, your application tier may need a substantial number of servers to have the computing power to do the business logic processing. Data tiers also historically have extensive resource needs.

Meanwhile, your front end might not need many servers. These are all considerations in a multi-tier software design approach. With serverless, this is not necessarily the case. Let’s find out why.

Serverless Fundamentals

Before we jump into architecture, let’s familiarize ourselves with several serverless components.

Backend as a Service (BaaS)

With the evolution of the public cloud and mobile applications, we’ve seen a different application development approach. Today, mobile app developers don’t want to maintain a data center to service their clients. Instead, they design mobile applications to take advantage of the cloud. Cloud vendors quickly provided a solution to this in the form of Backend as a Service: a cloud service model where server-side logic and state are hosted by cloud providers and used by client applications running via a web or mobile interface. Essentially, this is a series of APIs hosted in the cloud. Let’s say I’m working on a web application and need an authentication mechanism. I can use Auth0’s cloud-hosted APIs; I don’t need to manage authentication on my servers, because Auth0 handles it for me. At the end of the day, with APIs hosted in the cloud, you craft a URL, make a REST request to get some data, and execute it. BaaS lays the foundation for serverless.

Functions as a service (FaaS)

Functions are just some code that performs a super-specific task, whether it be collecting a user ID or formatting some data for output. In the FaaS cloud service model, business logic is triggered by events. While BaaS means using APIs from cloud providers, with FaaS you provide your own code, which is executed in the cloud by event-triggered containers that are dynamically allocated and ephemeral. Since the code is event-triggered, we don’t have to start the application and wait for a request; the application only exists when it’s triggered, and something has to make it spin up. The best part is that you get to define what that trigger is. Containers provide the runtime environment for your code. In the true nature of serverless, there aren’t always-on servers: the service handling your request only gets created when a request comes in for it to handle. FaaS is also dynamic, so you don’t have to worry about scaling when you get a traffic spike; cloud providers handle scaling the application up and down. The last thing to keep in mind is that the containers running our code are ephemeral, meaning they will not stick around. When the job is done, so are they.

Serverless Architecture

Serverless is a runtime architecture where cloud service providers manage the resources and dynamically allocate infrastructure on demand for a given business logic unit. The key to a serverless application is the application runs on a seemingly ethereal phantom infrastructure that exists yet doesn’t. Serverless uses servers, but the beautiful part is you don’t have to manage or maintain them. You don’t have to configure or set up a VPC, set up complex routing rules, or install regular patches to the system to get high-performance and robust applications. The cloud providers take care of all these details, leaving you to focus on developing your application.

Basic Serverless Architecture

Developing an application with serverless carries some overhead: you pay the cloud provider every time your code is triggered and for the time it runs.
When creating a serverless application, take appropriate measures to protect it from unwanted high traffic, such as a DDoS (Distributed Denial of Service) attack that could spin up a lot of copies of your code and increase your bill.
Your application can be a mixture of both BaaS and FaaS hosted on your cloud provider’s infrastructure.

Ultimately, with serverless, you only have to focus on developing and shipping the code. Development is easier, making client-server applications simpler because the cloud service provider does the heavy lifting.

Now that we better understand Multi-Tier and Serverless architectures, let’s compare them.

Multi-Tier vs. Serverless

There are several critical areas to consider when comparing serverless architecture with multi-tier architecture.

  • Skill Set
  • Costs
  • Use Case

Each has varying degrees of impact depending on your goals.

Skill Set: 

Serverless:
You need only a development background to be successful with serverless. Your cloud provider will take care of the infrastructure complexity.

Multi-Tier:
To succeed in a multi-tier approach, you need an operational level of support expertise: You’ll configure servers, install operating systems and software, manage firewalls and develop all these things alongside the software. Depending on what you’re trying to achieve, having this skill set could be advantageous.

Costs:

Serverless:
When it comes to cost, there are arguments for both architectures.  Startup costs with serverless are really low because you only pay for every execution of your code.
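
As a rough illustration of that pay-per-execution model (the rates below are illustrative placeholders in the ballpark of typical FaaS pricing, not any provider’s actual price list):

1,000,000 invocations/month x $0.20 per million invocations = $0.20
1,000,000 invocations x 0.5 s x 0.125 GB of memory = 62,500 GB-seconds
62,500 GB-seconds x $0.0000167 per GB-second ≈ $1.04
Total ≈ $1.24 per month

A small always-on server handling the same sporadic traffic would typically cost an order of magnitude more, whether it sits idle or not.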

Multi-Tier:
The opposite is true with multi-tier architecture. You’ll have upfront costs for servers and getting them set up in your data center or cloud. However, you’ll save money if you expect a steady traffic volume and you can leverage that cloud configuration. Because you will do the cloud configuration yourself, the cost may vary depending on your use case.

Use Cases:
Let’s look at how we expect to use the software.

Serverless: Serverless is fantastic for sporadic or seasonal traffic. Perhaps you are looking at a retail website with large monthly sales (and huge traffic spikes). With a traditional data center, your infrastructure is running even when you don’t need it, and that adds significantly to overall cost. With serverless, you don’t have to worry about the infrastructure; it will automatically scale.

Multi-Tier: Let’s say you have a consistent traffic pattern. You know exactly what you need. In this case, you might be able to save some money by sticking to the traditional approach of software architecture.

Conclusion

In closing, traditional DevOps culture is converging on ever-smaller units of deployment. We’ve moved from servers to virtual machines to containers, and now we are looking at literally just a few lines of code in a function: we have been shrinking away from maintaining a full-on infrastructure alongside our software. Serverless isolates business logic from the infrastructure. And that is convenient for development’s sake, because you don’t have to worry about taking care of the infrastructure as you develop your software.


Top 5 Reasons to Migrate SAP to the Cloud

Friday, 26 March, 2021

We’re sitting in March 2021, 1 year after COVID completely changed our lives, but also accelerated the move to Cloud for many enterprises and government agencies. Looking at a lot of industry surveys, many of you are no longer asking the question. The Cloud? It’s a done deal.

However, there are still those that are waiting. Change is hard, why upset essential SAP workloads that work just fine where they are—right? So this post is for you guys.

Why move SAP to the cloud? I think of it as a 2 part question. Part 1 is – why move to the cloud at all? So, many reasons. But the biggest 2? Agility and Cost.

  1. Agility: Way back in the ancient days of the early internet, companies invested in their business by putting in beefy, complex on-premise systems. Those projects took a long time, and developing applications on them sometimes took longer. That is not ever going to be a model we can operate in again. We live in a digital world, where everything moves at lightning speed, everything needs to be frictionless. Enterprises and government agencies are expected to pivot on a dime – changing business models, consumption channels based on shifts in the market. Experts say 25% of the opportunity for businesses out there over the next three years will come from white space. Places that do not exist today. Speed, agility – it’s essential to the survival of our respective organizations. And you can’t get there on traditional architectures, IT processes – the Cloud is a must for unleashing your innovation engine.

I have a great customer story that ties into this so well. Moderna is a household name in our post-pandemic year of 2021. We all know they won the vaccine race and were the 1st out the door with an effective COVID vaccine. But do you know how they did it? They did it thanks to the power of the Cloud. Moderna was able to deliver the first clinical batch of its vaccine candidate to the US National Institute of Health for phase one trials just 42 days after the initial sequencing of the virus. 42 days! That’s because their R&D apps were already in the cloud, prior to COVID hitting us last March, and their R&D teams were able to collaborate and pivot really really quickly. That’s agility for you.

  2. Cost: Granted, the cloud is not a panacea for cost. There are many hidden costs that you may not be aware of, and we have heard stories of customers getting surprised by their bill. But for the most part, companies now have the confidence to right-size their computing resources based on their unique requirements and cut wasteful spending. Don’t worry about correctly estimating capacity needs in advance; you can adjust on the fly and scale up easily if there is a sudden surge in demand. If you look at AWS case studies, customers routinely see TCO reductions of 20, 30 or even 50%.

So yes, there are very good reasons for moving to the cloud in general.

Now specifically for SAP we see 3 additional reasons to make the move.

  3. Staying on prem is getting prohibitively expensive – there is a new hardware barrier to entry. Back in the day, when you first deployed NetWeaver, the hardware cost was a reasonable component of the total project budget. But in this day and age, if you want the right capacity for an on-prem NetWeaver or HANA deployment, we are talking major dollars. HANA is going to be even more expensive: SAP HANA DB needs significantly more hardware compared to some of the traditional DBs used in NetWeaver deployments. And that’s not to mention the headaches involved with procuring everything, especially with the COVID-ravaged supply chains of many hardware vendors. Think about all the steps you need to go through: invest upfront in CAPEX to buy servers; guess how much your SAP system may grow over the 3-5 year lifespan of the hardware; buy for the “high water mark”, even though you may not need that full capacity for the first 4 years. With the cloud, that’s not a worry; all of that is abstracted away, and you get to start today, with zero upfront cost.
  4. There is a magic date of 2027 hovering over the horizon for all SAP practitioners. Yes, we know SAP has already moved that date/deadline, and they may yet move it again. But regardless, the day of “reckoning” is coming, sooner rather than later. With SAP recommending customers standardize on a Linux OS and transition to SAP HANA or S/4HANA on the SAP HANA database, you can no longer put off reconfiguring your applications, cloud or no cloud. Now is a good time to fully think through your long-term strategy, not just for SAP, but also for hardware and facilities. Do you really want to invest more in the on-prem model when it’s getting harder and harder to justify the money and time you spend maintaining data centers and your own hardware, especially since they can’t provide the same scaling, compute, or high availability/disaster recovery (HA/DR) capabilities at the price you could get in the cloud? And don’t forget some of the other stuff beyond servers: power, cooling, maintenance, staff, management. If change is already in the works, make this change the most rational one you can achieve and maximize multiple objectives. The simplicity and convenience of the cloud is hard to beat.
  5. Yes, I know everyone says transformation; it seems very cliché to even touch the word in 2021. But with the newer SAP solutions, there is real transformation. We are talking about advanced analytics; intelligent, real-time applications; harnessing the power of AI/ML… the list goes on. Yes, you can move to HANA on-prem, but the cloud gives you a cleaner migration and so many more possibilities. Did you know AWS has over 175 native services? Many of them can tie into your SAP landscape to yield new insights and open new horizons. It’s what AWS calls “moving beyond the infrastructure”: extending your SAP solution and providing more interesting/intelligent/predictive capabilities. A common use case is using S3 to create data lakes for your big data analysis.

So there you have it, my top 5 reasons for moving to the cloud – for SAP users. TL;DR? No worries, I’ve got something for you too! If you’d rather watch a quick 5-minute video, here’s my video take on this same blog. 🙂

Top 5 Reasons Why to Move SAP to the Cloud – Why Cloud + Why SAP to the Cloud – YouTube

Hopefully you like the video, but even if you don’t, there is a lot more to watch on our new SUSE + AWS Playlist: https://bit.ly/SUSEAWS

We have expert interviews, tips and tricks, demo videos, fun animations, fun events, with lots more to come. So check us out, like and subscribe!

Thank you for reading my humble blog. Have a wonderful rest of 2021!


How SUSE & Partners Are Accelerating IT Transformation During The Pandemic

Thursday, 25 March, 2021

The way we work has changed dramatically over the last 12 months. What’s more, those challenges are likely to stay for much of 2021. But there are organizations who are using this opportunity to double down on their digital transformation.  

I asked two of our Platinum partners – Mark de Groot, CEO of BPSOLUTIONS, based in the Netherlands, and Nicolas Christener, CEO & CTO of Adfinis, based in Switzerland – about how we can and are helping our customers be more agile and innovative in the face of difficult market operating conditions.

Rachel: The pandemic means we are facing some unprecedented conditions. How is it affecting you and your customers? 

Nicolas: As a digital company, Adfinis was already ready for the “home-office life.” We had all the tools in place for remote working and our employees are very well self-organized and agile. Online coffee-breaks and even some occasional beer-nights help to keep the different teams connected. Even though we had to solve a few challenges in the beginning when working with our customers, we’ve been able to mirror our existing processes and ways of working in digital environment. 

Many of our customers are very far along in their remote working journey as well, and it became clear that working together using collaboration tools is totally doable and the way we’ll probably work in the future. 

Mark: This pandemic has gone on for much longer than anyone at first thought. Government support is still masking much of the real financial damage, but I expect CIOs to suffer from this post-pandemic financial stress for the next two years.

This means choices have to be made. What we see is that IT agility is getting more and more of a key priority and forcing organizations to migrate more quickly to the cloud. We were already seeing this trend, but it’s now accelerating. 

Rachel: Are there any areas where SUSE has enabled you or your customers to do more? 

Mark: Migrating to the cloud means on the one hand that applications must be modernized and, on the other hand, the infrastructure must be adapted accordingly.  

One of these areas is the growth in so-called cloud-native applications, which often use container technology such as Kubernetes. Kubernetes helps boost productivity, reduce costs and risks, and move organizations closer to achieving their hybrid cloud goals.

It also significantly increases the agility and efficiency of their software development teams, enabling them to reduce the time and complexity associated with putting differentiated applications into production.  

The acquisition of Rancher Labs by SUSE is something that will help our customers manage Kubernetes platforms well and keep their environments always secure and available.

In addition, BPSOLUTIONS has deep expertise with regard to SAP HANA infrastructures. SAP users need to migrate to S/4HANA to utilize the new innovations that SAP has launched in their business processes. We are very successful in this together with SUSE.

Overall, we see growth in all open source solutions because of the innovation that open source brings. The open source movement is the reason that technology has developed at such a tremendous pace for the past few decades. 

Nicolas: We have been partnering with SUSE for quite a few years now and enjoy pleasant and fruitful cooperation with the SUSE team from marketing to partner managers and engineers. In the dreary everyday life during this pandemic, such moments are always welcome. 

We are also very excited about the Rancher Labs acquisition and can already see how it can help our customers manage their Kubernetes environment so that they can deliver applications more quickly and securely.  

Rachel: What kinds of challenges are your customers facing in 2021 and how do you think we help them address their challenges together? 

Nicolas: Many companies are increasingly becoming software companies, launching applications that deliver better customer experiences.

Many companies are doing this organically rather than having a process in place. This means fundamental things are often forgotten, such as collaboration between development and operations, security issues or infrastructure environments that do not correspond to the new structure.

That’s where SUSE and Adfinis can help with our years of experience, the right resources and tools. We take companies by the hand and guide them through the cloud journey and make them ready – from container environments to cloud platforms to DevOps workshops – so they can focus on their core business again. 

Mark: We live in an always-on world that operates 24/7. A world that is getting smarter and more innovative every day. Success in the future is not about what data you have available; it is about access to the networks where that data can be found, about how to use the data and about knowing what you can do with it. It’s all about knowing how to make data work for you.

We help organizations organize their Mission Critical IT in a way that makes the organization smarter so it can make progress. Companies can literally stop functioning if something goes wrong with their data! This makes Mission Critical IT perhaps the most valuable asset of a company. Something that must be guaranteed and that must be fully utilized.   

SUSE and BPSOLUTIONS both stand for innovation and we value what open source technology can offer. And together we make the world a little smarter. 

Rachel: Thank you Mark and Nicolas for sharing your thoughts with us. We and our joint customers value your expertise in solving the challenges that businesses face today!   

We work with partners like BPSOLUTIONS and Adfinis all over the world who can help you transform your IT infrastructure so you can focus on delivering the innovation that your business needs. Find a partner near you. 

Cloud Computing in 2021: What You Should Know about Public, Private, Hybrid, PaaS, SaaS and FaaS

Monday, 15 March, 2021

Whether you’re focusing on cutting maintenance, electricity and storage costs, increasing reliability or doing your part to reduce climate impact, there are countless reasons organizations are looking to accelerate their cloud migration as fast as they can. Cloud computing is probably the most significant driver of digital transformation over the last decade.

What are the Benefits of Cloud Computing?

So, we all start with the same baseline knowledge, just what is cloud computing? Simply put, it’s on-demand computer system resources — anything from data storage to compute power. Over the last 15 years, we’ve seen a growing trend of organizations moving from building, securing and maintaining their own on-premise data centers — at an exorbitant cost — to “time-sharing” data centers via the internet.

In a nutshell, cloud computing is a function. The “cloud” is an esoteric name for the environments where applications and workloads run. And its purpose is to enable developers to build and run applications in a way that they can deliver value faster.

Did you know that 90 percent of all data has been created in the last two years? More data means more storage. And certainly, always-on consumers demand “five-nines” or 99.999 percent uptime. Outsourcing the management of your data storage and processing dramatically increases your reliability at a lower cost.

But the benefits of cloud computing don’t just stop there.

Cloud computing is flexible, allowing companies to scale up and down in response to demand. This means you pay as you go, controlling not only uptime but costs. And because there are more than 800 cloud providers across dozens of regions, you gain even more reliability. Let’s say there’s a power failure in one region. Your cloud instance will failover to another region if you pay for multi-region support. Here’s the alternative: imagine if all your servers were located in one region or zone that you control and there was a natural disaster or human-made security breach. This could take down even the most established businesses.

Moreover, according to Salesforce, just by switching to the cloud, 94 percent of businesses saw a security improvement, while 91 percent said it made for easier governance and compliance. After all, it’s safer and easier to manage access control remotely instead of issuing snatchable hardware.

There are also many business benefits to cloud computing. The most obvious is CAPEX cost savings. With the cloud, businesses see anywhere between 30 and 50 percent savings. After all, you’re sharing the cost of the data center, staff, electricity, maintenance and more with many other organizations.

Cloud computing also is proven to increase collaboration across silos and devices and with external partners to release features faster and stay more competitive. In a year like 2020, with a sudden move to a mostly remote-first world, the cloud was essential not just for collaboration but keeping businesses up and running.

Plus, data centers aren’t just about the hardware — they are mostly about the data. By having an external cloud provider — or multiple — managing your data storage and computing, you also leverage their cloud analytics to gain faster access to data-backed insights and actions.

Finally, no matter what, things break, even when everything is under your control. Cloud computing delivers faster recovery times and multi-site availability at a fraction of the cost of conventional disaster recovery.

For all these reasons, spending on public cloud IT infrastructure surpassed spending on traditional IT infrastructure for the first time in 2020, according to the International Data Corporation.

With all this in mind, here’s our guide to cloud computing, to help give your organization that edge — perhaps literally — in 2021.

Becoming Cloud Native in 2021

What is cloud native other than a buzzword? It’s the mentality, methodology and technology that exploits all these benefits of cloud computing. Some of the most successful companies nowadays were barely heard of a decade ago — Netflix, Uber, WeChat, Pinterest. Each of these has leveraged the cloud from the start to create distributed, flexible, scalable, insight-driven, agile organizations that deploy hundreds of times a day.

Other very agile companies, like Spotify and GE, began with monolithic architectures but have become increasingly cloud native.

Some companies have moved back and forth over the last decade; others choose the public cloud for greenfield applications but run the legacy architecture they’re keeping on a private cloud or old-school, on-premise infrastructure.

Cloud native comes with both a technical focus — cue more buzzwords like microservices and Kubernetes — and a cultural one, emphasizing a great degree of autonomy. Following Conway’s Law, decentralized cloud-native architecture is usually a reflection of a decentralized workflow that’s committed to continuous delivery and product ownership.

SUSE CTO of Enterprise Cloud Products Rob Knight says cloud native “really is a paradigm shift in not only how we develop applications and workloads, but how businesses function in their entirety.”

Rob Knight, CTO of Enterprise Cloud Products, SUSE

The trend in continuous delivery sees IT becoming the core of all businesses.

Knight continues, “What is cloud computing? It’s about abstracting and pulling infrastructure and hardware and presenting that in an easier-to-use way.”

For Knight, a cloud native world not only puts digital at the center of any business transformation but allows more and more people inside and outside of IT to take part in that discussion. By bringing the traditional bottlenecks of compliance and security to the table earlier on, cloud becomes an easier conduit for both safer systems and smoother collaboration.

What is the Public Cloud?

While analysts expected the public cloud to do very well in 2020, its ability to quickly scale up to support businesses saw it skyrocket beyond all expectations. Not surprisingly, this was especially true in the second quarter, when everything from conferences to weddings to school shifted online.

Public cloud has a slightly unnerving name, but it doesn’t mean shared data. It means a third-party provider — predominantly Amazon Web Services (AWS), Microsoft Azure or Google Cloud Platform (GCP) — offers cloud services, on-demand, either over the public internet or a dedicated network.

Usually, it’s done with single tenancy, meaning each customer gets a dedicated instance, such as a server, which brings more security reassurance. However, because it saves on financial and environmental impact, multi-tenant public cloud computing is increasingly used for staging, testing and development. It’s also popular for massive machine learning and quantum computing workloads.

The public cloud is popular with distributed international companies because the leading cloud providers can guarantee your uptime across the globe. Since it’s pay-as-you-go, the public cloud responds well to unpredictable customer usage and scalability. Because you’re pooling resources in the public cloud, many see it as a cheaper option than the others we’ll address below. In theory, you only pay for what you use, but the main cloud providers also offer their excess compute at a discount. The risk of the public cloud, and especially of relying on this on-sale excess compute, is that it can run out.

You could also find yourself locked into one vendor without a key.

One way around this is to use a Kubernetes management platform like Rancher to offer you more control over your public cloud and the opportunity to go multi-vendor. The goal is to make sure you can work with any cloud provider so you can move when you want to.

What is a Private Cloud?

Last spring’s growth wasn’t just in the public cloud. A slower but significant year-over-year growth saw $5 billion spent on enterprise on-premise private cloud.

Sometimes called an internal cloud or a corporate cloud, these are private cloud environments dedicated to one user, usually within their firewall. Private cloud can be managed on-premise, but nowadays, like the public cloud, it is usually run at vendor-operated data centers, which offer a private cloud for a single customer with isolated access. The big names here are often associated with traditional IT infrastructure and hardware, including Dell, Hewlett Packard Enterprise, VMware, Oracle, Cisco, IBM and Microsoft.

The private cloud still brings the competitive advantage of cloud computing, but it comes with some drawbacks, chiefly higher cost and less on-demand elasticity.

Private cloud definitely plays a valuable role in many cloud strategies. Unless you are leveraging synthetic data or data masking, data protection laws make it challenging to put personally identifiable information onto a public cloud. And the private cloud certainly makes sense if you have stable demand and predictable scalability. There are also sensible use cases, like a factory, where workloads are tied to a specific location. Of course, you may then want to share the insights from that location via the public cloud, making a hybrid or multi-cloud model more attractive.

Understanding Hybrid Cloud Versus Multi-Cloud

Just like the hybrid car, the hybrid cloud is an increasingly popular option because it brings the best of both private and public cloud. It allows sensitive customer data to remain on your private cloud but enables autoscaling to the public cloud if there’s a sudden traffic spike. This is called cloud bursting.

Here, private clouds connect to the public cloud, usually through multiple service providers, avoiding vendor lock-in, controlling cost and maintaining flexibility.
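
Under the hood, bursting like this is typically built on ordinary autoscaling primitives. As a rough sketch (not a complete bursting setup), here is what attaching a CPU-based HorizontalPodAutoscaler to a hypothetical “web” Deployment looks like with the official Kubernetes Python client; in a hybrid cluster, the extra replicas it requests could land on public-cloud capacity. All names and thresholds here are illustrative assumptions, not part of any product discussed above.

    # Rough sketch: CPU-based autoscaling with the official Kubernetes
    # Python client ("pip install kubernetes"). Names are hypothetical.
    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() inside a pod

    hpa = client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name="web-hpa"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            # Scale the hypothetical "web" Deployment between 2 and 20 pods
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="web"
            ),
            min_replicas=2,
            max_replicas=20,
            # Add replicas when average CPU use crosses 70 percent
            target_cpu_utilization_percentage=70,
        ),
    )

    client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
        namespace="default", body=hpa
    )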

Hybrid is still a growing trend, so there’s no one way to run it. Some companies will run the front-end of their websites and apps in the public cloud through a firewall. Other orgs keep their resilience on-premise so they can move workloads if something goes down in the public cloud or if they simply want to change providers.

Leveraging the hybrid cloud allows for a broader set of skills on a DevOps team. Some teammates can deep-dive into the niches of maintaining infrastructure, while others simply deploy to the public cloud without needing experience with a specific provider.

There’s a similar but nuanced option: multi-cloud. Where hybrid mixes private and public cloud, multi-cloud combines services from multiple cloud providers, which may or may not include private cloud or on-premise infrastructure.

Multi-cloud is common in retail and e-commerce when they need massive scalability around the holidays but don’t want their customer database in the public cloud. U.S. retail giant Walmart does what they refer to as a “hybrid multi-cloud fog approach.” They have a private cloud at each physical store and manage some workloads in the public cloud backed by multiple providers.

Which Cloud Strategy is Better?

Cloud strategy is much more of an operations, security and financial concern. In the end, your developers aren’t so fussed about what type of cloud they are working with as long as they have uptime. Devs just need frameworks in place to get coding and a place to push their apps to; irrespective of cloud strategy, they need the assurance that the platform team can provide a platform that abstracts the infrastructure away and handles the push to the cloud.

So no cloud strategy is better than another. That’s why organizations are increasingly choosing hybrid and other flexible approaches as they transition from traditional architecture and experiment with what works best for their teams.

Who Manages What: A Look at SaaS, PaaS, IaaS and FaaS

Cloud computing includes four types of services that can run on public, private or hybrid cloud: SaaS, PaaS, IaaS and FaaS. The difference among these comes down to who is responsible for what — the organization that owns the data or the cloud provider that stores it. And this depends on the available talent, cash and desire to own it yourself.

Software as a Service (SaaS)

SaaS is often referred to simply as business software. Salesforce is the old standard in this grouping, but Zoom, Dropbox, Office 365 and Google Workspace (formerly G Suite) are just as omnipresent from startups through enterprises. It’s a fully managed experience in the cloud, sitting at the top of your software stack. You just use the tools; the provider runs every aspect of them.

Platform as a Service (PaaS)

With PaaS, the cloud customer manages applications and data, while the cloud provider handles the runtime, middleware, operating system, virtualization, servers, storage and networking. PaaS offers a complete web development environment and is typically hardware agnostic.

Perhaps most importantly, PaaS abstracts away and automates much of the significant complexity that comes with Kubernetes. In this way, it also cuts the cost of maintaining all of the above, including the need to recruit highly specialized DevOps architect talent.

Teams still get a sense of control, while the cloud provider is responsible for security automation and autoscaling. PaaS takes away these administrative worries so teams can just focus on delivering that business value.

Infrastructure as a Service (IaaS)

IaaS and public cloud are rather synonymous. This is when an organization still maintains control of most of its stack but leaves the servers, uptime and data storage to the cloud providers.

Function as a Service (FaaS) or Serverless Compute 

Netflix, while managing hundreds of services in an agile and seemingly autonomous fashion, famously offers guardrails. Context, not control. Because with increasing autonomy comes increasing complexity — of systems and people. Netflix leadership provides a pathway and guidance while enabling the optimal amount of trust in its developers.

Essentially, over the last 20 years, the pendulum has swung from quarterly Waterfall releases on legacy architecture all the way to individuals releasing multiple times a day. We are now witnessing a swing back toward a middle ground, and serverless sits squarely in it.

A function is a step below even a microservice. These small units of code are designed to do only one thing, usually acting on an event. Smaller than a microservice or a container, functions are the smallest unit of execution in wide use today.

FaaS or serverless comes with an understanding that you need flexibility, but also that if it ain’t broke, don’t fix it. A company like the digital bank Monzo has thousands of functions and microservices in the cloud, choosing the best approach for each use case. Sometimes you’ll want to abstract the underlying infrastructure in a PaaS model, but other times you will still want access to the base infrastructure.

FaaS is built on event-driven architecture — if x, then y. Event-driven architecture is part of the broader industry trend of separating components and streamlining processes so that releases are faster and organized around end-user activity. It’s all about chaining together functions or services to better serve business needs by publishing, listening to and reacting to events.

In a FaaS setup, you upload your function code and attach an event source to it. The cloud provider then ensures that your functions are always available, autoscaling them as stateless, ephemeral instances.
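
To make that concrete, here is a minimal, hedged sketch of what such a function might look like in Python, using the AWS Lambda-style (event, context) handler signature; the event shape, field names and business logic are all hypothetical.

    import json

    # Minimal FaaS sketch (AWS Lambda-style signature). The platform calls
    # handler() once per event; instances are stateless and ephemeral.
    # The event shape and field names below are hypothetical.
    def handler(event, context):
        # "If x, then y": react to a single order-created event
        order = json.loads(event["body"])

        # Do exactly one thing: compute the order total
        total = sum(item["price"] * item["qty"] for item in order["items"])

        return {
            "statusCode": 200,
            "body": json.dumps({"order_id": order["id"], "total": total}),
        }

Attach an event source (an HTTP route, a message queue, an object-storage notification) and the platform handles invocation, scaling and availability.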

Serverless typically comes with the lowest runtime costs and, since it shuts down servers when not in use, it’s also the best for the environment. It allows for speedy development and rapid autoscaling.

It’s still pay-as-you-go along with the other general cloud computing benefits, and it allows for clear code-base separation. The downside is that unless you use an abstraction tool like Rancher, you can get trapped into vendor lock-in again.

Where is the Cloud Headed in 2021?

Knight predicts there will be a consolidation of developer tooling this year. FaaS and serverless will fulfill this need to balance flexibility with simplicity and take the next step to bring developers and operations together while not tying down teams to a specific cloud provider.

Teams, like clouds, will embrace hybrid collaborative approaches that bring all stakeholders to the table earlier on. And security will become an essential part of every step.

Here’s to a great year in cloud computing and digital transformation!

Jennifer uses storytelling, writing, marketing, podcast hosting, public speaking and branding to bridge the gaps across tech, business and culture. Because if we’re building the future, we need to think more about that future we’re building.

NGINX Guest Blog: NGINX Kubernetes Ingress Controller

Thursday, 11 March, 2021
Guest blog by Dylen Turnbull, Solution Architect at NGINX (F5)

REGISTER FOR OUR UPCOMING WEBINAR (3/20/21) – NGINX & Rancher – Simplifying, Securing, and Scaling Your Kubernetes Deployments

Now available through the Rancher Apps and Marketplace

You probably know by now that Kubernetes is a powerful platform – but it needs other tools to make it even better. Ingress Controllers fall into that category, and if you’ve been using Kubernetes, you’re probably quite familiar with them. But here’s a quick refresher on Ingress and Ingress controllers.

By design, Kubernetes pods can be accessed only by other pods within the cluster – not from the external network. Ingress is Kubernetes’ built‑in configuration for HTTP load balancing that defines rules for external connectivity. When you need to provide external access to your Kubernetes services, you create an Ingress resource that defines rules, including the URI path, backing service name, and other information. Then you use an Ingress controller to automatically program a front‑end load balancer that implements the Ingress configuration.
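
For illustration, here is a minimal sketch of such an Ingress resource, created with the official Kubernetes Python client; the host and service names are hypothetical, and in practice many teams write the equivalent YAML manifest instead. An Ingress controller watches for objects like this and programs the load balancer to match.

    # Minimal sketch: an Ingress rule (URI path + backing service) built
    # with the official Kubernetes Python client. Names are hypothetical.
    from kubernetes import client, config

    config.load_kube_config()

    ingress = client.V1Ingress(
        metadata=client.V1ObjectMeta(name="example-ingress"),
        spec=client.V1IngressSpec(
            rules=[client.V1IngressRule(
                host="app.example.com",
                http=client.V1HTTPIngressRuleValue(paths=[
                    client.V1HTTPIngressPath(
                        path="/",            # route everything under /
                        path_type="Prefix",
                        backend=client.V1IngressBackend(
                            service=client.V1IngressServiceBackend(
                                name="my-service",  # hypothetical backend
                                port=client.V1ServiceBackendPort(number=80),
                            )
                        ),
                    )
                ]),
            )],
        ),
    )

    client.NetworkingV1Api().create_namespaced_ingress(
        namespace="default", body=ingress
    )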

NGINX Ingress Controller from NGINX (now part of F5) provides enterprise-grade delivery services for Kubernetes applications. In this blog, we’ll explore the integration of NGINX Ingress Controller with the Rancher Apps and Marketplace. But before we jump into the blog, let’s talk about which NGINX Ingress Controller you may be using.

There are two popular Kubernetes Ingress controllers that use NGINX – both are open source and hosted on GitHub. One is maintained by the Kubernetes open source community (kubernetes/ingress-nginx on GitHub) and one is maintained by NGINX, Inc. (nginxinc/kubernetes-ingress on GitHub).

What Makes NGINX’s Ingress Controller Different?

Here’s how the goals of NGINX’s Ingress controller differ from the community’s Ingress controller, straight from NGINX’s VP of Product Management, Sidney Rabsatt:

  • Development philosophy – NGINX’s top priority for our Ingress controller is to deliver long‑term stability and consistency. We make every possible effort to avoid changes in behavior between releases, particularly any that break backward compatibility. We promise you won’t see any unexpected surprises when you upgrade.
  • Continual production readiness – NGINX provides commercial support for every release of our Ingress controller, so every release is built and maintained to a supportable, production standard. You benefit from this “enterprise‑grade” focus equally whether you’re using NGINX Open Source or NGINX Plus.
  • Integrated codebase – NGINX’s Ingress controller uses a 100% pure NGINX or NGINX Plus instance for load balancing, applying best‑practice configuration using native NGINX capabilities alone. It does not rely on any third‑party modules or Lua code that have not benefited from our interoperability testing. Furthermore, the community’s Ingress controller relies on slower Lua code for some functionality native to NGINX Plus.
  • Security – We don’t assemble our Ingress controller from lots of third‑party repos; we develop and maintain the load balancer (NGINX and NGINX Plus) and Ingress controller software (a Go application) ourselves. We are the single authority for all components of our Ingress controller.
  • Support – NGINX’s Ingress controller is fully supported for NGINX Plus customers and users of NGINX Open Source who have a paid support contract.

And while we’re here, let’s review some of the key benefits you get from NGINX Plus when using it with NGINX’s Ingress controller:

  • Additional capabilities – Real‑time metrics, additional load‑balancing methods, session persistence, active health checks, JWT validation
  • Dynamic reconfiguration – Faster, non‑disruptive reconfiguration ensures you can deliver applications with consistent performance and resource usage
  • Commercial support – It’s like having an NGINX developer on your DevOps team!

Of course, NGINX and NGINX Plus can be deployed on any platform including bare metal, containers, VMs, and public, private, and hybrid clouds.

Now that we’ve covered the differences between the NGINX Ingress controllers, let’s dive in.

NGINX and the Rancher Apps and Marketplace

In partnership with Rancher Labs, NGINX has added the NGINX Ingress Controller to the Rancher Apps and Marketplace. We have provided a drop-in solution in the form of a Rancher Chart that leverages the official open-source version of NGINX. In addition, the Apps and Marketplace provides a simple upgrade path to the fully supported version of NGINX Plus with extended functionality.

Let’s walk through setting up both versions.

Once you’ve fulfilled some minor prerequisites, you set a couple of configuration options via the Rancher Chart UI to deploy the NGINX Open Source or NGINX Plus version to any Rancher-managed cluster as either a NodePort or a DaemonSet.

Deploying the NGINX Plus version gives you access to a number of advanced features, which we’ll explore in the next section. It also includes NGINX App Protect, for an enterprise‑grade Ingress controller with a web application firewall (WAF) that sits inside the Kubernetes cluster.

So, why use the NGINX Ingress Controller for Kubernetes?

Both the NGINX Open Source and NGINX Plus versions provide SSL/TLS termination, WebSocket support, URL rewrites, HTTP/2, a Prometheus exporter, and Helm charts. The NGINX Plus version also includes:

  • Reduced complexity
  • Advanced load balancing
  • Observability
  • Security
  • Self-service and multi-tenancy
  • Production readiness

See here for more information.

Integration with NGINX App Protect

As we said earlier, the NGINX Plus version is now fully integrated with NGINX App Protect. It is the only supported WAF that sits inside the Kubernetes cluster along with the application pods it protects from malicious attacks.

Why Is Integrating the WAF into the Ingress Controller So Significant?

Integrating the WAF into the Ingress Controller brings three unique benefits to both administrators and app developers:

  • Securing the application perimeter
  • Consolidating the data plane
  • Consolidating the control plane – having fewer security tools to manage increases efficiency and reduces possible points of failure

Developers can also incorporate WAF functionality into their workflows, without having to ask other teams to grant permissions. This creates efficiencies and supports compliance with security requirements.

For more information, visit: nginx.com/products/nginx/nginx-ingress-controller

Bio: Dylen Turnbull – Solution Architect

Dylen Turnbull (@Dylen_Turnbull) / Twitter

https://www.linkedin.com/in/dylen-turnbull/

Throughout his career, Dylen Turnbull has worked for several companies: Symantec, Veritas, F5 Networks, and now F5’s NGINX business unit, accumulating over 22 years of enterprise and open source software and solution development experience. Working with NGINX Business Development on strategic partner alliances with Rancher and Grafana Labs, his primary focus has been integration work with open-source technologies including Rancher, Rancher Kubernetes Engine, K3s, Prometheus, and Grafana in the containerization, virtualization, and continuous integration/delivery solutions space.