Happy birthday, Linux!

Thursday, 25 August, 2022

There is a natural connection. Positive vibes, probably, or the zodiac sign that “brought us together”? Yes – Linux and I were born on the same day, August 25 (well, not exactly in the same year, however …). Thinking about this “double feature birthday” – and the upcoming 30th (!!!) birthday of our company SUSE on September 2nd – brought back a flood of memories and some stories that influenced my career. So bear with me and let me share a smidgen :-).

Linux to the edge (and beyond)

I started working with Linux quite late, in 1998 – even though I had known about it earlier. My “first contact” dates back to 1994, when I was teaching Mass Media at the University of Erlangen. Our server infrastructure was administered by a student who was a real Linux geek, and he is to blame for my interest in Linux and also in High Performance Computing (my first big love when working in Product Marketing). He told me about Open Source, the Community, and Linux. And he made me aware of the first Beowulf cluster, built at NASA using Linux as an alternative to very expensive HPC supercomputers.

Example of a Beowulf cluster

S.u.S.E. Linux 1.0 – back in 1994

It was back in those days that Linux started its success in the High Performance Computing area. Today it is THE operating system for HPC.

The reason I mention this – and what especially strikes me here – is the following: Linux has always been OPEN, including open to bleeding-edge technology, and that has paid off, as Linux has steadily incorporated HPC features over the years. To gain performance, HPC systems running on Linux have been spearheading the industry with regard to the deployment of important architectures, such as 64-bit processors (the Intel Itanium 2 at the time, or the AMD Opteron, where SUSE was heavily involved in the technical development), and technologies like InfiniBand. Managing complexity, providing power and cooling, and ensuring efficient application scaling and hardware utilization were the particular challenges that HPC needed to solve. Thus the HPC market has always been the vanguard of new, sometimes exotic computing technologies. Artificial intelligence, machine learning and much more – during the past decades, the Linux-on-HPC market was where vendors tested the ideas that drove future commercial products. Not a lot of people are aware of it, or reflect on it, but Linux in combination with HPC very often initiated, pushed and accelerated important technical innovations we all benefit from today.

Linux goes enterprise

Not too long ago, a colleague asked me: “When you started working with Linux, did you ever expect it to be where it is today – enterprise-class and used by big organizations?” When I actually started to work with Linux myself, it was already seven years old. I was employed by a company that wrote software – specifically tailored telebanking and telecommunication applications – for a very limited number of customers (BIG German companies …).

And we developed those applications on Linux. One of the reasons was that these companies wanted to get the sources for the programs we developed, along with the documentation, so that in the worst case they could develop the software further, or change and adapt it to their needs. The other reason was security. 15,000 eyes of a worldwide developer community can see much more than just a group of in-house developers – that was their credo and motivation.

At the time I wasn’t aware of it, but these companies were quite ahead of their time, because they had understood some of the key advantages of Linux at a very early stage. Thus, when I started to work with Linux, funnily enough it was quite normal for me that bigger enterprises were using it as a viable alternative for enterprise computing.

I only realized that many people saw this differently when I started my new life at SUSE in 2000. So many discussions popped up about whether Linux was enterprise-ready, whether it could deliver the high availability, security and performance that enterprise customers expected from their operating system, and so on. What really struck me was that many data center managers I spoke to were convinced that they had NO Linux in their data center. But after taking inventory, they all had to admit that somewhere they were running one or more Linux-based servers – they just didn’t know about it.

Consequently, one of the most important milestones for me – and, I assume, for Linux in general – was when we invented Enterprise Linux back in 1999/2000. This was when IBM, Marist College and SUSE began working on a version of Linux for the mainframe (the other “big love” I worked on during my time in Product Marketing).

Linux for the mainframe involved creating new processes and infrastructure at SUSE: a new business model, 24×7 worldwide support, synchronized IBM/SUSE Level 3 support processes, ISV and IHV certification, and more. Finally, in October 2000, SUSE Linux Enterprise Server for S/390, the first true enterprise-class and cross-platform Linux offering, was officially released.

The first SUSE Linux Enterprise Server flyer

And all other enterprise Linux distributions eventually adopted this enterprise-class business model. Today, on the basis of this business concept, SUSE has expanded its ecosystem to many thousands of customers and partners worldwide, and employs more than 2,000 people.

Linux breaks barriers

Another milestone in the history of Linux for me – and from a SUSE point of view – was the Microsoft–SUSE agreement in 2006, a real historic moment of sorts. My very first emotional reaction was – as, I assume, for many Linux enthusiasts – anything but positive … Microsoft was the big enemy, the “dark side of the Force”. And we were accused of having sold our souls to the devil! But then I realized that there was a very valid motivation behind this cooperation: our customers. Both companies had understood very well that no customer’s world is homogeneous, that almost all of them are dealing with heterogeneous data centers (well, even if some higher-level managers did not know about it), where they run open source AND proprietary software. One of my colleagues reminded us of a famous quote attributed to Mahatma Gandhi:

“First they ignore you, then they laugh at you, then they fight you, then you win.”

And indeed, looking under the surface, this partnership with Microsoft meant that Linux had won. Together as a community, we had provided the evidence that “Big Brother” could no longer ignore Linux, because Linux had become an indispensable component of the IT industry, and that it had conquered and successfully defended its claim to be an enterprise-class operating system. In retrospect, this was also the ultimate proof for me that Linux would not vanish.

 

So, what do these stories from the earlier years of Linux teach us about the years to come? In my personal opinion they tell us: stay innovative, dare to take risks, and don’t let a “this is not possible” hold you up. But at the same time, don’t forget to cultivate and grow your original Linux roots.

Disclaimer: This text has not been reviewed by a native speaker. If you find typos, please send them to me (meike.chabowski@suse.com) – or if you like them, just keep them :-).

 

How to Explain Zero Trust to Your Tech Leadership: Gartner Report

Wednesday, 24 August, 2022

Does it seem like everyone’s talking about Zero Trust? Maybe you know everything there is to know about Zero Trust, especially Zero Trust for container security. But if your Zero Trust initiatives are being met with brick walls or blank stares, maybe you need some help from Gartner®. And they’ve got just the thing to help you explain the value of Zero Trust to your leadership. It’s called Quick Answer: How to Explain Zero Trust to Technology Executives.

So What is Zero Trust?

According to authors Charlie Winckless and Neil MacDonald from Gartner, “Zero Trust is a misnomer; it does not mean ‘no trust’ but zero implicit trust and use of risk-appropriate, explicit trust. To obtain funding and support for Zero Trust initiatives, security and risk management leaders must be able to explain the benefits to their technical executive leaders.”

Explaining Zero Trust to Technology Executives

This Quick Answer starts by introducing the concept of Zero Trust so that you can do the same.  According to the authors, “Zero Trust is a mindset (or paradigm) that defines key security initiatives. A Zero Trust mindset extends beyond networking and can be applied to multiple aspects of enterprise systems. It is not solely purchased as a product or set of products.” Furthermore,

“Zero Trust involves systematically removing implicit trust in IT infrastructures.”

The report also helps you explain the business value of Zero Trust to your leadership. For example, “Zero trust forms a guiding principle for security architectures that improve security posture and increase cyber-resiliency,” write Winckless and MacDonald.

Next Steps to Learn about Zero Trust Container Security

Get this report and learn more about Zero Trust, how it can bring greater security to your container infrastructure and how you can explain the need for Zero Trust to your leadership team.

For even more on Zero Trust, read our new book, Zero Trust Container Security for Dummies.

Bank Gospodarstwa Krajowego Bets on Application Containerization with the Help of SUSE Rancher

Monday, 15 August, 2022

Bank Gospodarstwa Krajowego is betting on the implementation and use of containerized business applications in accordance with Cloud Native Computing Foundation (CNCF) standards. In cooperation with Linux Polska Sp. z o.o., among others, the bank has put into production the SUSE Rancher containerization platform, an intuitive tool that facilitates the management of multiple Kubernetes clusters. The solution will significantly speed up the process of developing and deploying business-critical applications in line with DevOps and CI/CD best practices.

(The following text with statements from representatives of the parties involved comes from a communiqué prepared by Linux Polska in cooperation with SUSE Polska.)

Why should you pay attention to the SUSE Rancher platform?

SUSE Rancher is an ideal solution for organizations that have multiple application clusters. It allows you to centrally manage clusters in any environment (bare metal, virtual, hybrid cloud), using a unified interface. SUSE Rancher works particularly well for organizations with heterogeneous container environments that require standardization.

“We are pleased to be part of the IT transformation implemented by a major government bank and its professional team of specialists,” says Dariusz Świąder, CEO of Linux Polska. “The implementation of a new containerization platform for banking applications, based on SUSE Rancher, is a remarkable event for us, illustrating the power of open source solutions and the openness of institutions (such as the bank) to modern solutions. As independent consultants, we conclude that BGK made the optimal choice, not only in terms of functional requirements, but also multi-year TCO. SUSE Rancher is one of the containerization platforms for which Linux Polska, in recent years, has decided to build a professional engineering team to support implementations.”

 

“BGK is currently undergoing a kind of revolution that covers every aspect of its operations,” says Sebastian Jaworski, director of IT Systems Development at BGK. “We are in the process of implementing projects for the Central Banking System, the Treasury BackOffice system, the electronic banking system and the BPM system with extensive rebuilding in the area of supporting software and infrastructure. Given our limited human resources for maintenance and management, one of our main goals is to standardize solutions and maximize automation; the Rancher platform is one of those elements that fit perfectly into our IT strategy.”

 

Maciej Stalewski, director of IT Services Management at BGK, says, “Implementing such a large-scale change across BGK requires providing the right facilities and tools. Digital transformation is already a process that is evident in BGK’s DNA. We intend to maintain this momentum in the coming years and be technologically ready for the dynamically changing reality and needs.”

 

Marcin Madey, president of SUSE Poland, states, “In today’s world, every financial institution is a technology company. Entities such as BGK, therefore, need to respond quickly and effectively to the needs of implementing innovative digital services, shorten the time of implementation and ensure an appropriate level of security for users, all while keeping cost efficiency in mind. Nowadays, technology innovation and the ease of its application is a key issue to take on the challenges brought by digitization. The use of SUSE Rancher software has made it possible to build a cloud-in-a-box solution inside BGK, which quickly and efficiently allows the creation of IT ecosystems based on microservices and introduces a high level of process automation in operating the platform. As a result, BGK has entered the ranks of the leaders in digital transformation.”

 

Cloud Modernization Best Practices

Monday, 8 August, 2022

Cloud services have revolutionized the technical industry, and services and tools of all kinds have been created to help organizations migrate to the cloud and become more scalable in the process. This migration is often referred to as cloud modernization.

To successfully implement cloud modernization, you must adapt your existing processes for future feature releases. This could mean adjusting your continuous integration, continuous delivery (CI/CD) pipeline and its technical implementation, updating or redesigning your release approval process (e.g., moving from manual to automated approvals), or making other changes to your software development lifecycle.

In this article, you’ll learn some best practices and tips for successfully modernizing your cloud deployments.

Best practices for cloud modernization

The following are a few best practices that you should consider when modernizing your cloud deployments.

Split your app into microservices where possible

Most existing applications deployed on-premises were developed and deployed with a monolithic architecture in mind. In this context, monolithic architecture means that the application is single-tiered and has no modularity. This makes it hard to bring new versions into a production environment because any change in the code can influence every part of the application. Often, this leads to a lot of additional and, at times, manual testing.

Monolithic applications often do not scale horizontally and can cause various problems, including complex development, tight coupling, slow application starts due to application size, and reduced reliability.

To address the challenges that a monolithic architecture presents, you should consider splitting your monolith into microservices. This means that your application is split into different, loosely coupled services that each serve a single purpose.

All of these services are independent solutions, but they are meant to work together to contribute to a larger system at scale. This increases reliability, as one failing service does not take down the whole application with it. You also gain the freedom to scale each component of your application without affecting the others. On the development side, since each component is independent, you can split the development of your app among your team and work on multiple components in parallel to ensure faster delivery.
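As a loose sketch of what “single purpose” means in practice – assuming a Node.js/TypeScript stack, with every name here purely illustrative – each service owns one narrow concern and exposes it over a small, well-defined API:

```typescript
// orders-service.ts – an illustrative single-purpose microservice.
// Payments, inventory and shipping would live in their own independently
// deployable services and talk to this one over HTTP.
import { createServer } from "node:http";

const orders = new Map<string, { item: string; quantity: number }>();

const server = createServer((req, res) => {
  if (req.method === "GET" && req.url === "/health") {
    // A liveness endpoint lets the orchestrator restart only this service
    // if it fails, instead of the whole application going down.
    res.writeHead(200).end("ok");
  } else if (req.method === "GET" && req.url === "/orders") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify(Object.fromEntries(orders)));
  } else {
    res.writeHead(404).end();
  }
});

server.listen(8080, () => console.log("orders service listening on :8080"));
```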

For example, the Lyft engineering team managed to quickly grow from a handful of different services to hundreds of services while keeping their developer productivity up. As part of this process, they included automated acceptance testing as part of their pipeline to production.

Isolate apps away from the underlying infrastructure

In many older applications and workloads, engineers built scripts or pieces of code that were tied to the infrastructure they were deployed on. That is, they wrote scripts that referenced specific folders or required predefined libraries to be available in the environment in which the scripts were executed. Often, this was due to required configurations of the hardware infrastructure or the operating system, or due to dependencies on certain packages that the application required.

Most cloud providers formalize the alternative as a shared responsibility model. In this model, the cloud or service provider takes responsibility for the parts of the services it operates, and the service user takes responsibility for protecting and securing the data of any services or infrastructure they use. The interaction between the services or applications deployed on the infrastructure is well defined through APIs or integration points. This means that the more you move away from managing and relying on the underlying infrastructure, the easier it becomes to replace it later: if required, you only need to adjust the APIs or integration points that connect your application to the underlying infrastructure.

To isolate your apps, you can containerize them, which bakes your application into a repeatable and reproducible container. To further separate your apps from the underlying infrastructure, you can move toward serverless-first development, which includes a serverless architecture. You will be required to re-architect your existing applications to be able to execute on AWS Lambda or Azure Functions or adopt other serverless technologies or services.
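As a minimal sketch of what that re-architecting might look like – assuming AWS Lambda with the AWS SDK for JavaScript v3, and with the table name and event shape being hypothetical – a single CRUD read path becomes a standalone handler:

```typescript
// get-order.ts – a sketch of a CRUD read path re-architected as an AWS
// Lambda handler. The "orders" table and the event shape are hypothetical.
import { DynamoDBClient, GetItemCommand } from "@aws-sdk/client-dynamodb";

const db = new DynamoDBClient({});

// Lambda invokes this function per request; scaling, patching and the
// underlying hosts are the cloud provider's responsibility.
export const handler = async (event: {
  pathParameters?: { id?: string };
}) => {
  const id = event.pathParameters?.id;
  if (!id) return { statusCode: 400, body: "missing id" };

  const result = await db.send(
    new GetItemCommand({ TableName: "orders", Key: { id: { S: id } } })
  );
  return result.Item
    ? { statusCode: 200, body: JSON.stringify(result.Item) }
    : { statusCode: 404, body: "not found" };
};
```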

While going serverless is recommended in some cases, such as simple CRUD operations or applications with high scaling demands, it’s not a requirement for successful cloud modernization.

Pay attention to your app security

As you begin to incorporate cloud modernization, you’ll need to ensure that any deliverables you ship to your clients are secure, and you should follow a shift-left process. This process lets you quickly provide feedback to your developers by incorporating security checks and guardrails early in your development lifecycle (e.g., running static code analysis directly after a commit to a feature branch). And to keep things secure at all times during the development cycle, it’s best to set up continuous runtime checks for your workloads. This will ensure that you actively catch future issues in your infrastructure and workloads.

Quickly delivering features, functionality or bug fixes to customers places more responsibility on you and your organization to ensure automated verification in each stage of the software development lifecycle (SDLC). This means that at each stage of the delivery chain, you will need to ensure that the delivered application and customer experience are secure; otherwise, you could expose your organization to data breaches that cause reputational damage.

Making your deliverables secure includes ensuring that any personally identifiable information is encrypted in transit and at rest. However, it also requires ensuring that your application does not have open security risks. This can be achieved by running static code analysis tools like SonarQube or Checkmarx.

In this blog post, you can read more about the importance of application security in your cloud modernization journey.

Use infrastructure as code and configuration as code

Infrastructure as code (IaC) is an important part of your cloud modernization journey. For instance, if you want to be able to provision infrastructure (i.e., the required hardware, networks and databases) in a repeatable way, IaC empowers you to apply existing software development practices (such as pull requests and code reviews) to infrastructure changes. Using IaC also helps you achieve immutable infrastructure, which prevents you from accidentally introducing risk while making changes to existing infrastructure.

Configuration drift is a prominent issue with making ad hoc changes to an infrastructure. If you make any manual changes to your infrastructure and forget to update the configuration, you might end up with an infrastructure that doesn’t match its own configuration. Using IaC enforces that you make changes to the infrastructure only by updating the configuration code, which helps maintain consistency and a reliable record of changes.

All the major cloud providers have their own definition languages for IaC, such as AWS CloudFormation, Google Cloud Deployment Manager on Google Cloud Platform (GCP) and Azure Resource Manager templates on Microsoft Azure.

Ensuring that you can deploy and redeploy your application or workload in a repeatable manner will empower your teams further because you can deploy the infrastructure in additional regions or target markets without changing your application. If you don’t want to use any of the major cloud providers’ offerings to avoid vendor lock-in, other IaC alternatives include Terraform and Pulumi. These tools offer capabilities to deploy infrastructure into different cloud providers from a single codebase.

Another way of writing IaC is the AWS Cloud Development Kit (CDK), which has unique capabilities that make it a good choice for writing IaC while driving cultural change within your organization. For instance, AWS CDK lets you write automated unit tests for your IaC. From a cultural perspective, it allows developers to write IaC in their preferred programming language, which means developers can be part of a DevOps team without needing to learn a new language. While AWS CDK targets AWS itself, its siblings cdk8s (for Kubernetes) and the Cloud Development Kit for Terraform (CDKTF) bring the same model to other platforms.
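To illustrate both points, here is a minimal sketch – the resource names and the asserted property are examples, not prescriptions – of a CDK stack in TypeScript together with a unit test against the CloudFormation template it synthesizes:

```typescript
// stack.ts – a minimal, illustrative CDK stack plus a unit test.
import { App, Stack, RemovalPolicy } from "aws-cdk-lib";
import { Bucket, BucketEncryption } from "aws-cdk-lib/aws-s3";
import { Template } from "aws-cdk-lib/assertions";

class DataStack extends Stack {
  constructor(scope: App, id: string) {
    super(scope, id);
    // Infrastructure declared in an everyday programming language:
    new Bucket(this, "ReportsBucket", {
      encryption: BucketEncryption.S3_MANAGED,
      removalPolicy: RemovalPolicy.RETAIN,
    });
  }
}

// The synthesized CloudFormation template can be unit-tested like any
// other code: here we assert that encryption at rest stays enabled.
const template = Template.fromStack(new DataStack(new App(), "DataStack"));
template.hasResourceProperties("AWS::S3::Bucket", {
  BucketEncryption: {
    ServerSideEncryptionConfiguration: [
      { ServerSideEncryptionByDefault: { SSEAlgorithm: "AES256" } },
    ],
  },
});
```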

After adopting IaC, it’s also recommended to manage all your configuration as code (CaC). When you use CaC, you can put the same guardrails (i.e., pull requests and reviews) around configuration changes that you require for any code change in a production environment.

Pay attention to resource usage

It’s common for new entrants to the cloud to neglect tracking their resource consumption while they’re in the process of migrating. Some organizations start with far too many resources (on the order of 20 percent more than needed), while others forget to set up restricted access to prevent overuse. This is why tracking the resource usage of your new cloud infrastructure from day one is very important.

There are a couple of things you can do about this. The first, very high-level measure is to set budget alerts so that you’re notified when your resources start to cost more than they are supposed to within a fixed time period. The next step is to go a level down and set up cost consolidation for each resource used in the cloud. This will help you understand which resources are responsible for exceeding your budget.

The final and very effective solution is to track and audit the usage of all resources in your cloud. This will give you a direct answer as to why a certain resource overshot its expected budget and might even point you towards the root cause and probable solutions for the issue.
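As one concrete way to get that per-resource view – a sketch assuming the AWS Cost Explorer API via the AWS SDK for JavaScript v3, with the date range purely illustrative – you can pull monthly cost broken down by service:

```typescript
// cost-report.ts – a sketch of per-service cost consolidation using the
// AWS Cost Explorer API. The date range below is purely illustrative.
import {
  CostExplorerClient,
  GetCostAndUsageCommand,
} from "@aws-sdk/client-cost-explorer";

const client = new CostExplorerClient({});

async function costByService(start: string, end: string): Promise<void> {
  const { ResultsByTime } = await client.send(
    new GetCostAndUsageCommand({
      TimePeriod: { Start: start, End: end },
      Granularity: "MONTHLY",
      Metrics: ["UnblendedCost"],
      // Group by service so each line answers "which resource type
      // is responsible for the spend?"
      GroupBy: [{ Type: "DIMENSION", Key: "SERVICE" }],
    })
  );
  for (const period of ResultsByTime ?? []) {
    for (const group of period.Groups ?? []) {
      console.log(group.Keys?.[0], group.Metrics?.UnblendedCost?.Amount, "USD");
    }
  }
}

costByService("2022-07-01", "2022-08-01").catch(console.error);
```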

Culture and process recommendations for cloud modernization

How cloud modernization impacts your organization’s culture and processes often goes unnoticed. If you really want to implement cloud modernization, the mindset of every engineer in your organization needs to change drastically.

Modernize SDLC processes

Oftentimes, organizations with a more traditional, non-cloud delivery model follow a checklist-based approach for their SDLC. During your cloud modernization journey, existing SDLC processes will need to be enhanced to be able to cope with the faster delivery of new application versions to the production environment. Verifications that are manual today will need to be automated to ensure faster response times. In addition, client feedback needs to flow faster through the organization to be quickly incorporated into software deliverables. Different tools, such as SecureStack and SUSE Manager, can help automate and improve efficiency in your SDLC, as they take away the burden of manually managing rules and policies.

Drive cultural change toward blameless conversations

As your cloud journey continues to evolve and you need to deliver new features faster or quickly fix bugs as they arise, this higher change frequency and higher usage of applications will lead to more incidents and cause disruptions. To avoid attrition and arguments within the DevOps team, it’s important to create a culture of blameless communication. Blameless conversations are the foundation of a healthy DevOps culture.

One way you can do this is by running blameless post-mortems. A blameless post-mortem is usually set up after a negative experience within an organization. In the post-mortem, which is usually run as a meeting, everyone explains his or her view on what happened in a non-accusing, objective way. If you facilitate a blameless post-mortem, you need to emphasize that there is no intention of blaming or attacking anyone during the discussion.

Track key performance metrics

Google’s annual State of DevOps report uses four key metrics to measure DevOps performance: deployment frequency, lead time for changes, time to restore service, and change failure rate. While this article doesn’t focus specifically on DevOps, tracking these four metrics is also beneficial for your cloud modernization journey because it allows you to compare yourself with other industry leaders. Any improvement in these key performance indicators (KPIs) will motivate your teams and help ensure you reach your goals.
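As a rough sketch of how some of these metrics can be derived from your own deployment records – the data shape here is hypothetical, and a real pipeline would pull it from its CI/CD tooling – consider:

```typescript
// dora-metrics.ts – a rough sketch computing three DevOps metrics from a
// hypothetical list of deployment records.
interface Deploy {
  mergedAt: Date;   // when the change was ready to ship (for lead time)
  deployedAt: Date; // when it reached production
  failed: boolean;  // did this deploy cause an incident or rollback?
}

function doraMetrics(deploys: Deploy[], windowDays: number) {
  const deploymentFrequency = deploys.length / windowDays; // deploys/day
  const changeFailureRate =
    deploys.filter((d) => d.failed).length / deploys.length;
  const avgLeadTimeHours =
    deploys.reduce(
      (sum, d) => sum + (d.deployedAt.getTime() - d.mergedAt.getTime()),
      0
    ) / deploys.length / 36e5; // milliseconds to hours
  return { deploymentFrequency, changeFailureRate, avgLeadTimeHours };
}

// Example: two deploys in a 7-day window, one of which failed.
console.log(
  doraMetrics(
    [
      { mergedAt: new Date("2022-08-01T09:00Z"), deployedAt: new Date("2022-08-01T11:00Z"), failed: false },
      { mergedAt: new Date("2022-08-03T10:00Z"), deployedAt: new Date("2022-08-04T10:00Z"), failed: true },
    ],
    7
  )
);
```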

One of the key things you can measure is the duration of your modernization project. The project’s duration will directly impact the project’s cost, which is another important metric to pay attention to in your cloud modernization journey.

Ultimately, different companies will prioritize different KPIs depending on their goals. The most important thing is to pick metrics that are meaningful to you. For instance, a software-as-a-service (SaaS) business hosting a rapidly growing consumer website will need to track the time it takes to deliver a new feature (from commit to production), whereas that metric matters far less for a traditional bank that only updates its software once a year.

You should review your chosen metrics regularly. Are they still in line with your current goals? If not, it’s time to adapt.

Conclusion

Migrating your company to the cloud requires changes across the entirety of your applications and workloads. But it doesn’t stop there. To implement cloud modernization effectively, you also need to adjust your existing operations, software delivery process and organizational culture.

In this roundup, you learned about some best practices that can help you in your cloud modernization journey. By isolating your applications from the underlying infrastructure, you gain flexibility and the ability to shift your workloads easily between different cloud providers. You also learned how a modern SDLC process can help your organization protect your customers’ data and avoid the reputational loss caused by security breaches.

SUSE supports enterprises of all sizes on their cloud modernization journey through their Premium Technical Advisory Services. If you’re looking to restructure your existing solutions and accelerate your business, SUSE’s cloud native transformation approach can help you avoid common pitfalls and accelerate your business transformation.

Learn more in the SUSE & Rancher Community. We offer free classes on Kubernetes, Rancher, and more to support you on your cloud native learning path.

SUSE Rancher wins three awards from TrustRadius

Wednesday, 3 August, 2022

I am delighted to share that SUSE Rancher received three awards from TrustRadius, the trusted research and review platform for business leaders, as the container management solution providing the ‘Best Feature Set,’ ‘Best Value’ and ‘Best Relationship.’

Cloud native solutions are at the forefront of our customers’ strategy, playing an integral role across businesses, from application development to infrastructure modernization projects. By 2025, Gartner estimates that over 95% of new digital workloads will be deployed on cloud native platforms, and as their demands grow, we’re committed to delivering the right open source solutions to help them scale with confidence.  

Today, only SUSE delivers on the key pillars of a successful “Kubernetes Everywhere” strategy, helping our customers address the operational and security challenges of managing certified Kubernetes clusters in the data center, in the cloud and at the edge. SUSE also provides DevOps teams with integrated tools for building and running containerized workloads at scale.

These three awards are a tribute to the outstanding work our product, engineering, sales and services teams have done to deliver an industry-leading, reliable and secure Kubernetes management platform backed by quality support – and they are validated by the testimonials from our customers. Here’s what some of them have to say:

 

“As a bank, we are responsible for keeping our customers’ accounts and information secure. We, therefore, have very strict security requirements. We wanted to manage all security details from one place, whether it be network policies, accessibility or simply safeguarding our services.” 

Vazha Pirtskhalaishvili, head of the engineering unit at Bank of Georgia.  

“With Kubernetes and SUSE Rancher, we get the scalability, mobility and high availability we need to deliver high-performing solutions to our customers.”

Vazha Mantua, Deputy CIO at Bank of Georgia 

 “To accelerate the pace of digital transformation, we need to remove operational complexity and ensure consistent, automated and secure processes for developing and deploying containerized applications. That’s exactly what SUSE Rancher is helping us do today.” 

Andreas Rother, Team Leader, Pipeline and Container Services, Gothaer Systems GmbH 

 

Success at SUSE is not limited to SUSE Rancher. Recently, SUSE Linux Enterprise Server (SLES) was recognized by G2 as a “Leader” in the Server Virtualization Software space and a “High Performer” in Infrastructure-as-a-Service, demonstrating SUSE’s continual commitment to delivering industry-leading open source solutions that help our customers build best-in-class infrastructure to support their organizations.

Curious to learn more about what we’re up to? You can check out some of the exciting updates from our product and engineering teams at SUSECON Digital 2022.


Preparing for the Next Wave of Transformation

Tuesday, 2 August, 2022

We’re lucky to work with so many innovative and forward-thinking companies here at SUSE. We see how committed our customers are to tackling their transformation challenges by leveraging open source tools and platforms that allow their developers to quickly build new solutions that drive their organization forward.

The nature of the beast, however, is that transformation is now a continuous program. Both business and IT leaders know that today’s cutting-edge capabilities will quickly be the minimum entry price for tomorrow’s marketplaces. Some of these capabilities now include:

Cloud native platforms: By 2025, Gartner estimates that over 95% of new digital workloads will be deployed on cloud-native platforms, up from just 30% in 2021.
Edge deployment: Over the next two years, organizations will increase their investments at the edge by an average of 37%, according to a recent IDC survey.
Containerized applications in production: By 2026, more than 90% of global organizations will be running containerized applications in production, an increase from less than 40% in 2022, according to Gartner.

When engaging with and hearing from our customers, the above statistics aren’t surprising, because customers know how crucial cloud native platforms, edge deployment and containerized applications are to their new digital ecosystems. The question is: if most companies are on the journey towards adopting these capabilities, how can a competitive advantage be created?

The answer lies in speed and efficiency, meaning that customers are prioritizing solutions that deliver new innovations at a faster pace while consuming fewer resources. At this juncture, development teams are already likely stretched to the limit with current resources, which makes it challenging to move any faster.

For your development teams to successfully deliver the next wave of transformation, they need tools that enable them to focus on the value-added tasks that fuel innovation. As a result, orchestration and automation are fast becoming essential capabilities for these teams.

Pulling the strings

With Kubernetes now acting as the default language of the hybrid multi-cloud, development and IT teams need the ability to manage this complex ecosystem to deliver:

• Consistent reliability on any infrastructure
• Faster, more efficient DevOps with standardized automation
• Consistent enforcement of security policies on any infrastructure

Unfortunately, we still speak to many teams that lack the visibility and consistency needed to manage their Kubernetes clusters independently. If we’re asking these teams to be the engine of growth and innovation, those intentions need to be matched with investments in next-generation tools that have automation and orchestration at the core of their functionality.

The ROI of orchestration platforms extends beyond simply being able to run your DevOps program more efficiently. With skills and talent shortages set to continue, technology specialists will be looking for those organizations that are walking the talk by delivering the best available tools and platforms.

No matter where the next wave of transformation takes your business, it will ultimately be your people who enable you to stay one step ahead of your competitors. Rather than asking development and IT teams to do more with less, future-focused organizations will instead be asking how they can enable their talent to perform at their best.

About the author

As the Chief Operating Officer for SUSE APAC, I’m focused on enabling our team to deliver on the strategic vision for delivering cutting-edge Open-Source solutions that allow our customers to Innovate Everywhere. With more than a decade of experience as a senior executive and strategic consultant across the enterprise technology sector, I bring my expertise from sectors such as supply chain, construction and engineering to understand the complex challenges our customers are facing and how we can be best positioned to assist in their ongoing transformation.

Navigating the rapidly evolving world of retail

Monday, 25 July, 2022

In today’s highly competitive post-Covid retail landscape, efficient operations that enable retailers to deliver the most competitive prices and enhance the customer experience have become essential to the sector.

Retail organisations that can adapt quickly to market changes and enhance customer services, whilst reducing operating costs, are the companies that will grow and prosper – whilst other market participants risk crumbling under increased competition from online retailers.

Retailers are facing stark competition from e-commerce giants and a sharp drop in footfall due to COVID-19. To survive, many have invested in new scalable online stores and big data analytics. As Deloitte stated, “The online and digital retail world is no longer the sole preserve of the agile startup or online pureplay business. We have begun to see the major established retail businesses fight back by embracing digital themselves. The modern retailer’s journey into digital sees them adapting their core, exploring digital products and experiences.”

To keep up with the pace of change required to remain relevant and competitive, retailers are having to evolve their IT strategy at a previously unimaginable pace.

Simply put, retailers must drive rapid digital transformation to build differentiating omni-channel customer experiences.

Retailers need to retire legacy systems as part of digital transformation. They must think about new store concepts that provide customers with convenience and exceptional experiences, including self-checkout devices and the latest POS hardware. Office Depot, LOTTE Department Store and ElectronicPartner are just some examples of retail brands that have done this successfully.

Agile delivery of seamless in-store experiences for customers, along with integration between online and physical offerings, has shifted from novelty to necessity.

For years, the retail industry has focussed primarily on reducing cost and increasing operational efficiency. But now, with the power of choice back in the hands of consumers, retailers must be willing to speed up innovation, focussing on customer experience rather than just margin expansion.

To achieve this level of modernisation, retailers need to consolidate IT operations and rethink their strategies.

Forward-thinking retailers embrace a more innovative way to build, deliver and manage applications across their entire enterprises, understanding that this is the only way to create the customer experiences they need to win. They’ve also realised that no matter what you’re selling, technology acts as a powerful differentiator in delivering experiences to customers.

At SUSE we help retailers around the world such as Office Depot, LOTTE Department Store, and ElectronicPartner to build the IT operations they need to understand customer behaviour, anticipate needs, and offer personalised services in an environment protected by robust security.

Whether it is achieving zero downtime and saving millions that can be reinvested into digital transformation, or driving increased stability and simplicity that creates more time for innovation – SUSE has a big role to play.

The future of technology holds so much potential. We are entering an era where the worlds of cloud and edge computing will collide, which will generate numerous possibilities to make businesses more agile and efficient.

For retailers, this means edge users will be able to push the cloud to local devices irrespective of location. This year, there will be increases in new, purpose-built apps and devices, which will extend digital transformation even further to warehouses, restaurants, retail storefronts and more.

SUSE and SUSE Rancher products are powering this retail revolution, shaping Retail 2.0 and giving ambitious market participants a competitive edge and the ability to achieve sustainable growth. To learn more about how SUSE is helping retail customers innovate from the core to the cloud to the edge and beyond – as well as achieve their business goals – click here and watch this video.


Managing Your Hyperconverged Network with Harvester

Friday, 22 July, 2022

Hyperconverged infrastructure (HCI) is a data center architecture that uses software to provide a scalable, efficient, cost-effective way to deploy and manage resources. HCI virtualizes and combines storage, computing, and networking into a single system that can be easily scaled up or down as required.

A hyperconverged network, the networking component of the HCI stack, helps simplify network management for your IT infrastructure and reduce costs by virtualizing your network. Of the storage, compute and network components, network virtualization is the most complicated, because you need to virtualize the physical controllers and switches while carving out the network isolation and bandwidth that storage and compute require. HCI allows organizations to simplify their IT infrastructure via a single control plane while reducing costs and setup time.

This article will dive deeper into HCI with a new tool from SUSE called Harvester. By using Kubernetes’ Container Network Interface (CNI) mechanisms, Harvester enables you to better manage the network in an HCI. You’ll learn the key features of Harvester and how to use it with your infrastructure.

Why you should use Harvester

The data center market offers plenty of proprietary virtualization platforms, but solutions that are both open source and enterprise-grade are rare. Harvester fills that gap. The HCI solution, built on Kubernetes, has garnered about 2,200 GitHub stars as of this writing.

In addition to traditional virtual machines (VMs), Harvester supports containerized environments, bridging the gap between legacy and cloud native IT. Harvester allows enterprises to replicate HCI instances across remote locations while managing these resources through a single pane.

Following are several reasons why Harvester could be ideal for your organization.

Open source solution

Most HCI solutions are proprietary, requiring complicated licenses, high fees and support plans to implement across your data centers. Harvester is a free, open source solution with no license fees or vendor lock-in, and it supports environments ranging from core to edge infrastructure. You can also submit a feature request or an issue on the GitHub repository, where engineers review the suggestions – unlike proprietary software, which often updates too slowly for market demands and only offers support for existing versions.

There is an active community that helps you adopt Harvester and offers troubleshooting help. If needed, you can buy a support plan to receive round-the-clock assistance from support engineers at SUSE.

Rancher integration

Rancher is an open source platform from SUSE that allows organizations to run containers in clusters while simplifying operations and providing security features. Harvester and Rancher, developed by the same engineering team, work together to manage VMs and Kubernetes clusters across environments in a single pane.

Importing an existing Harvester installation is as easy as clicking a few buttons on the Rancher virtualization management page. The tight integration enables you to use authentication and role-based access control for multitenancy support across Rancher and Harvester.

This integration also allows for multicluster management and load balancing of persistent storage resources in both VM and container environments. You can deploy workloads to existing VMs and containers on edge environments to take advantage of edge processing and data analytics.

Lightweight architecture

Harvester was built with the ethos and design principles of the Cloud Native Computing Foundation (CNCF), so it’s lightweight with a small footprint. Despite that, it’s powerful enough to orchestrate VMs and support edge and core use cases.

The three main components of Harvester are:

  • Kubernetes: Used as the Harvester base to produce an enterprise-grade HCI.
  • Longhorn: Provides distributed block storage for your HCI needs.
  • KubeVirt: Provides a VM management kit on top of Kubernetes for your virtualization needs.

The best part is that you don’t need experience in these technologies to use Harvester.

What Harvester offers

As an HCI solution, Harvester is powerful and easy to use, with a web-based dashboard for managing your infrastructure. It offers a comprehensive set of features, including the following:

VM lifecycle management

If you’re creating Windows or Linux VMs on the host, Harvester supports cloud-init, which allows you to assign a startup script to a VM instance that runs when the VM boots up.

The custom cloud-init startup scripts can contain custom user data or network configuration and are inserted into a VM instance using a temporary disk. Using the QEMU guest agent, you can dynamically inject SSH keys into your VM through the dashboard via cloud-init.

Creating and destroying a VM is a click away in a clearly designed UI.

VM live migration support

VMs inside Harvester are created on hosts or bare-metal infrastructure. One of the essential tasks in any infrastructure is reducing downtime and increasing availability. Harvester offers a high-availability solution with VM live migration.

If you need to take one host down for maintenance, you can move its VMs to another host with a single click on Migrate. During the migration, the VM’s memory pages and disk blocks are transferred to the new host.

Supported VM backup and restore

Backing up a VM allows you to restore it to a previous state if something goes wrong. This backup is crucial if you’re running a business or other critical application on the machine; otherwise, you could lose data or necessary workflow time if the machine goes down.

Harvester allows you to easily back up your machines in Amazon Simple Storage Service (Amazon S3) or network-attached storage (NAS) devices. After configuring your backup target, click Take Backup on the virtual machine page. You can use the backup to replace or restore a failed VM or create a new machine on a different cluster.

Network interface controllers

Harvester offers a CNI plug-in to connect network providers and configuration management networks. There are two network interface controllers available, and you can choose either or both, depending on your needs.

Management network

This is the default networking method for a VM, using the eth0 interface. The network is configured using the Canal CNI plug-in. Because there is no DHCP server, a VM using this network can change its IP address after a reboot, and it is only accessible from within the cluster nodes.

Secondary network

The secondary network controller uses the Multus and bridge CNI plug-ins to implement a customized Layer 2 bridge VLAN network. VMs are connected to the host network via a Linux bridge and are assigned IPv4 addresses.

VMs with these IPv4 addresses can be accessed from both internal and external networks through the physical switch.

When to use Harvester

There are multiple use cases for Harvester. The following are some examples:

Host management

The Harvester dashboard supports viewing infrastructure nodes from the host page. Because Harvester builds HCI on Kubernetes, features such as the live migration described above are possible, and Kubernetes provides fault tolerance by keeping the workloads on other nodes running if one node goes down.

VM management

Harvester offers flexible VM management, with the ability to create Windows or Linux VMs easily and quickly. You can mount volumes to your VM if needed and switch between the management network and a secondary network, according to your strategy.

As noted above, live migration, backups, and cloud-init help manage VM infrastructure.

Monitoring

Harvester has built-in monitoring integration with Prometheus and Grafana, which install automatically during setup. You can observe CPU, memory and storage metrics, as well as more detailed metrics such as CPU utilization, load average, network I/O and traffic. Metrics are available both at the host level and for specific VMs.

These stats help ensure your cluster is healthy and provide valuable details when troubleshooting your hosts or machines. You can also pop out the Grafana dashboard for more detailed metrics.

Conclusion

Harvester is the HCI solution you need to manage and improve your hyperconverged infrastructure. The open source tool provides storage, networking and compute in a single pane of glass that’s scalable, reliable and easy to use.

Harvester is the latest innovation brought to you by SUSE, the open source leader that also provides enterprise solutions such as Rancher and K3s, designed to help organizations more easily achieve digital transformation.

Get started

For more on Harvester or to get started, check the official documentation.

The Container Age Has Security-To-Go as Part of its Supply Chain

Wednesday, 20 July, 2022

The microservice deployment and management stack is proving very effective for companies taking advantage of the cloud’s capabilities to scale and adapt. Containers (often alongside Kubernetes) sit on top of this elastic fabric with agile DevOps and CI/CD workflows that transition code from development to production in short timescales.

A significant problem with this speed of transition – from home-lab environments to production in just a few years – is that container technology is generally DevOps-focused, not SecOps-focused. The collegiate atmosphere of trust in the broader development community has not so much turned a blind eye to bad actors as simply not considered the implications of malevolent players’ potential activities.

Last December’s critical-severity Log4Shell vulnerability is a good example. It allows attackers to remotely execute malicious code on systems running certain versions of the Log4j2 Java logging framework. In less than a week, there were almost 1.3 million attempts to exploit the flaw, reaching over 44% of corporate networks globally.

Today’s cyber-attacks are becoming increasingly sophisticated. Attackers need only a single vulnerability to exploit, and even the most fortified of systems can be compromised. Forrester’s research found that, in 2021, 35% of attacks exploited software vulnerabilities, 32% obtained unauthorized access through supply chains and third parties, and 32% used an application exploit.

Traditional security practices focusing on exceptions, deny lists, signatures and vulnerability scanning are insufficient, as they tend to be reactive, focus only on known issues and are unable to scale. In addition, security tools that work on the premise of a predefined security perimeter are not suitable for containerized applications. The speed and ease of creating virtual networks, hundreds of container pods with ephemeral IP addresses, and Kubernetes clusters distributed across data center, cloud and edge environments blur the notion of a single security perimeter.

Instead, we must adopt a proactive approach and implement zero trust security controls. This means treating all activity as untrusted by default, then explicitly declaring what is acceptable and granting your containerized applications the fewest privileges possible. Anything that deviates from what is defined as acceptable must be blocked. In essence, you are defining multiple micro security perimeters for your containerized applications.

The emergence of DevSecOps roles in many workplaces (a CAGR of over 24% is expected in the sector through 2028) shows that many companies are aware of the good potential of combining security with the CI/CD pipeline. By shifting security left, all the way to the earliest stage of the pipeline, you can dramatically improve efficiency, decrease cost and produce secure applications.

Right from when container technology emerged, best-of-breed security platforms designed natively for cloud native applications started to appear. SUSE NeuVector is one of the best known among these. Its lightweight presence in Kubernetes environments protects applications throughout the container lifecycle, across development, QA and production environments. With NeuVector, companies can easily use policy-as-code to create zero-trust container environments that are actively scanned for vulnerabilities. It can inspect your container traffic in real time to identify attacks, protect sensitive data and verify application access to minimize the attack surface. The plus side for developers is that this protection can be enabled across the CI/CD pipeline with relatively trivial changes to configuration files; once that is in place, development proceeds as usual.

To deliver secure digital experiences and gain customer trust, companies must pursue the highest standards in both development and security practice and be prepared for all types of threat vectors. In cloud native development cycles, security must be a concern right from the onset, but it needn’t hinder the agility that cloud native technology offers. Cybersecurity platforms such as NeuVector create a self-learning, zero-trust environment that makes supply chain security simple, from Dev to Production.

Learn more about SUSE NeuVector.

About the Author

Vishal Ghariwala is the Chief Technology Officer for the APJ and Greater China regions for SUSE, a global leader in true open source solutions. In this capacity, he engages with customer and partner executives across the region, and is responsible for growing SUSE’s mindshare by being the executive technical voice to the market, press, and analysts. He also has a global charter with the SUSE Office of the CTO to assess relevant industry, market and technology trends and identify opportunities aligned with the company’s strategy.

Prior to joining SUSE, Vishal was the Director for Cloud Native Applications at Red Hat where he led a team of senior technologists responsible for driving the growth and adoption of the Red Hat OpenShift, API Management, Integration and Business Automation portfolios across the Asia Pacific region.

Vishal has over 20 years of experience in the Software industry and holds a Bachelor’s Degree in Electrical and Electronic Engineering from the Nanyang Technological University in Singapore.

Vishal is here on LinkedIn: https://www.linkedin.com/in/vishalghariwala/