How to Enforce Egress Container Security Policies in Kubernetes, OpenShift, and Istio

Thursday, 30 July, 2020

Prevent Data Breaches and Unauthorized External Connections from Container Clusters with Egress Control

By Gary Duan

While more and more applications are moving to a microservices, container-based architecture, some legacy applications cannot be containerized. When containers are deployed with Kubernetes or Red Hat OpenShift, external egress from the cluster to these applications needs to be secured with egress container security policies. In addition, modern container applications frequently require API access to services running outside the cluster, even on the internet, and updates of open source applications and operating systems may also require internet access. These modern and legacy endpoints include SaaS-based API services, internal database servers, and applications developed with .NET frameworks. The cost and risk of migrating such applications to a microservices architecture are so high that many enterprises run a mixed environment where new containerized applications and legacy applications operate in parallel.

Application segmentation is a technique to apply access control policies between different services to reduce their attack surface. It is a well-accepted practice for applications running in virtualized environments. In a mixed environment, containerized applications need access to the internet and/or legacy servers. DevOps and security teams want to define egress control policies to limit the exposure of external connections to the internet and legacy applications. In Kubernetes, this can be achieved by Egress Rules of the Network Policy feature.

This article discusses several implementations of egress control policy in Kubernetes and Red Hat OpenShift and introduces the NeuVector approach for overcoming limitations with basic Network Policy configurations.

Egress Control for Container Security with Network Plugins

In Kubernetes 1.8+, an egress policy can be defined like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: app
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.10.0.0/16
    ports:
    - protocol: TCP 
      port: 5432

This example defines a network policy that allows containers with the label ‘role=app’ to access TCP port 5432 on all servers in the subnet 10.10.0.0/16. All other egress connections will be denied.

This achieves what we want to some extent, but not all network plugins support this feature. Calico, Canal, and Weave Net are some of the network plugins that support it commercially.

However, using a subnet address and port to define external services is not ideal. If these services are PostgreSQL database clusters, you really want to specify only those database instances by their DNS names, instead of exposing a whole range of IP addresses. Port-based definitions are also brittle in a dynamic, cloud-native world. A better approach is to use application protocols, such as ‘postgresql,’ so that only network connections speaking the proper PostgreSQL protocol can reach these servers.

Here is a summary of the limitations of egress control with Network Policy.

  • Allow rules only; no deny rules for specific IPs (only an implicit deny-all)
  • No concept of ‘external to the cluster’: egress rules are namespace-scoped and lump destinations outside the cluster together with other namespaces
  • No rule prioritization or ordering to adjust which firewall rules are hit first
  • No hostname (DNS name) support; IP addresses only

Egress Controls with Red Hat OpenShift

OpenShift enhances the egress controls of native Kubernetes Network Policy by defining a custom resource that implements an egress firewall for traffic leaving the cluster. This CRD is called EgressNetworkPolicy and is deployed by default in OpenShift.

In OpenShift you can create one egress control policy per namespace, except for the default namespace. All egress rules for a namespace must be declared in this single policy. Here’s an example that allows egress to google.com, cnn.com and others but denies access to yahoo.com.
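A minimal sketch of such an EgressNetworkPolicy might look like this (the namespace and exact hostnames are illustrative; rules are evaluated top to bottom):

```yaml
apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: default
  namespace: myproject
spec:
  egress:
  # Allowed destinations, matched by DNS name
  - type: Allow
    to:
      dnsName: www.google.com
  - type: Allow
    to:
      dnsName: www.cnn.com
  # Explicitly deny yahoo.com
  - type: Deny
    to:
      dnsName: www.yahoo.com
  # Deny everything else leaving the cluster
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0
```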

OpenShift egress control supports DNS names, IP addresses, rule ordering, and deny rules. Its limitations include:

  • Does not apply to routes (the routers used for external access), so egress connections made through routes bypass egress controls
  • Namespace scope only, with no pod selectors to further refine egress controls
  • No application protocol verification (e.g. mysql, …) to further secure connections by Layer 7 application protocol (this is also a limitation of Network Policy)
  • Limited rule management: rules are evaluated in the order they appear in the YAML file and cannot be prioritized against other global rules

Egress Controls with Istio

Istio is an open-source project that creates a service mesh among microservices and layers onto Kubernetes or OpenShift deployments. It does this by “deploying a sidecar proxy throughout your environment”. The online documentation gives this example of an egress policy:

apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: googleapis
  namespace: default
spec:
  destination:
      service: "*.googleapis.com"
  ports:
      - port: 443
        protocol: https

This rule allows containers in the ‘default’ namespace to access subdomains of googleapis.com using the HTTPS protocol on port 443.

The Istio documentation cites two limitations:

  1. Because HTTPS connections are encrypted, the application has to be modified to use the HTTP protocol so that Istio can scan the payload.
  2. This is “not a security feature“, because the Host header can be faked.

To understand these limitations, we should first examine what an HTTP request looks like.

GET /examples/apis.html HTTP/1.1
Host: www.googleapis.com
User-Agent: curl/7.35.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate

In the above example, you can see that when you type,

curl http://www.googleapis.com/examples/apis.html

the first portion of the URL, ‘www.googleapis.com’, does not appear in the first line of the HTTP request, which contains only the path. Instead, it shows up as the Host header. This header is where Istio looks to match the domain defined in its network policy. If HTTPS is used, everything here is transmitted encrypted, so Istio cannot see it. A compromised container can replace the header and trick Istio into allowing traffic to a malicious IP.

In fact, a third limitation, which the documentation doesn’t mention, is that this approach only works with HTTP. We cannot define a policy to limit access to our PostgreSQL database cluster.

Istio Egress Gateway

An alternative method of egress control in Istio is to funnel all egress traffic through an egress gateway running within the cluster. While this is more secure than egress control through the sidecar proxy alone, it can be bypassed and is prone to configuration errors. A caution in the Istio docs reads: “Istio cannot securely enforce that all egress traffic actually flows through the egress gateway…”

This means that in order to make sure all egress traffic flows through the gateway, additional controls are required. For example, Kubernetes network policy, as discussed first, could be used to block egress except through the gateway. This adds complexity and potential for misconfiguration that could lead to connections leaking out of the cluster without the admin team knowing about it.
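As a sketch, a Kubernetes Network Policy of the kind discussed earlier could restrict pods to talking only to the egress gateway. The istio-system namespace label and the istio: egressgateway pod label below are assumptions that depend on how Istio was installed, and DNS must also be allowed so destinations can still be resolved:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-via-gateway-only
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  # Allow DNS lookups (assumes cluster DNS on UDP port 53)
  - ports:
    - protocol: UDP
      port: 53
  # Allow traffic only to the Istio egress gateway pods
  - to:
    - namespaceSelector:
        matchLabels:
          name: istio-system   # assumes the namespace carries this label
      podSelector:
        matchLabels:
          istio: egressgateway
```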

Egress Controls for Container Security with NeuVector

Once the NeuVector Enforcer container is deployed on a container host, it automatically starts monitoring the container network connections on that host using deep packet inspection (DPI) technology. DPI enables NeuVector to discover attacks in the network. It can also be used to identify applications and enforce network policy at the application level when deployed as a complete Kubernetes security solution.

Once a network policy with the whitelisted DNS name(s) is pushed to the Enforcer, the Enforcer’s data path process starts to silently inspect the DNS requests issued by the containers. The Enforcer does not actively send any DNS request, so no additional network overhead is introduced by NeuVector.  It parses DNS requests and responses, and correlates the resolved IP with the network connections made by the specified containers. Based on the policy defined, the Enforcer can allow, alert or deny the network connections.

This approach works on both unencrypted and encrypted connections, and it will not be tricked by any faked headers. It can be used to enforce policies on any protocol, not just HTTP or HTTPS. More importantly, because of the DPI technology, the Enforcer can inspect the connections and make sure only the proper PostgreSQL protocol is used to access the PostgreSQL databases.

Using DPI/DLP to Enforce Egress Control Through a Traditional Proxy

Another, more advanced use case is to route traffic through a proxy, such as a Squid proxy running outside the cluster. This presents the challenge of distinguishing connections that should be allowed from those that should be blocked based on the destination of the connection, while enforcing the policy at the source container/pod.

In the example below, connections to external resources at morningstar.com should be allowed, while oracle.com should be blocked. However, all connections must go through a Squid proxy running outside the cluster.

The challenge is that we want to enforce egress control from within the cluster, at the source container/pod, so that we have the most flexibility to define which container sources should be allowed to access which egress destinations.

To accomplish this, we can use NeuVector’s deep packet inspection (DPI) capability and data loss prevention (DLP) feature to inspect the HTTP headers in the outbound connection, allowing morningstar.com while blocking oracle.com.

Secure, Scalable and Flexible Egress Control for Containers in Kubernetes and OpenShift

Kubernetes and OpenShift run-time container security should include control of ingress and egress connections to legacy and API-based services. While there are basic protections built into Kubernetes, OpenShift, and Istio, business-critical applications need the enhanced security provided by Layer 7 DPI filtering to enforce egress policies based on application protocols and DNS names.

Egress control policies, like other run-time security policies, should support automation and integration through resources like the Kubernetes Custom Resource Definition (CRD), which enables DevOps teams to declare policies through Security Policy as Code.

Watch the webinar recording of the topics in this post, including hands-on demos.

Global Energy Leader Transforms Technology and Culture with Kubernetes

Wednesday, 29 July, 2020

“When I look at the most advanced digital organizations such as Google, Netflix, Amazon and Facebook, they’re running service-orientated architectures, with estates of microservices, completely decoupled from one another but managed centrally. We aspire to reach this point and Rancher is an important part of the journey.” Anthony Andrades, Head of Global Infrastructure Strategy, Schneider Electric

When your company is born in the first Industrial Revolution, how do you stay relevant in the digital age? For Schneider Electric, the answer is continuous innovation, driven by its heritage in the electricity market. Founded in the 1880s, Schneider Electric is a leading provider of energy and automation digital solutions for efficiency and sustainability. Believing access to energy and digital services is a basic human right, Schneider Electric creates integrated solutions for homes, commercial and municipal buildings, data centers and industrial infrastructure. By putting efficiency and sustainability at the heart of the portfolio, the company helps consumers and businesses make the most of their energy resources.

A Digital Transformation Turning Point

Today, Schneider Electric is at a turning point – embarking on a significant transformation by modernizing its legacy systems to create a cluster of cloud-native microservices to become more agile and innovative. The company started its move to the cloud in 2013, with a couple of business-driven projects running on Amazon Web Services (AWS). By 2016, their AWS footprint was global, and an infrastructure migration was underway. At the same time, they were experimenting with Kubernetes but faced some challenges with access control.

In 2018, the company carried out a successful proof of concept (PoC) with Rancher Labs and security partner Aqua. This resulted in deploying Rancher on top of Kubernetes to provide access control, identity management and globalized performance metrics. A year later, Schneider chose Rancher to underpin its container management platform, deploying it on 20 nodes.

The company has been undergoing technical evolution for 25 years, in which they built and deployed thousands of separate services and applications running on Windows Server or Red Hat. Now these services must be re-engineered or rebuilt before migrating to the cloud – a process that they expect to take five years. In 2019, the team started the painstaking process of analyzing the entire estate of applications, categorizing each one according to the most appropriate and efficient way to modernize and migrate.

Successful Migration to Rancher

Over the last year, the team has successfully migrated four applications, which are now managed in 40 nodes with Rancher. With Rancher’s intuitive interface, the team can quickly check the status of clusters without having to manually check performance, workload status or resource usage. The team appreciates that they don’t need to worry about the underlying infrastructure.

Read our case study to hear more about Schneider Electric’s technical and cultural transformation and why their relationship with Rancher is critical for success.

The Power of Innovation

Tuesday, 21 July, 2020
Learn more about Rancher’s innovative approach to Kubernetes management

CEO and Co-Founder Sheng Liang has a saying about how we approach open source at Rancher Labs: “Let a thousand flowers bloom.” When we set out to build something, we don’t know if it will turn into a successful product, spark another product idea or be a good idea that doesn’t get traction. The joy is in the journey.

Take K3s, our lightweight Kubernetes distribution. We didn’t start out developing K3s – it grew organically out of a project called Rio. K3s was inspired by the insight and passion of our developers who saw a need for a Kubernetes distribution for IoT and the edge. These forward thinkers were right. According to Gartner, 75 percent of enterprise data will be created and processed outside of data centers and cloud deployments by 2025.

CRN Recognizes Rancher Labs for Innovation

K3s has influenced other innovative products in the open source community, such as k3sup and k3d. We’re proud that K3s has gained a loyal following and that it continues to win accolades. CRN recently included K3s in its roundup of The 10 Coolest Open-Source Software Tools of 2020 (So Far) due to its small binary (under 40 MB) that reduces the dependencies and steps needed to install, run and auto-update a production Kubernetes cluster. CRN recognized K3s and other tools on the list, such as Helm and Envoy, for leading the industry toward greater adoption of agile development and DevOps methods, AI, cloud-native architecture and advanced security.

No doubt the buzz around K3s influenced CRN to include Rancher Labs in another list: the 2020 Emerging Vendors. Honoring “rising technology suppliers that exhibit great promise in shaping the future success of the channel with their dedication to innovation,” CRN’s list provides a resource for solution providers in search of the latest technologies.

At Rancher Labs, partnerships are crucial to our business. We rely on more than 200 solution providers worldwide who deliver customer offerings around our products, including Rancher, our enterprise platform for Kubernetes management. These partners are also innovators, inspired to help their customers do things better.

Being an innovator means knowing you can’t go it alone – and our ecosystem partners are critical to that. Rancher includes a global catalog of applications that our users can easily integrate into their environments to maximize productivity and reliability.

At the end of the day, we want to make our users’ lives better by making Kubernetes clusters easier to deploy and manage so that they can focus on the business at hand. For most organizations, that’s what innovation is: finding better ways to solve problems.

The idea of letting a thousand flowers bloom means being able to evolve our technologies and take the best parts of the things we’ve developed. Sometimes it means admitting that a technology you love isn’t going to make it. It’s having insight into what technology will best solve customer problems and driving adoption of that technology. For Rancher Labs, embracing open source means embracing the best of innovation – no matter where it comes from. That’s why we’re 100 percent open source with no vendor lock-in.

As we look to the future of Rancher Labs, one thing is sure. Innovation will continue to be a driving force in everything we do. We’ll continue to plant the seeds of innovation and watch them grow.

Learn more about Rancher’s innovative approach to Kubernetes management

SUSE to Acquire Rancher Labs, Creating World’s Largest Organization Exclusively Dedicated to Powering Digital Transformation With Open Source and Cloud Native Solutions

Wednesday, 8 July, 2020

Today, SUSE embarks on a new chapter in our incredible 28-year journey. I am thrilled to share that SUSE has signed a definitive agreement to acquire Rancher Labs, a market-leading enterprise Kubernetes management vendor based in Cupertino, California.

This is an incredible moment for SUSE and for our industry, as two open source leaders join forces to create the world’s largest independent organization dedicated exclusively to powering digital transformation with open source and cloud native solutions. 

I want to share my perspective on why we chose Rancher, and how this acquisition will benefit our customers, partners, and communities.  

Why Rancher? 
Rancher provides a market-leading Enterprise Kubernetes Management platform and enables computing everywhere, with seamless deployment of containerized workloads from the core to the edge to the cloud. Like SUSE, Rancher is 100% open source and equally as passionate as SUSE about true open source innovation, community empowerment, and customer success. SUSE and Rancher share the same goal – happy and satisfied customers. 

By combining Rancher’s strength in Containers with our strengths in Enterprise Linux, Edge Computing, and AI, we will redefine and jointly disrupt the market to help customers accelerate their digital transformation journeys with true open source solutions. There is no doubt in my mind, with our first acquisition as an independent company, we are paving the way for two leading companies with so many complementary strengths to become even stronger together. 

What does this mean for SUSE and Rancher’s customers and partners? 
Following the receipt of all necessary regulatory and antitrust approvals and the acquisition’s closing, customers and partners from both companies will benefit from the following:  

  • A much broader best-in-class product portfolio and the vastly increased innovation power and global presence of the combined companies 
  • Rancher’s 100% open source solutions means we will continue to meet our customers where they are on their digital transformation journey – defined by what success looks like to them  while continuing our promise of no vendor lock-in. 
  • Future versions of SUSE’s CaaS Platform will be based on the innovative capabilities provided by Rancher, and we’ll ensure efforts to upgrade CaaS Platform are as smooth as possible. 
  • This combination is also a huge win for SUSE’s global partner ecosystem who will now be able to provide an even broader range of solutions to their customers with both Rancher and joint solutions.  
  • SUSE and Rancher’s developers will jointly collaborate on innovative open source solutions and projects to accelerate the pace of innovation and bring better choice to the market benefiting both customers and partners, and our communities 

I am joined in our philosophy of growth and innovation by Rancher and its CEO – you can read Sheng Liang’s blog for his take on this acquisition.

What does this mean for the open source community?  

From day one, both SUSE and Rancher have had a shared commitment to providing 100% true open source innovations to their customers around the world. Just as SUSE was founded by the Power of Many, Rancher’s heritage is deeply rooted in the ethos of the open source community.  

SUSE’s commitment to the open source community remains as strong today as it was 28 years ago Rancher will continue its strategy to be open and support multiple Kubernetes distributions and operating systems including any Cloud Native Computing Foundation-certified Kubernetes distribution including Google GKE, Amazon EKS, and Microsoft AKS, as well as projects like Gardener.  

What are the next steps? 
We anticipate the acquisition will close before the end of October 2020, subject to customary closing conditions, including the receipt of regulatory approvals. During the regulatory period, that is, prior to closing, SUSE and Rancher will continue to operate independently. Until closing, it is business as usual for SUSE and Rancher. After closing, we will be able to share more details on our product integration plans and various partner and community program plans.  

If you have any questions, I encourage you to reach out to your SUSE account representative.

I know SUSE’s next chapter, which will include Rancherwill drive an abundance of value for our customers, partners, and communities. Together we are stronger, and we will be unstoppable.  

Be sure to check out what industry analysts are saying in this IDC report and 451 Research report.

SUSE Enters Into Definitive Agreement to Acquire Rancher Labs

Wednesday, 8 July, 2020

Read our free white paper: How to Build a Kubernetes Strategy

I’m excited to announce that Rancher has signed a definitive agreement to be acquired by SUSE. Rancher is the most widely used enterprise Kubernetes platform. SUSE is the largest independent open source software company and a leader in enterprise Linux. By combining Rancher and SUSE, we not only gain massive engineering resources to further strengthen our market-leading product, we are also able to preserve our unique 100% open source business model.

We started Rancher 6 years ago to develop the next generation enterprise computing platform built on a relatively new technology called containers. We could not have anticipated the tremendous growth and popularity of the Kubernetes technology. Rancher was able to thrive in this exciting and highly dynamic market because we developed innovative products loved by end users. Grass-roots adoption coupled with a unique enterprise-grade support subscription led to our hypergrowth. I want to thank everyone who has used our products over these last six years for your support, and for helping us build an amazing community of users.

After the acquisition closes later this year, I will lead the combined engineering and innovation organization at SUSE. You can expect an accelerated pace of product innovation. And given SUSE’s 28-year history building a highly successful open source business, our commitment to open source will remain strong.

The acquisition is great for Rancher customers and partners. At Rancher we take pride in our industry-leading customer satisfaction with an NPS score of over 80. SUSE’s global reach and enterprise focus will further strengthen our commitment to customers who rely on Rancher to power mission-critical workloads. Likewise, SUSE’s strong ecosystem will greatly accelerate Rancher’s on-going efforts to transform how organizations adopt cloud native technology.

This acquisition is a launch point for further growth of Rancher. I feel as invigorated as on day one about the industry, the technology, and our business. I am so proud of our team and the work they have done these last six years, and I look forward to continuing to work with our users, customers, partners, and fellow Ranchers to build a truly amazing business by leveraging the best parts of Rancher and SUSE. Rancher and SUSE together will be the enterprise computing company that transforms our industry.

Read our free white paper: How to Build a Kubernetes Strategy


How to Protect Secrets in Containers Using DPI and DLP

Thursday, 4 June, 2020

Every cloud application and service utilizes a key (secret) to identify and authorize communications. Secrets are also used to authorize access to containerized applications that require a login. These credentials are widely used by public-facing services as well as internal and external REST APIs everywhere. Examples include the AWS IAM access key, Google API access token, Twitter API key, LinkedIn API ID, Facebook access token, Flickr API access token, OAuth client secrets, and the list goes on.

Even in a well-configured service environment, some of these secrets will be authorized to access sensitive data because the service requires it to perform its functions. Examples are the ability to read database records, create new files, or even delete data or files. It should be obvious that these secrets must be carefully managed and stored, because any leakage or misuse can cause damaging data breaches or other security issues. Here are some real incidents involving compromised secrets:

In September 2017, DXC’s AWS private keys were compromised through a public GitHub space, and ‘unknown persons’ spun up 244 VMs at a cost of $64k in just a couple of days. “Various secure variables (cryptographic keys that allowed access to DXC procured Amazon Web Services resources) were hardcoded into a piece of work being shared between multiple teams and with the project architect.” Then on September 27, a member of the technical team created a personal space on the public GitHub, and the code was loaded to this unsecured repository, which allowed individuals as yet unknown to access and use it. “Over a period of four days, the private keys were used to start 244 AWS virtual machines. The cost incurred was $64,000 (£48,799).”

In 2016, hackers found that an Uber engineer had mistakenly left credentials in a GitHub repository. This secret could be used to access an Amazon web server owned by Uber. The hackers accessed the server and downloaded more than a dozen files, including a backup file containing millions of Uber customer data records, and demanded a six-figure payout. After the breach was discovered, not only did Uber pay $148 million in a data breach settlement, but it also agreed to adopt a comprehensive security auditing practice. This extended into all aspects of the company’s operations, including the development team, DevOps, IT and security groups. Each function and process had to be audited and adjusted for compliance. Uber’s business was impacted and its IPO was delayed.

Solutions for Secrets Management

Secrets management, and security solutions to provide it, are not new. Various solutions are available on the market, including a number of open source tools. For example:

  • Open source git-secrets from AWSLabs scans commits and merges to prevent a developer from committing passwords and other sensitive information to a git repository.
  • detect-secrets is a tool to detect and prevent secrets in code. It was built to prevent new secrets from entering the code base, detect when such preventions are explicitly bypassed, and provide a checklist of secrets to roll and migrate to more secure storage.
  • HashiCorp Vault is one solution to secure, store and tightly control access to tokens, passwords, certificates, encryption keys for protecting secrets and other sensitive data using a UI, CLI, or HTTP API.
  • Kubernetes as the de facto container orchestration platform also has built-in capabilities of secrets management. Kubernetes Secrets let you store and manage sensitive information, such as passwords, OAuth tokens, and ssh keys. Storing confidential information in a Secret is safer and more flexible than putting it verbatim in a Pod definition or in a container image.
  • In Red Hat OpenShift, which is a popular enterprise-grade Kubernetes platform, secrets management functions are available.
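To illustrate the built-in Kubernetes mechanism mentioned above, a minimal Secret manifest looks like this (the name and values are made up for the example; the data fields are base64-encoded):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  # base64 of "admin" and "s3cr3t" -- example values only
  username: YWRtaW4=
  password: czNjcjN0
```

A pod can then consume the secret as environment variables or as a mounted volume, rather than baking credentials into the container image.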

Secrets Auditing Solutions

In general, most secrets auditing security tools today help find secrets in source code repositories, container images, or the CI/CD pipeline. Container platforms and orchestrators help encrypt, distribute and manage secrets when services are deployed. Secrets management is part of the control plane, and secrets are secured together with system containers/services by default. These measures are typically secure enough to serve standard secrets protection needs.

But for highly sensitive workloads, for example Uber’s customer database backend service, standard image scanning, data encryption, secured secret stores and secret distribution are not enough. For these use cases, additional “defense in depth” protection is needed.

Using NeuVector to Audit and Find Secrets

Here is an example of how NeuVector can help to find secrets in running pods to protect mission critical secrets in the runtime production environment:

  • NeuVector is fully integrated with Kubernetes platforms and leverages the platform to manage and distribute the application secrets, so no third-party plugins or deployments are needed.
  • In Policy -> Group, define a “CUSTOM CHECK” for the web server application that accesses the backend database and storage services. This web server reads and writes sensitive data by design. Add a custom check script that scans the web server pods for any AWS secret access keys.
  • As soon as the new custom check is saved, NeuVector automatically scans the running containers in this group. It reports and takes action accordingly when an AWS access secret is detected.
  • At the same time, a security incident is generated and logged with all the detailed information needed to help fix the source of the violation.

NeuVector will scan for secrets at run time and at scale, even as services scale up or down. The NeuVector platform automatically discovers any changes and applies all security checks. Response rules can also be applied to trigger follow-up actions when secrets are found in containers.

Note: This extension capability can be used to scan for any regex pattern-based secrets, for example AWS, Facebook, LinkedIn, Twitter, Foursquare, etc. Please contact NeuVector support if you are interested in various pattern examples.

Using NeuVector DPI/DLP to Inspect Network Transmission of Secrets

In addition to scanning for secrets, NeuVector can also apply unique container DPI/DLP (deep packet inspection/data leakage prevention) technology to monitor the secrets in use. Here is a detailed example of how to do this:

  • Define an AWS secret key as a NeuVector DLP sensor.
  • Apply this sensor to the services that require AWS access. For example, this Nginx service has access to AWS services and uses AWS access keys.
  • Just like that, the DLP rule is applied and NeuVector starts monitoring the Nginx network communications. Whenever an AWS secret is transferred in network packets, NeuVector triggers an alarm and then takes the appropriate action.

Note: the regex patterns in these examples are for demo purposes only. Like similar IDS/IPS or scanning solutions, pattern-based scanning may generate false positives. NeuVector recommends designing regex patterns that match the real keys as closely as possible, for example by including some digits from the actual secrets or keywords found around the keys, which increases accuracy and efficiency.
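To illustrate that tuning advice, the sketch below contrasts a loose 40-character pattern with one anchored by a nearby context keyword. Both regexes are hypothetical examples, not NeuVector’s DLP sensor syntax:

```python
import re

# A bare 40-character base64-style token is prone to false positives
# (checksums, hashes, encoded payloads all match). Anchoring the pattern
# with a nearby keyword such as "aws" or "secret" cuts the noise.
LOOSE = re.compile(r"[A-Za-z0-9/+=]{40}")
TUNED = re.compile(r"(?i)(aws|secret)[\w\s=:'\"-]{0,20}[A-Za-z0-9/+=]{40}")

def is_probable_secret(line):
    """Flag a line only when the 40-char token appears in a secrets context."""
    return bool(TUNED.search(line))
```

Where fragments of the real key are known, embedding them literally in the pattern narrows the match further, at the cost of maintaining the sensor when keys rotate.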

NeuVector’s behavior-based security protection is the foundation for protecting against known and unknown security threats at runtime. On top of those protections, these DPI/DLP capabilities provide the industry’s only “defense in depth” protection for mission critical services.

When combined with the existing platform or third-party secrets management security tools, secrets can be secured during the entire container lifecycle.

The Growth of Cloud and Container Applications Requires Secrets Protection

Since March of 2020, the coronavirus pandemic and “shelter in place” orders have driven a 30x growth in daily Zoom meetings. As the popularity of Zoom and other remote work applications grows, security practices are constantly being challenged. All these cloud workloads and services require a strong “defense in depth” security platform to protect production environments. And only NeuVector is able to inspect and block network transmissions that contain secrets which thieves are trying to steal.

Never let security issues slow down your business growth!

Your chance to learn “All You Need to Know” about SAP High Availability

Monday, 1 June, 2020

As businesses adapt to the current economic conditions, it’s increasingly important that IT organizations minimize downtime of mission-critical SAP systems. Unplanned outages and manual failover processes are disruptive, labor-intensive, and time-consuming. Every minute that supply chain, finance, or BPM operations are not available results in lost productivity for lines of business and more pressure on the IT staff. SUSE has solutions to help you maintain your critical on-site operations and support remote workers in times of crisis.

On June 17 at 11:00 am US Eastern Time / 5:00 pm CET, I will be hosting the webinar “SAP High Availability on SUSE: All You Need to Know”. This technical webinar is for anyone who is currently running or planning to implement SAP NetWeaver or S/4HANA infrastructures and wants to reduce system downtime. Topics that the speakers will cover include:

  • Architectures of the main SAP high availability scenarios
  • High availability with SAP S/4HANA enqueue
  • SAP HANA System Replication
  • Options and deployment of high availability for SAP NetWeaver, SAP HANA and S/4HANA
  • Cluster tooling

The presenters for this 1-hour webinar are:

Fabian Herschel, SUSE Distinguished Architect, SAP LinuxLab

Lars Pinne, SUSE Information Systems Architect, SAP LinuxLab

This is a great opportunity for you to get the answers to your questions about building and maintaining a resilient SAP infrastructure.

To register for this webinar, click here and be sure to invite your colleagues.

If you’re interested in these topics but aren’t available for the live session, register anyway. You will receive a follow-up email with a link to the replay.

Please follow me on Twitter @MichaelDTabron.

Maintaining SUSE Linux support during the pandemic

Friday, 29 May, 2020

 

The global pandemic and resulting government shelter-in-place or quarantine measures to limit the spread of the COVID-19 virus have shifted the priorities of IT organizations away from non-critical maintenance and upgrades. Unfortunately, the planned end of General Support date for SUSE Linux Enterprise Server (SLES) 12 Service Pack 4 happens to be in the middle of this crisis. At SUSE, we understand the strain the current environment is putting on your IT operations so we have an option to help you keep your systems supported and secure.

General Support for SLES 12 SP4 ends on June 30, 2020. Normally, organizations would either upgrade to a SLES service pack/version that still has full support or purchase up to 3 years of Long Term Service Pack Support (LTSS). Available today, organizations with current subscriptions of SUSE Linux Enterprise Server 12 SP4 are eligible to receive continued access to patches and updates in the LTSS repositories free of charge for 3 months starting July 1, 2020, through September 30, 2020. Platforms included in this offer are x86-64 and IBM Z/LinuxOne. This gives IT teams more time to complete upgrade plans and evaluations at a time when staffing is limited and the focus is on keeping the business operational.

To take advantage of this option, you must contact your SUSE sales representative, reseller, or platform partner so that SUSE can update your subscription information. To see a complete list of General Support and LTSS dates go to www.suse.com/lifecycle.

As always, you can follow me on Twitter @MichaelDTabron.

 

Leading A New Learning Landscape

Thursday, 28 May, 2020

The last few months have forced everyone to look differently at how we communicate, educate, socialize and do business. The way people learn and teach might never quite be the same, so we are here to support any social changes and physical adjustments our societies force us to take. We wanted to highlight a few ways SUSE is already adapting and providing new virtual learning experiences.

SUSECON Digital

Last week marked the launch of SUSECON Digital, a newly re-imagined event featuring technical seminars and keynote talks from industry leaders such as Accenture, Microsoft and SAP. One of the most inspiring messages actually came from our CEO, Melissa Di Donato, about supporting academia during these challenging times.

University of Erfurt: One of the oldest universities in Germany wanted to digitize its massive collection of educational resources for remote student use. It chose SUSE Enterprise Storage to expand internet-based access to more than 500 databases and 9000 online journals and digital publications.

Read the success story: https://www.suse.com/c/success/university-of-erfurt/

Going digital with SUSECON also opens the door for everyone to participate. Whether you are an open source practitioner, a student, or just want to know what this Open Source world is all about, we encourage you to see for yourself all SUSECON has to offer.

While we don’t have the luxury of enjoying a drink together in person, you can still network directly with hundreds of experienced industry professionals, play games and even take a much needed coffee break. SUSECON will add an additional 30 sessions per week on May 27, June 3 and June 10, so be sure to return weekly to see what’s new! All of the content in the SUSECON experience will remain available through September 19.

For more information visit: SUSECON Digital 2020.

Developer Community

Kubernetes, SUSE, Cloud Foundry

SUSE recently launched a new developer community, a cool new place for people to build, test and run apps on a fully automated cloud platform. We wanted to make it easy for developers to benefit from the best practices that have evolved from cloud native application delivery, and to provide a place to create applications that can run anywhere.

One of the best “perks” of this community is free access to the Cloud Application Platform Developer Sandbox. In the sandbox, you are able to build practically… anything! It comes with a “Getting Started Guide” that walks you through the basics of the Cloud Foundry command line interface and the browser-based graphical user interface, Stratos. Whether you are a student interested in exploring what life is like as a developer or a veteran in the game, the sandbox can be a place to experiment with random ideas and see if they actually work.

Hear from the experts themselves (Webinar): https://www.brighttalk.com/webcast/11477/401277

For more information visit: https://www.explore.suse.dev/

Academic Community

As you would imagine, demand has gone through the roof for virtual education and alternative learning solutions. One of the best parts about SUSE’s Academic Program is that it’s already 100% digital. The SUSE Academic Program has always been open and freely accessible from any device, anywhere and at any time. A simple internet connection gives students, faculty and academics access to our library of online technical training courses, virtual lab environments, curriculum and more. If you haven’t already, we encourage you to check it out and become part of this community at no cost.

For more information visit: https://www.suse.com/academic/

Building Communities

It’s during these times where building each other up is more important than ever. As our societies prepare to enter a “new normal”, all with different roadmaps, it’s critical we support each other as one greater global community.

As the largest independent Open Source software organization in the world, it is our corporate social responsibility to share our expertise, technology and innovations with the academic community. We look forward to embracing these new challenges and blazing a new path to learning for the future.

Help Us Help You

While no one yet knows for certain what the future holds, keeping an open mind is critical for navigating a new learning landscape. Feel free to send any suggestions on how SUSE can better support students and our academic communities to brendan.bulmer@suse.com.