Linux Has Won: Lessons from Hosting a Panel with Industry Leaders

Wednesday, 5 April, 2017

The following article has been contributed by Markus Feilner, Team Lead Documentation at SUSE. 

 

 

Some days ago at CeBIT, I once more had the opportunity to host a panel discussion with industry leaders. And not surprisingly, the topic was: Enterprise Open Source.

But something was different this year: there were no objections anymore. We had some 200 guests in the audience, all of them IT people, though not all of them open source people. Some had stayed on from the talk before (John Maddog Hall), some came early because Klaus Knopper was presenting after us, but no one raised a single argument against Linux or Open Source in the enterprise. Folks, we have arrived, we are mainstream. If there was ever any doubt, for me it has now been resolved.

Industry Leaders on the Panel

The topic was “Open Source in enterprise usage” (Open Source im praktischen Unternehmenseinsatz), and we had guests from the automobile manufacturing sector, the German Chamber of Commerce, a huge insurance company, and two Linux companies that collaborated on setting up that insurance company’s environment with 12,000 Asterisk users and Linux desktops.

I was happy to play a mixed role. I was the host, but I could also represent both SUSE – which invented Enterprise Linux and brought it to market in 2000 – and open source / Linux journalism. Ingo Grothues of LVM Insurance reported that they use Ubuntu on their desktops, and Asterisk for thousands of road warriors and office workers. On top of that, recent numbers show that Linux (well okay, of course including Android) now runs on more end-user devices than Microsoft Windows.

CeBIT-2017

From left to right on the photo you will find Christopher Krause, from the “Competence Center IT in German Handwerk” of the Chamber of Commerce, speaking for 1.5 million small and medium-sized companies. Next to him is Roland Stahl of NagelCarGroup, who could speak about the typical needs of the thousands of companies that deal with the car manufacturing sector. To my right there’s Ingo Grothues of LVM Insurance, Kai-Oliver Detken from DECOIT GmbH, and Alfred Schroeder of Gonicus. These three gentlemen set up the Linux/Asterisk migration for the insurance company.

10 Results

Following the thread we had prepared, the discussion was very constructive, and we agreed on the following ten bullet points:

1) Open Source Software (OSS) and Open Standards are everywhere. No modern enterprise can do without them, regardless of size.

2) For every enterprise business requirement there is an OSS solution. However, there may not always be a solution available that fits all needs completely. OSS has matured to the point where it offers exactly the same kinds of products and problems as commercial proprietary software.

3) The same good piece of advice applies to OSS as to proprietary software: “Choose big and mature products” like Apache, Samba, LibreOffice, Agorum DMS, or Asterisk – projects that have been around for a long time will not suddenly disappear, because too many users depend on them, especially if the projects have an enterprise approach. A company may discontinue a product, but with open source you may still be able to carry on, as so many do with, for example, the KDE3 kiosk mode.

4) In migrations, technical matters are mostly solvable. The real challenges (see Munich) are of a personal, social, political, or pedagogical nature.

5) Licensing questions and worries do not affect users at first. But since OSS users soon start to improve the software and tend to become developers or contributors, the licensing topic becomes more and more relevant over time.

6) Small and medium enterprises and small craft businesses (the typical German “Handwerksbetriebe”) are now run by a different generation. These young owners and leaders are ready to accept communities and open source, unlike their predecessors. They make their decisions by looking at their competition and tend to use whatever has proven successful. Wait – did somebody mention “The Cathedral and the Bazaar”? Yes, that model now seems to be the standard within German Handwerk IT.

7) Most companies (no matter their size) do not care about open or closed source; they just want something that works and is sustainable.

8) There is a growing number of companies offering Linux/OSS support everywhere, and their services are no more expensive than others. In fact, small and medium-sized consultancies nowadays have to offer Linux, LAMP, database, or other OSS support if they want to compete successfully.

9) German politics and state administration are far behind the current state of affairs, while other countries are much farther ahead. Of course this is not true for the IT departments: OSS is everywhere in German administration because the admins want it and deploy it, often cloaked and without official consent (like the hidden Apache/Nginx server in the backend, Firefox/Thunderbird on Windows desktops, or the business Android phones), and sometimes even all the way through procurement, which is more difficult for OSS in Germany than in other countries.

This is also due to the fact that Open Source and Open Standards have less marketing and lobbying power, especially compared to big American corporations whose lobbyists are said to be omnipresent in parliaments and politicians’ offices.

10) The lack of understanding of OSS and open standards among the German political class and upper management often makes the use of vendor-lock-in-free OSS more difficult than necessary. Working solutions are turned down because marketing and lobbying succeed in convincing leaders of the opposite of what their own employees and IT staff tell them. The LiMux project of the city of Munich is a great example: there are no technical problems, says the IT department, but the mayor still wants to migrate back into the proprietary world.

Word of Mouth is OSS Marketing

Word of mouth between OSS users is the number one source of information, and it works. Indeed, it works very well. As said before, SMEs tend to look around and use what has proven successful in their environment; the smaller they are, the more likely they are to act that way.

OSS marketing efforts, both community and enterprise, are generally less effective because the whole market is carried mostly by SME companies (Mittelstand) that have never invested in marketing to the degree big software corporations do.

OSS Lobbyism: An Unbalanced Situation

These two factors lead to an unbalanced situation in which grassroots growth makes OSS flourish despite policies that protect closed source software.

Considering all of that, the news that the Open Source Business Alliance (OSBA) is investing in a full-time OSS lobbyist in Berlin – to act as a counterweight on behalf of OSS and to reflect the needs of the Mittelstand and SMEs – may be just a beginning, but a much-needed one.

German Politics is Way Behind, Lobbying Still Needed

In a nutshell, there are far more Enterprise Linux users than the public knows, and OSS has far more impact on companies than the media report, while at the same time politics, lawmakers, and procurement policies are lagging behind.

 

A big thank you to my former colleagues at Linux-Magazine and Computec for inviting me to host this panel. 👍

Beyond Kubernetes Container Orchestration

Thursday, 23 March, 2017

If you’re going to successfully deploy containers in production, you need more than just container orchestration

Kubernetes is a valuable tool

Kubernetes is an open-source container orchestrator for deploying and
managing containerized applications. Building on 15 years of experience
running production workloads at Google, it provides the advantages
inherent to containers, while enabling DevOps teams to build
container-ready environments which are customized to their needs.
The Kubernetes architecture comprises loosely coupled components
combined with a rich set of APIs, making Kubernetes well-suited
for running highly distributed application architectures, including
microservices, monolithic web applications and batch applications. In
production, these applications typically span multiple containers across
multiple server hosts, which are networked together to form a cluster.
Kubernetes provides the orchestration and management capabilities
required to deploy containers for distributed application workloads. It
enables users to build multi-container application services and schedule
the containers across a cluster, as well as manage the health of the
containers. Because these operational tasks are automated, DevOps teams
can now do many of the same things that other application platforms
enable them to do, but with containers.
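
As a rough illustration of that workflow (a hedged sketch, assuming a reachable cluster and a configured kubectl client; the deployment name and image are placeholders, not tied to any particular product):

# Declare a Deployment that runs an example container image
$ kubectl create deployment web --image=nginx

# Scale it out; the scheduler spreads the pods across the cluster's nodes
$ kubectl scale deployment web --replicas=3

# Expose the pods behind a single in-cluster Service on port 80
$ kubectl expose deployment web --port=80

# Kubernetes maintains the desired replica count; check pod health and placement
$ kubectl get pods -o wide -l app=web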

But configuring and deploying Kubernetes can be hard

It’s commonly believed that Kubernetes is the key to successfully
operationalizing containers at scale. This may be true if you are
running a single Kubernetes cluster in the cloud or have reasonably
homogenous infrastructure. However, many organizations have a diverse
application portfolio and user requirements, and therefore have more
expansive and diverse needs. In these situations, setting up and
configuring Kubernetes, as well as automating infrastructure deployment,
gives rise to several challenges:

  1. Creating a Kubernetes environment that is customized to the DevOps
    teams’ needs
  2. Automating the deployment of multiple Kubernetes clusters
  3. Managing the health of Kubernetes clusters (e.g. detecting and
    recovering from etcd node problems)
  4. Automating the upgrade of Kubernetes clusters
  5. Deploying multiple clusters on premises and/or across disparate
    cloud providers
  6. Ensuring enterprise readiness, including access to 24×7 support
  7. Customizing then repeatedly deploying multiple combinations of
    infrastructure and other services (e.g. storage, networking, DNS,
    load balancer)
  8. Deploying and managing upgrades for Kubernetes add-ons such as
    Dashboard, Helm and Heapster

Rancher is designed to make Kubernetes easy

Containers make software development easier by making code portable
across development, test, and production environments. Once in
production, many organizations look to Kubernetes to manage and scale
their containerized applications and services. But setting up,
customizing and running Kubernetes, as well as combining the
orchestrator with a constantly changing set of technologies, can be
challenging with a steep learning curve. The Rancher container
management platform makes it easy for you to manage all aspects of
running containers. You no longer need to develop the technical skills
required to integrate and maintain a complex set of open source
technologies. Rancher is not a Docker orchestration tool—it is the
most complete container management platform. Rancher includes everything
you need to make Kubernetes work in production on any infrastructure,
including:

  • A certified and supported Kubernetes distribution with simplified
    configuration options
  • Infrastructure services including load balancers, cross-host
    networking, storage drivers, and security credentials management
  • Automated deployment and upgrade of Kubernetes clusters
  • Multi-cluster and multi-cloud support
  • Enterprise-class features such as role-based access control and 24×7
    support

We included a fully supported Kubernetes distro

The certified and supported Kubernetes distribution included with
Rancher makes it easy for you to take advantage of proven, stable
Kubernetes features. Kubernetes can be launched via the easy-to-use
Rancher interface in a matter of minutes. To ensure a consistent
experience across all public and private cloud environments, you can
then leverage Rancher to manage underlying containers, execute commands,
and fetch logs. You can also use it to stay up-to-date with the
latest stable Kubernetes release as well as adopt upstream bug fixes in
a timely manner. You should never again be stuck with old, outdated and
proprietary technologies. The Kubernetes Dashboard can be automatically
started via Rancher, and made available for each Kubernetes environment.
Helm is automatically made available for each Kubernetes environment as
well, and a convenient Helm client is included in the out-of-the-box
kubectl shell console.
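
As a small, hedged example of what such a Helm client is typically used for (the repository, chart, and release names are illustrative, and the syntax shown is the Helm 3 form; Helm 2 clients used `helm install --name ...` instead):

# Add a chart repository and refresh the local chart index
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo update

# Install a chart as a named release into the current Kubernetes environment
$ helm install my-wordpress bitnami/wordpress

# Inspect what the release deployed
$ helm status my-wordpress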

We make Kubernetes enterprise- and production-ready

Rancher makes it easy to adopt open source Kubernetes while complying
with corporate security and availability standards. It provides
enterprise readiness via a secure, multi-tenant environment, isolating
resources within clusters and ensuring separation of controls. A private
registry can be configured for use by Kubernetes and tightly coupled
to the underlying cluster (e.g. the Google Cloud Platform registry can be
used only in a GCP cluster). Features such as role-based access
control, integration with LDAP and Active Directory, detailed audit
logs, high availability, metering (via Heapster), and encrypted
networking are available out of the box. Enterprise-grade 24x7x365
support provides you with the confidence to deploy Kubernetes and
Rancher in production at any scale.

Multi-cluster, multi-cloud deployments? No problem

Quickly get started with Rancher and Kubernetes by following the
step-by-step instructions in the latest release of the Kubernetes
eBook.
Rancher makes it possible to run multi-node, multi-cloud clusters, and
even deploy stateful applications. With Rancher, Kubernetes clusters
can span multiple resource pools and clouds. All hosts that are added
using Docker machine drivers or manual agent registration will
automatically be added to the Kubernetes cluster. The simple-to-use
Rancher user interface provides complete visibility into all hosts, the
containers running in those hosts, and their overall status.

But you need more than just container orchestration…

Kubernetes is maturing into a stable platform. It has strong adoption
and ecosystem growth. However, it’s important not to lose sight of the
fact that the end goal for container adoption is to make it easier and more
efficient for developers to create applications and for operations to
manage them. Application deployment and management requires more than
just orchestration. For example, services such as load balancers and
DNS are required to run the applications.

Customizable infrastructure services

The Rancher container management platform makes it easy to define and
save different combinations of networking, storage and load balancer
drivers as environments. This enables users to repeatedly deploy
consistent implementations across any infrastructure, whether it is
public cloud, private cloud, a virtualized cluster, or bare-metal
servers. The services integrated with Rancher include:

  • Ingress controller with multiple load balancer implementations
    (HAProxy, Traefik, etc.)
  • Cross-host networking drivers for IPSEC and VXLAN
  • Storage drivers
  • Certificate and security credentials management
  • Private registry credential management
  • DNS service, which is a drop-in replacement for SkyDNS
  • Highly customizable load balancer

If you choose to deploy an ingress controller on native Kubernetes, each
provider will have its own code base and set of configuration values.
However, the Rancher load balancer offers a high level of customization to meet
user needs. The Rancher ingress controller provides the flexibility to
select your load balancer of choice—including HAProxy, Traefik, and
nginx—while the configuration interface remains the same. Rancher also
provides the ability to scale the load balancer, customize load balancer
source ports, and schedule the load balancer on a specific set of hosts.
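
For reference, a generic Kubernetes Ingress object looks like the sketch below; whichever controller fulfills it (HAProxy, Traefik, NGINX, or a platform's own implementation), the resource itself keeps the same shape. The host, service name, and port are illustrative, and the apiVersion shown is the one used by current Kubernetes releases rather than the releases contemporary with this article:

# A minimal Ingress resource, applied from stdin; the ingress controller
# watches objects like this and configures the load balancer accordingly
$ kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: shop.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
EOF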

A complete container management platform

You’ve probably figured this out for yourself by now but, to be clear,
Rancher is NOT a container orchestrator. It is a complete container
management platform that includes everything you need to manage
containers in production. You can quickly deploy and run multiple
clusters across multiple clouds with the click of a button using Rancher,
or select from one of the integrated and supported container
orchestrator distributions, including Kubernetes as well as Mesos, Docker
Swarm, and Windows. Pluggable infrastructure services provide the basis
for portability across infrastructure providers. Whether running
containers on a single on-premises cluster or multiple clusters running
on Amazon AWS and other service providers, Rancher is quickly becoming
the container management platform of choice for thousands of Kubernetes
users.

Get started with containers, Kubernetes, and Rancher today!

For step-by-step instructions on how to get started with Kubernetes
using the Rancher container management platform, please refer to the
Kubernetes eBook, which is available
here. Or, if you are heading to KubeCon 2017 in Berlin, stop by booth S17
and we can give you an in-person demonstration.

Louise is the Vice
President of Marketing at Rancher Labs where she is focused on defining
and executing impactful go-to-market strategy and marketing programs by
analyzing customer needs and market trends. Prior to joining Rancher,
Louise was Marketing Director for IBM’s Software Defined Infrastructure
portfolio of big data, cloud native and high performance computing
management solutions. Before the company was acquired by IBM in 2012,
Louise was Director of Marketing at Platform Computing. She has 15+
years of marketing and product management experience, including roles at
SGI and Sun Microsystems. Louise holds an MBA from Santa Clara
University’s Leavey School of Business and a Bachelor’s degree from
University of California, Davis. You can follow Louise on Twitter
@lwestoby.

Rancher Labs and NeuVector Partner to Deliver Management and Security for Containers

Tuesday, 21 March, 2017

DevOps can now efficiently and securely deploy containers for enterprise applications

As more enterprises move to a container-based application deployment
model, DevOps teams are discovering the need for management and
orchestration tools to automate container deployments. At the same time,
production deployments of containers for business critical applications
require specialized container-intelligent security tools.

To address this, Rancher Labs and NeuVector today announced that they have partnered to make container security as easy to deploy as application containers. You can now easily deploy the NeuVector container network security solution with the Rancher container management platform. The first and only container network security solution in the Rancher application catalog, the addition of NeuVector provides simple deployment of the NeuVector containers into an enterprise container environment. NeuVector secures containers where they have been most vulnerable: in production environments where they are constantly being deployed, updated, moved, and scaled across hosts and data centers. With constant behavioral learning automatically applied to security policies for containers, the NeuVector container network security solution delivers multi-layered protection for containers and their hosts. Protection includes violation and threat detection, vulnerability scanning, and privilege escalation detection for hosts and containers. With one click in the Rancher console, users can choose to deploy the NeuVector containers. Sample configuration files are provided, and minimal setup is required before deployment. Once the NeuVector containers are deployed, they instantly discover running containers and automatically build a whitelist-based policy to protect them. Like Rancher, NeuVector supports cross-host, data-center, and cloud deployments, relieving DevOps teams of error-prone manual configurations for mixed environments.

Deploy the NeuVector security containers with a click of a button. View the demo.

In addition to production use, NeuVector is also valuable for debugging application connections during testing, and can be used after violations are detected for forensic investigation. A convenient network packet capture tool assists with investigations during test, production, and incident management.

Henrik Rosendahl is Head of Business Development for NeuVector. He is a serial enterprise software entrepreneur and was the co-founder of CloudVolumes – named one of Five Strategic Acquisitions That Reshaped VMware by Forbes. He is a frequent speaker at VMworld, SNW, CloudExpo, and InterOp.


DevOps and Containers, On-Prem or in the Cloud

Tuesday, 14 March, 2017

The cloud vs.
on-premises debate is an old one. It goes back to the days when the
cloud was new and people were trying to decide whether to keep workloads
in on-premises datacenters or migrate to cloud hosts. But the Docker
revolution has introduced a new dimension to the debate. As more and
more organizations adopt containers, they are now asking themselves
whether the best place to host containers is on-premises or in the
cloud. As you might imagine, there’s no single answer that fits
everyone. In this post, we’ll consider the pros and cons of both cloud
and on-premises container deployment and consider which factors can make
one option or the other the right choice for your organization.

DevOps, Containers, and the Cloud

First, though, let’s take a quick look at the basic relationship
between DevOps, containers, and the cloud. In many ways, the combination
of DevOps and containers can be seen as one way—if not the native
way—of doing IT in the cloud. After all, containers maximize
scalability and flexibility, which are key goals of the DevOps
movement—not to mention primary reasons why many people migrate to the
cloud in the first place. Things like virtualization and continuous
delivery seem to be perfectly suited to the cloud (or to a cloud-like
environment), and it is very possible that, had DevOps not originated in
the Agile world, it would have developed quite naturally out of the
process of adapting IT practices to the cloud.

DevOps and On-Premises

Does that mean, however, that containerization, DevOps, and continuous
delivery are somehow unsuited or even alien to on-premises deployment?
Not really. On-premises deployment itself has changed; it now has many
of the characteristics of the cloud, including a high degree of
virtualization, and relative independence from hardware constraints
through abstraction. Today’s on-premises systems generally fit the
definition of “private cloud,” and they lend themselves well to the
kind of automated development and operations cycle that lies at the
heart of DevOps. In fact, many of the major players in the
DevOps/container world, including AWS and Docker, provide strong support
for on-premises deployment, and sophisticated container management tools
such as Rancher are designed to work seamlessly across the
public/private cloud boundary. It is no exaggeration to say that
containers are now as native to the on-premises world as they are to the
cloud.

Why On-premises?

Why would you want to deploy containers on-premises?

Local Resources

Perhaps the most obvious reason is the need to directly access and use hardware features, such as storage, or processor-specific operations. If, for example, you are using an array of graphics chips for matrix-intensive computation, you are likely to be tied to local hardware. Containers, like virtual machines, always require some degree of abstraction, but running containers on-premises reduces the number of layers of abstraction between the application and underlying metal to a minimum. You can go from the container to the underlying OS’s hardware access more or less directly—something which is not practical with VMs on bare metal, or with containers in the public cloud.

Local Monitoring

In a similar vein, you may also need containers to monitor, control, and manage local devices. This may be an important consideration in an industrial setting, or a research facility, for example. It is, of course, possible to perform monitoring and control functions with more traditional types of software. The combination of containerization and continuous delivery, however, allows you to quickly update and adapt software in response to changes in manufacturing processes or research procedures.

Local Control Over Security

Security may also be a major consideration when it comes to deploying containers on-premises. Since containers access resources from the underlying OS, they have potential security vulnerabilities; in order to make containers secure, it is necessary to take positive steps to add security features to container systems. Most container-deployment systems have built-in security features. On-premises deployment, however, may be a useful strategy for adding extra layers of security. In addition to the extra security that comes with controlling access to physical facilities, an on-premises container deployment may be able to make use of the built-in security features of the underlying hardware.

Legacy Infrastructure and Cloud Migration

What if you’re not in a position to abandon existing on-premises infrastructure? If a company has a considerable amount of money invested in hardware, or is simply not willing or able to migrate away from a large and complex set of interconnected legacy applications all at once, staying on-premises for the time being may be the most practical (or the most politically prudent) short-to-medium-term choice. By introducing containers (and DevOps practices) on-premises, you can lay out a relatively painless path for gradual migration to the cloud.

Test Locally, Deploy in the Cloud

You may also want to develop and test containerized applications locally, then deploy in the cloud. On-premises development allows you to closely monitor the interaction between your software and the deployment platform, and observe its operation under controlled conditions. This can make it easier to isolate unanticipated post-deployment problems by comparing the application’s behavior in the cloud with its behavior in a known, controlled environment. It also allows you to deploy and test container-based software in an environment where you can be confident that information about new features or capabilities will not be leaked to your competitors.

Public/Private Hybrid

Here’s another point to consider when you’re comparing cloud and
on-premises container deployment: public and private cloud deployment
are not fundamentally incompatible, and in many ways, there is really no
sharp line between them. This is, of course, true for traditional,
monolithic applications (which can, for example, also reside on private
servers while being accessible to remote users via a cloud-based
interface), but with containers, the public/private boundary can be made
even more fluid and indistinct when it is appropriate to do so. You can,
for example, deploy an application largely by means of containers in the
public cloud, with some functions running on on-premises containers.
This gives you granular control over things such as security or
local-device access, while at the same time allowing you to take
advantage of the flexibility, broad reach, and cost advantages of
public-cloud deployment.

The Right Mix for Your Organization

Which type of deployment is better for your company? In general,
startups and small-to-medium-size companies without a strong need to tie
in closely to hardware find it easy to move into (or start in) the
cloud. Larger (i.e. enterprise-scale) companies and those with a need to
manage and control local hardware resources are more likely to prefer
on-premises infrastructure. In the case of enterprises, on-premises
container deployment may serve as a bridge to full public-cloud
deployment, or hybrid private/public deployment. The bottom line,
however, is that the answer to the public cloud vs. on-premises question
really depends on the specific needs of your business. No two
organizations are alike, and no two software deployments are alike, but
whatever your software/IT goals are, and however you plan to achieve
them, between on-premises and public-cloud deployment, there’s more
than enough flexibility to make that plan work.


Stirred Up About Storage: Why 82% of organizations want to change their approach and move to software-defined storage

Tuesday, 7 March, 2017

According to independent research conducted for SUSE by Loudhouse, a staggering 82% of organizations across the entire world, from Indiana to India and Europe to Eurasia have their approach to storage under the microscope – driven to make changes by the fear that their business’ growth will be choked by rising data volumes.

So what’s got – well, pretty much everyone – so stirred up about storage? Let’s start with the blindingly obvious: data volumes are increasing – a lot. Over two thirds of storage decision makers expect storage to grow by 30% in the next 12 months, in a remarkably similar pattern across the entire world. North Americans top the chart, followed by Indians, and in the EU – for the time being at least – the UK tops the demand scale.

The other top concerns? More than half of storage decision makers want to reduce complexity. All those different business applications, with their differing associated storage, built up over time in a hotchpotch of separate business units, departments and operating companies, silo piled on silo. Just short of half of decision makers are getting headaches from cost (I don’t think I’d like to explain a 30% increase even to the most sympathetic boss, year after year). And coming in third, we have issues with facilitating better working practices in the enterprise, with 45% stating that helping the organization get better at collaboration, innovation and flexibility is a key priority.

It stands to reason – with all that increasing cost, and with collaboration and innovation as top priorities – that this isn’t a problem that only concerns the IT team. Far from it. The business wants to be more agile in the digital age. And it’s turning up the heat on the IT team to deliver systems that support faster decision making and innovation. Small wonder, then, that more than two thirds of storage decision makers report increased pressure from the business over the last couple of years.

The business is not going to leave IT alone when it comes to storage; it’s going to push, and push, and push, demanding performance. In a world where digital innovation is the competitive edge, and competitive edge comes from the exploitation of data, how and where that data is stored and how quickly it can be accessed and analyzed defines business success.

Fear of slowing down digital transformation affects over 90% of storage decision makers everywhere except China and the Nordics. Perhaps in China – where the top 50 enterprises are all owned by the state – money isn’t so much of a concern!

Software-defined storage, with its capacity to reduce costs and complexity, deal with rising volumes, and provide a long term strategy for the future, is unsurprisingly seen by the vast majority of decision makers as part of the future.

Over three quarters of storage decision makers worldwide see the business case as ‘compelling’.

So, is it any wonder that nearly two thirds of you are going to adopt software-defined storage in the next 12 months? I really don’t think it is. But isn’t it time you turned desire into action and kick-started your SDS strategy? The problems with volumes, the challenges of digital transformation and the scrutiny of the wider business are not going to go away – start today: we’re here to help.

The four things that enterprises hate most about storage

Wednesday, 1 March, 2017

When you think about spending money on your home, you think about things that might make life easier and more enjoyable: the extension on the kitchen that means the entire family can get round the table at Christmas – even the in-laws; the extra bedroom, and the privacy-affording en-suite bathroom. This kind of spending is exciting because it makes life better: you and your spouse sit together of an evening and actually enjoy planning the works. There’s another sort of work in the home though, equally complicated, and necessary, yet somehow simply not satisfying.

This is the horrible truth that your roof has had its day and will need a complete – and expensive – refit. It is the central heating boiler that has keeled over and died, leaving you with no choice but to cough up for a replacement. Unsurprisingly, we don’t like this kind of spending; it is ‘dead’ money that does nothing to improve our lives and merely sustains us in our present condition. You might sit around and plan the works with your spouse... but this time you won’t be doing it with a glass of wine in hand, and the excited look has gone from your faces.

When it comes to improving the enterprise, storage spending has the status of roof works – no matter how elegant the engineering, they are seldom a source of happiness. They are a ‘sink’ cost, something you must do to keep the place running. So, perhaps, it’s really not surprising that hate #1 when it comes to storage is cost. In an independent survey conducted by Loudhouse for SUSE, 80% of over 1200 storage decision makers world-wide cited the cost of storage as their top frustration. We don’t like paying for it, but we pay through our noses for it: storage accounts for a whopping 7% of IT spending.

Coming a close second at 74%, hate #2 is performance. It is bad enough that the enterprising householder has to spend all that cash on things that don’t really improve the bottom line; when you lay out the money but still don’t get the performance, it is like replacing the roof only to find that it still leaks.

Hate #3 is complexity. So, you’re planning works that you didn’t want to do, that add nothing to your happiness, and then you find out it is going to be hard work. Really hard work. You thought the roof was one single piece of work; it turns out that it isn’t – the previous owners of your house had a string of different builders in, who used different materials that – sort of – work together. There are all these gutters and pipes funnelling water this way and that instead of a single coherent structure. Fixing it is going to require a lot of thought that takes time away from other more interesting projects.

Coming in as a tie for Hate #4 are ‘inability to support innovation’ and ‘lack of agility’. You see, at some point you are going to want to do that extension, and actually do works which improve your quality of life – AKA your enterprise’s bottom line. As you set your sights on this goal, though, you don’t want to find the state of the roof is holding you back. All too often it does.

OK, so let’s review: storage is too expensive, it doesn’t perform as well as we want and need it to, it is ridiculously complicated, and it holds us back from doing valuable work. That’s quite a few reasons to hate storage, and several more to like software defined open source storage: cut costs, improve performance, reduce complexity, and free up your time to focus on things that can actually improve the business.

Further reading on the research is available at suse.com/stateofstorage.

Enjoy!!


There and Back Again – Meeting the Beijing Team

Tuesday, 28 February, 2017

This guest article has been contributed by Tanja Roth, Technical Writer at the SUSE Documentation Team.

 

 

 

At SUSE, we have a long tradition of bringing together people from different locations and cultural backgrounds – open source simply is in our genes!

First Cross-cultural Challenge

When I started to work on the documentation for SUSE Linux Enterprise High Availability Extension in 2008, I became part of a multi-national and virtual team that works across different time zones. The initial contact to the Chinese colleagues in the team was established via the project’s mailing list. As trivial as this might seem, it may hold the first (cross-cultural) challenge as you want to address your colleagues correctly. But how to tell which part of a Chinese name represents the first name and which part belongs to the
family name? Traditionally, the sequence of names in China is family name, followed by the generation name (if any) and the given name(s). The mystery was soon solved: in our company address book, the sequence of names follows the English tradition: the given name first, followed by the family name.

Only two or three years later, I met some of my colleagues from China (and from around the world) for the first time during one of the famous SUSE summer events in the Czech Republic. It was (and always is) a pleasure to finally put a face to a name – especially if you have been working together for some time already, but only know each other from mail,
instant messaging or phone conferences.

The R&D Exchange Program

To foster even more cross-team and cross-site networking between the various SUSE locations, the SUSE Research & Development (R&D) department offers an exchange program: each quarter, several R&D employees can be nominated for it. After a defined process, they can visit a SUSE location of their choice for 2-3 weeks. Originally, the exchange program was an opportunity for the Chinese colleagues to visit and meet their colleagues from other SUSE locations in person. Fortunately, the program has since been extended. To my surprise, I was among the lucky ones last year :). I was looking forward to visiting the Beijing office and to working from there for two weeks!

The Preparations

It took some time to get the paperwork done (apply for the visa, book flight and accommodation etc.). After discussing with the Chinese colleagues which topics we wanted to work on during my time in Beijing, I was ready to leave in September 2016 – equipped with helpful tips by other colleagues from Europe who had already been to Beijing, a guidebook on what to see and do in Beijing, and a handful of Chinese words and phrases that I tried to learn in the weeks before I left. 😉

The Beijing Office

Twelve hours after leaving Nuremberg in the afternoon, I arrived at Beijing Capital International Airport around 11:20 am local time the next day. From there, it was only a short trip to the Central Business District (CBD) of Beijing. The office is located next to the East 3rd Ring Road (one of the seven ring roads in this city, which holds a total population of more than 21,700,000 inhabitants). This area belongs to the Chaoyang District, where many foreign embassies can also be found. The surroundings were very impressive – as was the view from the office on the 36th floor, next to the CCTV Headquarters (China Central Radio and Television Tower).

 

 

As I already knew some colleagues from previous meetings in Europe, I recognized a few familiar faces among the many new faces I was introduced to. The office in Beijing currently consists of 11 teams, among them Research & Development, Sales, and Customer Care. Some of the Chinese colleagues use English first names (in addition to their Chinese given names), which makes it easier for foreign visitors to remember and pronounce the names. Chinese is a tonal language, which means that many words are differentiated solely by tone (pitch). Thus, trying to pronounce Chinese words and names can be a challenge, as the difficulty lies in using the right tone for each syllable. 😉

As a member of the SUSE documentation team, my main task during the stay in China was to share knowledge about writing skills in general and technical writing in particular. In the first week, I gave a general introduction to the SUSE documentation team during a Lunch and Learn session. I presented the team members and tasks, plus the processes and tools we use in our daily work. During the second week, I gave writing training sessions for individual teams, each including a hands-on session with an example text. The task was to analyze the text for typical issues, and to restructure and rewrite it according to the principles covered in the first part of the training. All teams came up with good proposals on how to improve the text.

On the last day of my stay in the Beijing office, I joined the openSUSE Leap 42.2 Beta pizza party, after installing and testing the Leap Beta version and after reporting some bugs.

Major Learnings Along the Way

Talking about food: food is an important part of Chinese culture. In China, you traditionally have three warm meals a day: breakfast, lunch, and dinner. So during the two weeks in Beijing, I had plenty of opportunities to try the local cuisine, which differs substantially from the “Chinese food” you get in Europe. The habit of sharing dishes and the huge variety of flavors that are selected and combined for each meal make for a wonderful experience! One of the highlights was a traditional Hot Pot dinner with hand-crafted rice noodles – similar to what you can observe in this video.

Another thing that never ceased to amaze me was the hospitality and friendliness of the people I encountered (even outside of the ‘SUSE family’). The colleagues in the office were really kind and helped me with a lot of things, for example, with buying a Beijing Transportation Smart Card, which you can use on the subway and city buses. The subway in Beijing is the fastest means of transportation and a good way to avoid frequent traffic jams. It is also an excellent example of how to organize public transportation in a way which makes it easy to use for foreign visitors who might not be able to read or understand the local language!

On the weekends, I went to see some major historic sites in and around Beijing, like the Forbidden City, the Summer Palace, the Olympic Park and the National Museum of China. I also enjoyed going to parks and public places where you can listen to music performances and see (or join) people doing Taijiquan (tai chi) or ballroom dancing. As some parts of the Great Wall of China can be reached by a 60-70 minute drive from Beijing, the colleagues were so kind as to also organize a weekend trip to this famous building. I could not have imagined how steep some parts of the wall are. The term “climbing the Great Wall” is therefore more than justified – breathtaking and unforgettable!

Back Again

The two weeks went by much too fast, but I still treasure my stay in China. It was a fascinating experience in many respects! Even after coming back, I have kept some habits that I adopted during my time there, like drinking hot water throughout the day. (Hot water supplies are available in many public places in China, like airports or train stations, for example.)

From my point of view, the exchange program is a great opportunity to establish personal relationships and trust, which helps to create mutual understanding and better collaboration. I still feel blessed that I had the chance to be with the colleagues in Beijing for two weeks, and I can only recommend getting to know them in person – they are really amazing!

Containers and Application Modernization: Extend, Refactor, or Rebuild?

Monday, 27 February, 2017

Technology is a
constantly changing field, and as a result, any application can feel out
of date in a matter of months. With this constant feeling of impending
obsolescence, how can we work to maintain and modernize legacy
applications? While rebuilding a legacy application from the ground up
is an engineer’s dream, business goals and product timelines often make
this impractical. It’s difficult to justify spending six months
rewriting an application when the current one is working just fine, code
debt be damned. Unfortunately, we all know that product development is
never that black and white. Compromises must be made on both sides of
the table, meaning that while a complete rewrite might not be possible,
the long-term benefits of application modernization efforts must still
be valued. While many organizations don’t have the luxury of building
brand new, cloud-native applications, there are still techniques that
can be used to modernize existing applications using container
technology like Docker. These modernization techniques ultimately fall
into three different categories: extend, refactor, and rebuild. But
before we get into them, let’s first touch on some Dockerfile basics.

Dockerfile Basics

For the uninitiated, Docker is a containerization platform that “wraps
a piece of software in a complete filesystem that contains everything
needed to run: code, runtime, system tools, system libraries” and
basically everything that can be installed on a server, without the
overhead of a virtualization platform. While the pros and cons of
containers are out of the scope of this article, one of the biggest
benefits of Docker is the ability to quickly and easily spin up
lightweight, repeatable server environments with only a few lines of
code. This configuration is accomplished through a file called the
Dockerfile, which is essentially a blueprint that Docker uses to build
container images. For reference, here’s a Dockerfile that spins up a
simple Python-based web server (special thanks to Baohua
Yang
for the awesome example):

# Use the python:2.7 base image
FROM python:2.7

# Expose port 80 internally to Docker process
EXPOSE 80

# Set /code to the working directory for the following commands
WORKDIR /code

# Copy all files in current directory to the /code directory
ADD . /code

# Create the index.html file in the /code directory
RUN touch index.html

# Start the python web server
CMD python index.py

This is a simplistic example, but it does a good job of illustrating
some Dockerfile basics, namely extending pre-existing images, exposing
ports, and running commands and services. Even these few instructions
can be used to spin up extremely powerful microservices, as long as the
base source code is architected properly.
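
To try out a Dockerfile like the one above, you build an image from it and then start a container from that image. The tag and port mapping below are arbitrary example values, not part of the original example:

# Build an image from the Dockerfile in the current directory and tag it
$ docker build -t python-web .

# Run the image in the background, mapping container port 80 to host port 8080
$ docker run -d -p 8080:80 python-web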

Application Modernization

At a high level, containerizing an existing application is a relatively
straightforward process, but unfortunately not every application is
built with containerization in mind. Docker has an ephemeral filesystem,
which means that storage within a container is not persistent. Any file
that is saved within a Docker container will be lost unless specific
steps are taken to avoid this. Additionally, parallelization is another
big concern with containerized applications. Because one of the big
benefits of Docker is the ability to quickly adapt to increasing traffic
requirements, these applications need to be able to run in parallel with
multiple instances. As mentioned above, in order to prepare a legacy
application for containerization, there are a few options available:
extend, refactor, or rebuild. But which solution is best depends
entirely on the needs and resources of the organization.

Extend

Extending the existing functionality of a non-containerized application
often requires the least amount of commitment and effort on this list,
but if it isn’t done right, the changes that are made can lead to
significantly more technical debt. The most effective way to extend an
existing application with container technology is through microservices
and APIs. While the legacy application itself isn’t being
containerized, isolating new features into Docker-based microservices
allows for the modernization of a product, and at the same time tees the
legacy code up for easier refactoring or rebuilding in the future.

At a high level, extension is a great choice for applications that are
likely to be rebuilt or sunset at some point in the not-too-distant
future—but the older the codebase, the more it might be necessary to
completely refactor certain parts of it to accommodate a Docker
platform.

Refactor

Sometimes, extending an application through microservices or APIs isn’t
practical or possible. Whether there is no new functionality to be
added, or the effort to add new features through extension is too high
to justify, refactoring parts of a legacy codebase might be necessary.
This can be easily accomplished by isolating individual pieces of
existing functionality from the current application into containerized
microservices. For example, refactoring an entire social network into a
Docker-ready application might be impractical, but pulling out the piece
of functionality that runs the user search engine is a great way to
isolate individual components as separate Docker containers.

Another great place to refactor a legacy application is the storage
mechanism used for writing things like logs, user files, etc. One of the
biggest roadblocks to running an application within Docker is the
ephemeral filesystem. Dealing with this can be handled in one of a few
ways, the most popular of which is through the use of a cloud-based
storage method like Amazon S3 or Google Cloud Storage. By refactoring
the file storage method to utilize one of these platforms, an
application can be easily run in a Docker container without losing any
data.
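
Alongside the cloud-storage approach described above, Docker's own named volumes are another of the "few ways" to keep data out of the ephemeral container filesystem. A minimal sketch, with the volume name, mount path, and image name chosen purely for illustration:

# Create a named volume that Docker manages outside any container's filesystem
$ docker volume create app-data

# Mount it into a container; files written under /var/lib/app survive
# container restarts and can be reattached to replacement containers
$ docker run -d -v app-data:/var/lib/app my-app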

Rebuild

When a legacy application is unable to support multiple running
instances, it might be impossible to add Docker support without
rebuilding it from the ground up. Legacy applications can have a long
shelf life, but there comes a point when poor architecture and design
decisions made in the early stages of an application can prevent
efficient refactoring of an application in the future. Being aware of
impending development brick walls is crucial to identifying risks to
productivity.

Ultimately, there is no hard rule when it comes to modernizing legacy
applications with container technology. The best decision is often the
one that is dictated by both the needs of the product and the needs of
the business, but understanding how this decision affects the
organization in the long run is crucial to ensuring a stable application
without losing productivity.

To learn more about using containers, join our February Online
Meetup: More Tips and Tricks for Running Containers Like a Pro,
happening Tuesday, Feb 28.

Zachary Flower (@zachflower) is a
freelance web developer, writer, and polymath. He’s built projects for
the NSA and created features for companies like Name.com and Buffer.


Playing Catch-up with Docker and Containers

Friday, 17 February, 2017

This article is essentially a guide to getting started with Docker for
people who, like me, have a strong IT background but feel a little
behind the curve when it comes to containers. We live in an age where
new and wondrous technologies are being introduced into the market
regularly. If you’re an IT professional, part of your job is to identify
which technologies are going to make it into the toolbox for the average
developer, and which will be relegated to the annals of history. Docker
is one of those technologies that sounded interesting when it first
debuted in 2013, but was easy to ignore because at the time it was not
clear whether Docker would ever graduate beyond something that
developers liked to play with in their spare time. Personally, I didn’t
pay close attention to Docker containers in Docker’s early days. They
got lost amid all the other noise in the IT world. That’s why, in 2016,
as Docker continued to rise in prominence, I realized that I’d missed
the container boat. Docker was becoming a must-know technology, and I
was behind the curve. If you’re reading this, you may well be in a
similar position. But there’s good news: container technology, and Docker
specifically, is not hard to pick up and learn if you already have a
background in IT. (You can also register for free online training on
deploying containers with Rancher.)

Sure, containers can be a little scary when you’re first getting
started, just like any new technology. But rest assured that it’s not
too late to get on the container train, even if you weren’t writing
Docker files back in 2013. I’ll explain what Docker is and how container
technology works, then go through the first steps in setting Docker up
on your workstation and getting a container running that you can
interact with. Finally, I’ll direct you to some of the resources I used
to familiarize myself with Docker, so you can continue your journey.

What is Docker and How Does it Work?

Docker is a technology that allows you to create and deploy an application
together with a filesystem and everything needed to run it. The Docker
container, as it is called, can be installed on any machine, as long as
the Docker Engine has been installed, and can be expected to always run
in the same manner. A physical machine with the Docker Engine installed
can host multiple Docker containers, each sharing the resources of the
host machine. You may already be familiar with machine virtualization,
either as a result of running local virtual machines using VMware on
your workstations, or interacting with cloud services like Amazon Web
Services or Microsoft Azure. Container technology is similar in some
ways, and different in others. Let’s start by comparing the two by
looking at the diagram below which shows the basic structure of a
machine hosting Docker containers, and another hosting virtual machines.
In both cases the host machine has its infrastructure and host operating
system. Virtual machines then require a hypervisor which is software or
firmware that allows virtual machines to be hosted. The virtual machines
themselves each contain their own operating system and the application,
together with its required binaries, libraries and any other
dependencies. Similarly, the machine hosting the Docker containers has
its own infrastructure and operating system. Instead of the hypervisor,
it has the Docker Engine installed, and this is what interacts with the
containers. Each container holds its application and the required
binaries, libraries and other dependencies. It is important to note that
they don’t require their own guest operating system. This allows the
containers to be significantly smaller in size, and able to be
distributed, deployed and started in a fraction of the time taken by
virtual machines.

Other key differences are that virtual machines have specifically
allocated access to the system resources, while Docker containers share
host system resources through the Docker engine.

Installing Docker and Discovering Docker Hub

I can’t think of a better way to learn about new technology than to
install it, and get your hands dirty. Let’s install the Docker Engine on
your workstation and a simple Docker container. Before we can deploy a
container, we’ll need the Docker Engine. This is the platform that will
host the container and allow it to interact with the underlying
operating system. You’ll want to pick the appropriate download from the
Docker products page, and
install it on your workstation. Downloads are available for OS X,
Windows, Linux, and a host of other operating systems. Once we have the
Docker platform installed, we’re now ready to get a container running.
Before we do that though, let’s familiarize ourselves with Docker
Hub
. Docker Hub is a central repository for
Docker Container images. Let’s pretend that you’re working on a Windows
machine, and you’d like to deploy an app on SUSE Linux. If you go to
Docker Hub, and search for OpenSuse, you’ll be shown a list of
repositories. At the time of writing there were 212 repositories listed.
You’ll want to look for the “official” repository. The official
repositories are maintained by a team of engineers sponsored by Docker.
Official repositories have clear documentation and promote best
practices. Now search for BusyBox.
Busybox is a tiny Unix distribution, which provides all of the
functionality we’ll need for this example. If you go to the official
repository, you’ll be able to read some good documentation on the image.
Let’s get a BusyBox container running on your workstation.
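
If you prefer the command line to the web interface, the same lookup can be done with the Docker client (the output changes over time; official images are flagged in an OFFICIAL column):

# Search Docker Hub for BusyBox images
$ docker search busybox

# Pull the official image ahead of time (optional; docker run pulls it on demand)
$ docker pull busybox:latest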

Getting Your First Container Running

Assuming you’ve installed the Docker Engine, open a new command prompt
on your workstation. If you’re on a Windows machine, I’d recommend using
the Docker Quick Start link which was included as part of your
installation. This will launch an interactive shell that will make it
easier to work with Docker. You don’t need this on OS X or
Linux-based systems. Enter the following command:

$ docker run -it --rm busybox

This will search the local machine for the latest BusyBox image, and
then download it from DockerHub if it isn’t found. The process should
take only a couple of seconds, and you should have something similar to
the text shown below on your screen:

$ docker run -it --rm busybox
Unable to find image `busybox:latest` locally
latest: Pulling from library/busybox
4b0bbc1c4050b: Pull complete
Digest: sha256:817a12c32a39bbe394944ba49de563e08f1d3c5266eb89723256bc4448680e
Status: Downloaded newer image for busybox:latest
/ #

We started a new Docker container, using the BusyBox image. We used the
-it parameters to specify that we want an interactive, pseudo-TTY
session, and the --rm flag indicates that we want to delete the
container once we exit it. If you execute a command like ‘ls’ you’ll see
that you have access to a new Linux filesystem. Play around a little,
and when you’re done, enter `exit` to exit the container, and remove
it from the system. Congratulations! You’ve now created, interacted
with, and shut down your own Docker container.

Creating Your Own Docker Image

Being able to start up and close down a container is fun, but it doesn’t
have much practical use. Let’s start a new container, install something
on it, and then save it as an image for someone else to use. We’ll
start with a Debian container, install Git on it, and then save it for
later use. This time, we’ll start the container without the --rm flag,
and we’ll specify a version to use as well. Type the following into your
command prompt:

$ docker run -it debian:jessie

You should now have a Debian container running—specifically the jessie
tag/release from Docker Hub. Type the `git` command when you have the
container running. You should observe something similar to the
following:

root@4a4882a7ed59:/# git
bash: git: command not found

So it appears this container doesn’t have Git installed. Let’s rectify
that situation by installing Git:

root@4a4882a7ed59:# apt-get update && apt-get install -y git

This may take a little longer to run, but it will update the package
lists, and then install Git. When it finishes up, type `git` again.
Voila! At this point, we have a container started, and we’ve installed
Git. We started the container without the --rm parameter, so when we
exit it, it won’t destroy the container. Let’s exit now. Type `exit`.
Now we want to get the ID of the container we just ran. To find this, we
type the following command:

$ docker ps -a

You should now see a list of recent containers. My results looked
similar to what’s below:

CONTAINER ID       IMAGE            COMMAND       CREATED        STATUS                          PORTS       NAMES
4a4882a7ed59       debian:jessie    "/bin/bash"   9 minutes ago  Exited (1) About a minute ago               hungry_fermet

It can be a little hard to read, especially if the results get wrapped
in your command window. What we’re looking for is the container ID,
which in my case was 4a4882a7ed59. Yours will be different, but similar
in format. Run the following command, replacing my container ID with
yours. test:example are arbitrary names as well—test will be the
name of your saved image, and example will be the version or tag of
that image.

$ docker commit 4a4882a7ed59 test:example

You should see a sha256 response once the container is saved. Now, run
the following to list all the images available on your local machine:

$ docker images

Docker will list the images on your machine. You should be able to find
a repository called test with a tag of example. Let’s see if it worked.
Start up your container using the following command, assuming you saved
your image with the same name and tag as I did.

$ docker run -it test:example

Once you have it running, try and execute the git command. It should
return with a list of possible options for Git. You did it! You created
a custom image of Debian with Git installed. You’re practically a Docker
Master at this point.

Following the Container Ecosystem

Using containers effectively also requires a familiarity with the trends
that are defining the container ecosystem. In 2013, when Docker debuted,
the ecosystem consisted of, well, Docker. But it has changed in big ways
since then. Orchestrators, which automate the provisioning of
infrastructure for containers, have evolved and become an essential part
of large-scale container deployment. Storage options have become more
sophisticated, simplifying the task of moving data between containers
and external, persistent storage systems. Monitoring solutions for
containers have been extended from basic tools like the Docker stats
command to include commercial monitoring and APM tools designed for
containers. And Docker now even runs on Windows as well as Linux (albeit
with some important caveats, like limited networking support at this
time). Discussing all of the container ecosystem trends in detail is
beyond the scope of this article. But in order to make the most of
containers, you should follow the news in the container ecosystem to
gain a sense of what is coming next as containers and the solutions that
support them become more and more sophisticated.
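
For example, the basic stats command mentioned above gives a quick, point-in-time view of what your running containers are consuming:

# One-shot snapshot of CPU, memory, network and block I/O for each running container
$ docker stats --no-stream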

Continuing to Learn About Containers

Obviously this just scratches the surface of what containers offer, but
this should give you a good start, and afford you enough of a base of
understanding to create, modify and deploy your own containers locally.
If you would like to know more about Docker, the Web is full of useful
tutorials and additional information.

Mike Mackrory is a Global citizen who has settled down in the Pacific
Northwest – for now. By day he works as a Senior Engineer on a Quality
Engineering team and by night he writes, consults on several web based
projects and runs a marginally successful eBay sticker business. When
he’s not tapping on the keys, he can be found hiking, fishing and
exploring both the urban and the rural landscape with his kids.
