Beyond Kubernetes Container Orchestration

Thursday, March 23, 2017

If you’re going to successfully deploy containers in production, you need more than just container orchestration

Kubernetes is a valuable tool

Kubernetes is an open-source container orchestrator for deploying and
managing containerized applications. Building on 15 years of experience
running production workloads at Google, it provides the advantages
inherent to containers, while enabling DevOps teams to build
container-ready environments which are customized to their needs.
The Kubernetes architecture comprises loosely coupled components
combined with a rich set of APIs, making Kubernetes well-suited
for running highly distributed application architectures, including
microservices, monolithic web applications and batch applications. In
production, these applications typically span multiple containers across
multiple server hosts, which are networked together to form a cluster.
Kubernetes provides the orchestration and management capabilities
required to deploy containers for distributed application workloads. It
enables users to build multi-container application services and schedule
the containers across a cluster, as well as manage the health of the
containers. Because these operational tasks are automated, DevOps teams can now do many of the same things that other application platforms enable them to do, but using containers.
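As a rough illustration of what that orchestration looks like in practice (this is plain Kubernetes usage rather than anything vendor-specific, and the deployment name, image and replica count below are arbitrary examples), scheduling a small multi-replica service and checking its health can be as simple as:

$ kubectl run my-web --image=nginx --replicas=3 --port=80
$ kubectl expose deployment my-web --port=80 --type=LoadBalancer
$ kubectl get pods

Kubernetes places the three containers on suitable cluster nodes and restarts them if they fail.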

But configuring and deploying Kubernetes can be hard

It’s commonly believed that Kubernetes is the key to successfully
operationalizing containers at scale. This may be true if you are
running a single Kubernetes cluster in the cloud or have reasonably
homogenous infrastructure. However, many organizations have a diverse
application portfolio and user requirements, and therefore have more
expansive and diverse needs. In these situations, setting up and
configuring Kubernetes, as well as automating infrastructure deployment,
gives rise to several challenges:

  1. Creating a Kubernetes environment that is customized to the DevOps
    teams’ needs
  2. Automating the deployment of multiple Kubernetes clusters
  3. Managing the health of Kubernetes clusters (e.g. detecting and
    recovering from etcd node problems)
  4. Automating the upgrade of Kubernetes clusters
  5. Deploying multiple clusters on premises and/or across disparate
    cloud providers
  6. Ensuring enterprise readiness, including access to 24×7 support
  7. Customizing then repeatedly deploying multiple combinations of
    infrastructure and other services (e.g. storage, networking, DNS,
    load balancer)
  8. Deploying and managing upgrades for Kubernetes add-ons such as
    Dashboard, Helm and Heapster

Rancher is designed to make Kubernetes easy

Containers make software development easier by making code portable
across development, test, and production environments. Once in
production, many organizations look to Kubernetes to manage and scale
their containerized applications and services. But setting up,
customizing and running Kubernetes, as well as combining the
orchestrator with a constantly changing set of technologies, can be
challenging with a steep learning curve. The Rancher container
management platform makes it easy for you to manage all aspects of
running containers. You no longer need to develop the technical skills
required to integrate and maintain a complex set of open source
technologies. Rancher is not a Docker orchestration tool—it is the
most complete container management platform. Rancher includes everything
you need to make Kubernetes work in production on any infrastructure,
including:

  • A certified and supported Kubernetes distribution with simplified
    configuration options
  • Infrastructure services including load balancers, cross-host
    networking, storage drivers, and security credentials management
  • Automated deployment and upgrade of Kubernetes clusters
  • Multi-cluster and multi-cloud support
  • Enterprise-class features such as role-based access control and 24×7
    support

We included a fully supported Kubernetes distro

The certified and supported Kubernetes distribution included with
Rancher makes it easy for you to take advantage of proven, stable
Kubernetes features. Kubernetes can be launched via the easy-to-use
Rancher interface in a matter of minutes. To ensure a consistent
experience across all public and private cloud environments, you can
then leverage Rancher to manage underlying containers, execute commands,
and fetch logs. You can also use it to stay up-to-date with the
latest stable Kubernetes release as well as adopt upstream bug fixes in
a timely manner. You should never again be stuck with old, outdated and
proprietary technologies. The Kubernetes Dashboard can be automatically
started via Rancher, and made available for each Kubernetes environment.
Helm is automatically made available for each Kubernetes environment as
well, and a convenient Helm client is included in the out-of-the-box
kubectl shell console.
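For example, from that shell you might verify the cluster and install an application chart with commands along these lines (the chart shown is only an illustration from the public stable repository of the time):

$ kubectl get nodes
$ helm install stable/wordpress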

We make Kubernetes enterprise- and production-ready

Rancher makes it easy to adopt open source Kubernetes while complying
with corporate security and availability standards. It provides
enterprise readiness via a secure, multi-tenant environment, isolating
resources within clusters and ensuring separation of controls. A private
registry can be configured for use by Kubernetes, tightly coupled to the underlying cluster (e.g. the Google Cloud Platform registry can be used only in a GCP cluster). Features such as role-based access
control, integration with LDAP and Active Directory, detailed audit
logs, high-availability, metering (via Heapster), and encrypted
networking are available out of the box. Enterprise-grade 24x7x365
support provides you with the confidence to deploy Kubernetes and
Rancher in production at any scale.

Multi-cluster, multi-cloud deployments? No problem

Quickly get started with Rancher and Kubernetes by following the step-by-step instructions in the latest release of the Kubernetes eBook.

Rancher makes it possible to run multi-node, multi-cloud clusters, and
even deploy stateful applications. With Rancher, Kubernetes clusters
can span multiple resource pools and clouds. All hosts that are added
using Docker machine drivers or manual agent registration will
automatically be added to the Kubernetes cluster. The simple-to-use
Rancher user interface provides complete visibility into all hosts, the
containers running in those hosts, and their overall status.

But you need more than just container orchestration…

Kubernetes is maturing into a stable platform. It has strong adoption
and ecosystem growth. However, it’s important not to lose sight that
the end goal for container adoption is to make it easier and more
efficient for developers to create applications and for operations to
manage them. Application deployment and management requires more than
just orchestration. For example, services such as load balancers and
DNS are required to run the applications.

Customizable infrastructure services

The Rancher container management platform makes it easy to define and
save different combinations of networking, storage and load balancer
drivers as environments. This enables users to repeatedly deploy
consistent implementations across any infrastructure, whether it is
public cloud, private cloud, a virtualized cluster, or bare-metal
servers. The services integrated with Rancher include:

  • Ingress controller with multiple load balancer implementations
    (HAProxy, Traefik, etc.)
  • Cross-host networking drivers for IPsec and VXLAN
  • Storage drivers
  • Certificate and security credentials management
  • Private registry credential management
  • DNS service, which is a drop-in replacement for SkyDNS
  • Highly customizable load balancer

If you choose to deploy an ingress controller on native Kubernetes, each
provider will have its own code base and set of configuration values.
The Rancher load balancer, however, offers a high level of customization to meet user needs. The Rancher ingress controller provides the flexibility to select your load balancer of choice, including HAProxy, Traefik, and NGINX, while the configuration interface remains the same. Rancher also
provides the ability to scale the load balancer, customize load balancer
source ports, and schedule the load balancer on a specific set of hosts.

A complete container management platform

You’ve probably figured this out for yourself by now but, to be clear,
Rancher is NOT a container orchestrator. It is a complete container
management platform that includes everything you need to manage
containers in production. You can quickly deploy and run multiple
clusters across multiple clouds with the click of a button using Rancher,
or select from one of the integrated and supported container
orchestrator distributions, including Kubernetes as well as Mesos, Docker Swarm and Windows. Pluggable infrastructure services provide the basis for portability across infrastructure providers. Whether running
containers on a single on-premises cluster or multiple clusters running
on Amazon AWS and other service providers, Rancher is quickly becoming
the container management platform of choice for thousands of Kubernetes
users.

Get started with containers, Kubernetes, and Rancher today!

For step-by-step instructions on how to get started with Kubernetes
using the Rancher container management platform, please refer to the
Kubernetes eBook, which is available
here. Or,
if you are heading to KubeCon 2017 in Berlin, stop by booth S17 and we
can give you an in-person demonstration.

Louise is the Vice
President of Marketing at Rancher Labs where she is focused on defining
and executing impactful go-to-market strategy and marketing programs by
analyzing customer needs and market trends. Prior to joining Rancher,
Louise was Marketing Director for IBM’s Software Defined Infrastructure
portfolio of big data, cloud native and high performance computing
management solutions. Before the company was acquired by IBM in 2012,
Louise was Director of Marketing at Platform Computing. She has 15+
years of marketing and product management experience, including roles at
SGI and Sun Microsystems. Louise holds an MBA from Santa Clara
University’s Leavey School of Business and a Bachelor’s degree from
University of California, Davis. You can follow Louise on Twitter at @lwestoby.

Rancher Labs and NeuVector Partner to Deliver Management and Security for Containers

Tuesday, March 21, 2017

DevOps can now efficiently and securely deploy containers for enterprise applications

As more enterprises move to a container-based application deployment
model, DevOps teams are discovering the need for management and
orchestration tools to automate container deployments. At the same time,
production deployments of containers for business critical applications
require specialized container-intelligent security tools.

To address this, Rancher Labs and NeuVector today announced that they have partnered to make container security as easy to deploy as application containers. You can now easily deploy
the NeuVector container network
security solution with the Rancher container management platform. The
first and only container network security solution in the
Rancher application catalog, the addition
of NeuVector provides simple deployment of the
NeuVector containers into an enterprise container environment. NeuVector
secures containers where they have been most vulnerable: in production
environments where they are constantly being deployed, updated, moved,
and scaled across hosts and data centers. With constant behavioral
learning automatically applied to security policies for containers, the
NeuVector container network security
solution
delivers multi-layered
protection for containers and their hosts. Protection includes violation
and threat detection, vulnerability scanning, and privilege escalation
detection for hosts and containers. With one click in the Rancher
console, users can choose to deploy the NeuVector containers. Sample
configuration files are provided, and minimal setup is required before
deployment. Once the NeuVector containers are deployed, they instantly
discover running containers and automatically build a whitelist based
policy to protect them. Like Rancher, NeuVector supports cross host,
data center, and cloud deployments, relieving DevOps teams of
error-prone manual configurations for mixed environments.
Deploy the NeuVector security containers with the click of a button. View the demo. In addition to
production use, NeuVector is also valuable for debugging of application
connections during testing, and can be used after violations are
detected for forensic investigation. A convenient network packet capture
tool assists with investigations during test, production, and incident
management.

Henrik Rosendahl is Head of Business Development for
NeuVector. He is a serial enterprise software
entrepreneur and was the co-founder of CloudVolumes – named one of Five
Strategic Acquisitions That Reshaped VMware by Forbes. He is a frequent
speaker at VMworld, SNW, CloudExpo, and InterOp.


DevOps and Containers, On-Prem or in the Cloud

Tuesday, March 14, 2017

The cloud vs.
on-premises debate is an old one. It goes back to the days when the
cloud was new and people were trying to decide whether to keep workloads
in on-premises datacenters or migrate to cloud hosts. But the Docker
revolution has introduced a new dimension to the debate. As more and
more organizations adopt containers, they are now asking themselves
whether the best place to host containers is on-premises or in the
cloud. As you might imagine, there’s no single answer that fits
everyone. In this post, we’ll consider the pros and cons of both cloud
and on-premises container deployment and consider which factors can make
one option or the other the right choice for your organization.

DevOps, Containers, and the Cloud

First, though, let’s take a quick look at the basic relationship
between DevOps, containers, and the cloud. In many ways, the combination
of DevOps and containers can be seen as one way—if not the native
way—of doing IT in the cloud. After all, containers maximize
scalability and flexibility, which are key goals of the DevOps
movement—not to mention the primary reasons for many people in
migrating to the cloud. Things like virtualization and continuous
delivery seem to be perfectly suited to the cloud (or to a cloud-like
environment), and it is very possible that, even if DevOps had not originated in the Agile world, it would have developed quite naturally out of the process of adapting IT practices to the cloud.

DevOps and On-Premises

Does that mean, however, that containerization, DevOps, and continuous
delivery are somehow unsuited or even alien to on-premises deployment?
Not really. On-premises deployment itself has changed; it now has many
of the characteristics of the cloud, including a high degree of
virtualization, and relative independence from hardware constraints
through abstraction. Today’s on-premises systems generally fit the
definition of “private cloud,” and they lend themselves well to the
kind of automated development and operations cycle that lies at the
heart of DevOps. In fact, many of the major players in the
DevOps/container world, including AWS and Docker, provide strong support
for on-premises deployment, and sophisticated container management tools
such as Rancher are designed to work seamlessly across the
public/private cloud boundary. It is no exaggeration to say that
containers are now as native to the on-premises world as they are to the
cloud.

Why On-premises?

Why would you want to deploy containers on-premises?

Local Resources

Perhaps the most obvious reason is the need to directly access and use hardware features, such as storage, or processor-specific operations. If, for example, you are using an array of graphics chips for matrix-intensive computation, you are likely to be tied to local hardware. Containers, like virtual machines, always require some degree of abstraction, but running containers on-premises reduces the number of layers of abstraction between the application and underlying metal to a minimum. You can go from the container to the underlying OS's hardware access more or less directly, something which is not practical with VMs on bare metal, or with containers in the public cloud.

Local Monitoring

In a similar vein, you may also need containers to monitor, control, and manage local devices. This may be an important consideration in an industrial setting, or a research facility, for example. It is, of course, possible to perform monitoring and control functions with more traditional types of software. The combination of containerization and continuous delivery, however, allows you to quickly update and adapt software in response to changes in manufacturing processes or research procedures.

Local Control Over Security

Security may also be a major consideration when it comes to deploying containers on-premises. Since containers access resources from the underlying OS, they have potential security vulnerabilities; in order to make containers secure, it is necessary to take positive steps to add security features to container systems. Most container-deployment systems have built-in security features. On-premises deployment, however, may be a useful strategy for adding extra layers of security. In addition to the extra security that comes with controlling access to physical facilities, an on-premises container deployment may be able to make use of the built-in security features of the underlying hardware.

Legacy Infrastructure and Cloud Migration

What if you're not in a position to abandon existing on-premises infrastructure? If a company has a considerable amount of money invested in hardware, or is simply not willing or able to migrate away from a large and complex set of interconnected legacy applications all at once, staying on-premises for the time being may be the most practical (or the most politically prudent) short-to-medium-term choice. By introducing containers (and DevOps practices) on-premises, you can lay out a relatively painless path for gradual migration to the cloud.

Test Locally, Deploy in the Cloud

You may also want to develop and test containerized applications locally, then deploy in the cloud. On-premises development allows you to closely monitor the interaction between your software and the deployment platform, and observe its operation under controlled conditions. This can make it easier to isolate unanticipated post-deployment problems by comparing the application's behavior in the cloud with its behavior in a known, controlled environment. It also allows you to deploy and test container-based software in an environment where you can be confident that information about new features or capabilities will not be leaked to your competitors.

Public/Private Hybrid

Here’s another point to consider when you’re comparing cloud and
on-premises container deployment: public and private cloud deployment
are not fundamentally incompatible, and in many ways, there is really no
sharp line between them. This is, of course, true for traditional,
monolithic applications (which can, for example, also reside on private
servers while being accessible to remote users via a cloud-based
interface), but with containers, the public/private boundary can be made
even more fluid and indistinct when it is appropriate to do so. You can,
for example, deploy an application largely by means of containers in the
public cloud, with some functions running on on-premises containers.
This gives you granular control over things such as security or
local-device access, while at the same time allowing you to take
advantage of the flexibility, broad reach, and cost advantages of
public-cloud deployment.

The Right Mix for Your Organization

Which type of deployment is better for your company? In general,
startups and small-to-medium-size companies without a strong need to tie
in closely to hardware find it easy to move into (or start in) the
cloud. Larger (i.e. enterprise-scale) companies and those with a need to
manage and control local hardware resources are more likely to prefer
on-premises infrastructure. In the case of enterprises, on-premises
container deployment may serve as a bridge to full public-cloud
deployment, or hybrid private/public deployment. The bottom line,
however, is that the answer to the public cloud vs. on-premises question
really depends on the specific needs of your business. No two
organizations are alike, and no two software deployments are alike, but
whatever your software/IT goals are, and however you plan to achieve
them, between on-premises and public-cloud deployment, there’s more
than enough flexibility to make that plan work.


Containers and Application Modernization: Extend, Refactor, or Rebuild?

Monday, February 27, 2017

Technology is a
constantly changing field, and as a result, any application can feel out
of date in a matter of months. With this constant feeling of impending
obsolescence, how can we work to maintain and modernize legacy
applications? While rebuilding a legacy application from the ground up
is an engineer’s dream, business goals and product timelines often make
this impractical. It’s difficult to justify spending six months
rewriting an application when the current one is working just fine, code
debt be damned. Unfortunately, we all know that product development is
never that black and white. Compromises must be made on both sides of
the table, meaning that while a complete rewrite might not be possible,
the long-term benefits of application modernization efforts must still
be valued. While many organizations don’t have the luxury of building
brand new, cloud-native applications, there are still techniques that
can be used to modernize existing applications using container
technology like Docker. These modernization techniques ultimately fall
into three different categories: extend, refactor, and rebuild. But
before we get into them, let’s first touch on some Dockerfile basics.

Dockerfile Basics

For the uninitiated, Docker is a containerization platform that “wraps
a piece of software in a complete filesystem that contains everything
needed to run: code, runtime, system tools, system libraries” and
basically everything that can be installed on a server, without the
overhead of a virtualization platform. While the pros and cons of
containers are out of the scope of this article, one of the biggest
benefits of Docker is the ability to quickly and easily spin up
lightweight, repeatable server environments with only a few lines of
code. This configuration is accomplished through a file called the
Dockerfile, which is essentially a blueprint that Docker uses to build
container images. For reference, here’s a Dockerfile that spins up a
simple Python-based web server (special thanks to Baohua Yang for the awesome example):

# Use the python:2.7 base image
 FROM python:2.7

# Expose port 80 internally to Docker process
 EXPOSE 80

# Set /code to the working directory for the following commands
 WORKDIR /code

# Copy all files in current directory to the /code directory
 ADD . /code

# Create the index.html file in the /code directory
 RUN touch index.html

# Start the python web server
 CMD python index.py

This is a simplistic example, but it does a good job of illustrating
some Dockerfile basics, namely extending pre-existing images, exposing
ports, and running commands and services. Even these few instructions
can be used to spin up extremely powerful microservices, as long as the
base source code is architected properly.
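For instance, assuming the Dockerfile above sits in a directory next to an index.py that serves HTTP on port 80, building and running it locally would look something like this (the image name and host port are arbitrary choices):

$ docker build -t python-webserver .
$ docker run -d -p 8080:80 python-webserver
$ curl http://localhost:8080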

Application Modernization

At a high level, containerizing an existing application is a relatively
straightforward process, but unfortunately not every application is
built with containerization in mind. Docker has an ephemeral filesystem,
which means that storage within a container is not persistent. Any file
that is saved within a Docker container will be lost unless specific
steps are taken to avoid this. Additionally, parallelization is another
big concern with containerized applications. Because one of the big
benefits of Docker is the ability to quickly adapt to increasing traffic
requirements, these applications need to be able to run in parallel with
multiple instances. As mentioned above, in order to prepare a legacy
application for containerization, there are a few options available:
extend, refactor, or rebuild. But which solution is the best depends
entirely on the needs and resources of an organization.
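To make the storage concern concrete, here is a minimal sketch (the image name and paths are placeholders): any file written inside the container is lost when the container is removed, unless the path is backed by a volume or external storage.

# Files written to /app/uploads disappear along with the container
$ docker run -d --name web my-legacy-app

# Mounting a host directory (or a named volume) keeps that data outside the container
$ docker run -d --name web -v /srv/app-uploads:/app/uploads my-legacy-app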

Extend

Extending the existing functionality of a non-containerized application
often requires the least amount of commitment and effort on this list,
but if it isn’t done right, the changes that are made can lead to
significantly more technical debt. The most effective way to extend an
existing application with container technology is through microservices
and APIs. While the legacy application itself isn’t being
containerized, isolating new features into Docker-based microservices
allows for the modernization of a product, and at the same time tees the
legacy code up for easier refactoring or rebuilding in the future.

At a high level, extension is a great choice for applications that are
likely to be rebuilt or sunset at some point in the not-too-distant
future—but the older the codebase, the more it might be necessary to
completely refactor certain parts of it to accommodate a Docker
platform.

Refactor

Sometimes, extending an application through microservices or APIs isn’t
practical or possible. Whether there is no new functionality to be
added, or the effort to add new features through extension is too high
to justify, refactoring parts of a legacy codebase might be necessary.
This can be easily accomplished by isolating individual pieces of
existing functionality from the current application into containerized
microservices. For example, refactoring an entire social network into a
Docker-ready application might be impractical, but pulling out the piece
of functionality that runs the user search engine is a great way to
isolate individual components as separate Docker containers.

Another great place to refactor a legacy application is the storage
mechanism used for writing things like logs, user files, etc. One of the
biggest roadblocks to running an application within Docker is the
ephemeral filesystem. Dealing with this can be handled in one of a few
ways, the most popular of which is through the use of a cloud-based
storage method like Amazon S3 or Google Cloud Storage. By refactoring
the file storage method to utilize one of these platforms, an
application can be easily run in a Docker container without losing any
data.

Rebuild

When a legacy application is unable to support multiple running
instances, it might be impossible to add Docker support without
rebuilding it from the ground up. Legacy applications can have a long
shelf life, but there comes a point when poor architecture and design
decisions made in the early stages of an application can prevent
efficient refactoring of an application in the future. Being aware of
impending development brick walls is crucial to identifying risks to
productivity.

Ultimately, there is no hard rule when it comes to modernizing legacy
applications with container technology. The best decision is often the
one that is dictated by both the needs of the product and the needs of
the business, but understanding how this decision affects the
organization in the long run is crucial to ensuring a stable application
without losing productivity.

To learn more about using containers, join our February Online Meetup: More Tips and Tricks for Running Containers Like a Pro, happening Tuesday, Feb 28.

Zachary Flower (@zachflower) is a
freelance web developer, writer, and polymath. He’s built projects for
the NSA and created features for companies like Name.com and Buffer.


Playing Catch-up with Docker and Containers

Friday, February 17, 2017

This article is essentially a guide to getting started with Docker for
people who, like me, have a strong IT background but feel a little
behind the curve when it comes to containers. We live in an age where
new and wondrous technologies are being introduced into the market
regularly. If you’re an IT professional, part of your job is to identify
which technologies are going to make it into the toolbox for the average
developer, and which will be relegated to the annals of history. Docker
is one of those technologies that sounded interesting when it first
debuted in 2013, but was easy to ignore because at the time it was not
clear whether Docker would ever graduate beyond something that
developers liked to play with in their spare time. Personally, I didn’t
pay close attention to Docker containers in Docker’s early days. They
got lost amid all the other noise in the IT world. That’s why, in 2016,
as Docker continued to rise in prominence, I realized that I’d missed
the container boat. Docker was becoming a must-know technology, and I
was behind the curve. If you’re reading this, you may well be in a
similar position. But there's good news: container technology, and Docker specifically, is not hard to pick up and learn if you already have a background in IT. (Register now for free online training on deploying containers with Rancher.)

Sure, containers can be a little scary when you’re first getting
started, just like any new technology. But rest assured that it’s not
too late to get on the container train, even if you weren’t writing
Docker files back in 2013. I’ll explain what Docker is and how container
technology works, then go through the first steps in setting Docker up
on your workstation and getting a container running that you can
interact with. Finally, I’ll direct you to some of the resources I used
to familiarize myself with Docker, so you can continue your journey.

What is Docker and How Does it Work?

Docker is a technology that allows you to create and deploy an application
together with a filesystem and everything needed to run it. The Docker
container, as it is called, can be installed on any machine, as long as
the Docker engine has been installed, and can be expected to always run
in the same manner. A physical machine with the Docker Engine installed
can host multiple Docker containers, each sharing the resources of the
host machine. You may already be familiar with machine virtualization,
either as a result of running local virtual machines using VMware on
your workstations, or interacting with cloud services like Amazon Web
Services or Microsoft Azure. Container technology is similar in some
ways, and different in others. Let's start by comparing the basic structure of a machine hosting Docker containers with that of a machine hosting virtual machines.
In both cases the host machine has its infrastructure and host operating
system. Virtual machines then require a hypervisor which is software or
firmware that allows virtual machines to be hosted. The virtual machines
themselves each contain their own operating system and the application,
together with its required binaries, libraries and any other
dependencies. Similarly, the machine hosting the Docker containers has
its own infrastructure and operating system. Instead of the hypervisor,
it has the Docker Engine installed, and this is what interacts with the
containers. Each container holds its application and the required
binaries, libraries and other dependencies. It is important to note that
they don’t require their own guest operating system. This allows the
containers to be significantly smaller in size, and able to be
distributed, deployed and started in a fraction of the time taken by
virtual machines.

Other key differences are that virtual machines have specifically
allocated access to the system resources, while Docker containers share
host system resources through the Docker engine.

Installing Docker and Discovering Docker Hub

I can’t think of a better way to learn about new technology than to
install it, and get your hands dirty. Let’s install the Docker Engine on
your workstation and a simple Docker container. Before we can deploy a
container, we’ll need the Docker Engine. This is the platform that will
host the container and allow it to interact with the underlying
operating system. You’ll want to pick the appropriate download from the
Docker products page, and
install it on your workstation. Downloads are available for OS X,
Windows, Linux, and a host of other operating systems. Once we have the
Docker platform installed, we’re now ready to get a container running.
Before we do that though, let's familiarize ourselves with Docker Hub. Docker Hub is a central repository for Docker container images. Let's pretend that you're working on a Windows
machine, and you’d like to deploy an app on SUSE Linux. If you go to
Docker Hub and search for openSUSE, you'll be shown a list of
repositories. At the time of writing there were 212 repositories listed.
You’ll want to look for the “official” repository. The official
repositories are maintained by a team of engineers sponsored by Docker.
Official repositories have clear documentation and promote best
practices. Now search for BusyBox. BusyBox combines tiny versions of many common Unix utilities into a single small executable, and provides all of the
functionality we’ll need for this example. If you go to the official
repository, you’ll be able to read some good documentation on the image.
Let’s get a BusyBox container running on your workstation.

Getting Your First Container Running

Assuming you’ve installed the Docker Engine, open a new command prompt
on your workstation. If you’re on a Windows machine, I’d recommend using
the Docker Quick Start link which was included as part of your
installation. This will launch an interactive shell that will make it
easier to work with Docker. You don't need this on OS X or other Linux-based systems. Enter the following command:

$ docker run -it --rm busybox

This will search the local machine for the latest BusyBox image, and
then download it from DockerHub if it isn’t found. The process should
take only a couple of seconds, and you should have something similar to
the text shown below on your screen:

$ docker run -it --rm busybox
Unable to find image `busybox:latest` locally
latest: Pulling from library/busybox
4b0bbc1c4050b: Pull complete
Digest: sha256:817a12c32a39bbe394944ba49de563e08f1d3c5266eb89723256bc4448680e
Status: Downloaded newer image for busybox:latest
/ #

We started a new Docker container, using the BusyBox image. We used the
-it parameters to specify that we want an interactive, pseudo TTY
session, and the --rm flag indicates that we want to delete the
container once we exit it. If you execute a command like ‘ls’ you’ll see
that you have access to a new Linux filesystem. Play around a little,
and when you’re done, enter `exit` to exit the container, and remove
it from the system. Congratulations! You’ve now created, interacted
with, and shut down your own Docker container.

Creating Your Own Docker Image

Being able to start up and close down a container is fun, but it doesn’t
have much practical use. Let’s start a new container, install something
on it, and then save it as a container for someone else to use. We’ll
start with a Debian container, install Git on it, and then save it for
later use. This time, we'll start the container without the --rm flag,
and we’ll specify a version to use as well. Type the following into your
command prompt:

$ docker run -it debian:jessie

You should now have a Debian container running—specifically the jessie
tag/release from Docker Hub. Type the `git` command when you have the
container running. You should observe something similar to the
following:

root@4a4882a7ed59:/# git
bash: git: command not found

So it appears this container doesn’t have Git installed. Let’s rectify
that situation by installing Git:

root@4a4882a7ed59:/# apt-get update && apt-get install -y git

This may take a little longer to run, but it will update the package index and then install Git. When it finishes up, type `git` again.
Voila! At this point, we have a container started, and we’ve installed
Git. We started the container without the --rm parameter, so when we
exit it, it won’t destroy the container. Let’s exit now. Type `exit`.
Now we want to get the ID of the container we just ran. To find this, we
type the following command:

$ docker ps -a

You should now see a list of recent containers. My results looked
similar to what’s below:

CONTAINER ID       IMAGE            COMMAND       CREATED        STATUS                          PORTS       NAMES
4a4882a7ed59       debian:jessie    “/bin/bash”   9 minutes ago  Exited (1) About a minute ago               hungry_fermet

It can be a little hard to read, especially if the results get wrapped
in your command window. What we’re looking for is the container ID,
which in my case was 4a4882a7ed59. Yours will be different, but similar
in format. Run the following command, replacing my container ID with
yours. The name test:example is arbitrary as well: test will be the name of your saved image, and example will be the version or tag of that image.

$ docker commit 4a4882a7ed59 test:example

You should see a sha256 response once the container is saved. Now, run
the following to list all the images available on your local machine:

$ docker images

Docker will list the images on your machine. You should be able to find
a repository called test with a tag of example. Let’s see if it worked.
Start up your container using the following command, assuming you saved
your image with the same name and tag as I did.

$ docker run -it test:example

Once you have it running, try and execute the git command. It should
return with a list of possible options for Git. You did it! You created
a custom image of Debian with Git installed. You’re practically a Docker
Master at this point.
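If you want that image to be usable beyond your own workstation, the usual next step is to tag it with a registry namespace and push it. The account name below is a placeholder, and you would need to log in to Docker Hub first:

$ docker tag test:example your-dockerhub-username/test:example
$ docker login
$ docker push your-dockerhub-username/test:example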

Following the Container Ecosystem

Using containers effectively also requires a familiarity with the trends
that are defining the container ecosystem. In 2013, when Docker debuted,
the ecosystem consisted of, well, Docker. But it has changed in big ways
since then. Orchestrators, which automate the provisioning of
infrastructure for containers, have evolved and become an essential part
of large-scale container deployment. Storage options have become more
sophisticated, simplifying the task of moving data between containers
and external, persistent storage systems. Monitoring solutions for
containers have been extended from basic tools like the Docker stats
command to include commercial monitoring and APM tools designed for
containers. And Docker now even runs on Windows as well as Linux (albeit
with some important caveats, like limited networking support at this
time). Discussing all of the container ecosystem trends in detail is
beyond the scope of this article. But in order to make the most of
containers, you should follow the news in the container ecosystem to
gain a sense of what is coming next as containers and the solutions that
support them become more and more sophisticated.

Continuing to Learn About Containers

Obviously this just scratches the surface of what containers offer, but
this should give you a good start, and afford you enough of a base of
understanding to create, modify and deploy your own containers locally.
If you would like to know more about Docker, the Web is full of useful
tutorials and additional information.

Mike Mackrory is a Global citizen who has settled down in the Pacific
Northwest – for now. By day he works as a Senior Engineer on a Quality
Engineering team and by night he writes, consults on several web based
projects and runs a marginally successful eBay sticker business. When
he’s not tapping on the keys, he can be found hiking, fishing and
exploring both the urban and the rural landscape with his kids.


Introducing Containers into Your DevOps Processes: Five Considerations

Wednesday, February 15, 2017

Docker
has been a source of excitement and experimentation among developers
since March 2013, when it was released into the world as an open source
project. As the platform has become more stable and achieved increased
acceptance from development teams, a conversation about when and how to
move from experimentation to the introduction of containers into a
continuous integration environment is inevitable. What form that
conversation takes will depend on the players involved and the risk to
the organization. What follows are five important considerations which
should be included in that discussion.

Define the Container Support Infrastructure

When you only have a developer or two experimenting with containers, the
creation and storage of Docker images on local development workstations
is to be expected, and the stakes aren’t high. When the decision is made
to use containers in a production environment, however, important
decisions need to be made surrounding the creation and storage of Docker
images. Before embarking on any kind of production deployment journey,
ask and answer the following questions:

  • What process will be followed when creating new images?

    • How will we ensure that images used are up-to-date and secure?
    • Who will be responsible for ensuring images are kept current,
      and that security updates are applied regularly?
  • Where will our Docker images be stored?

    • Will they be publicly accessible on DockerHub?
    • Do they need to be kept in a private repository? If so, where
      will this be hosted?
  • How will we handle the storage of secrets on each Docker image? This
    will include, but is not limited to:

    • Credentials to access other system resources
    • API keys for external systems such as monitoring
  • Does our production environment need to change?

    • Can our current environment support a container-based approach
      effectively?
    • How will we manage our container deployments?
    • Will a container-based approach be cost-effective?

Don’t Short-Circuit Your Continuous Integration Pipeline

Free eBook: Continuous Integration and Deployment with Docker and Rancher

Perhaps one of Docker's best features is that a
container can reasonably be expected to function in the same manner,
whether deployed on a junior developer’s laptop or on a top-of-the-line
server at a state-of-the-art data center. Therefore, development teams
may be tempted to assume that localized testing is good enough, and that
there is limited value from a full continuous integration (CI) pipeline.
What the CI pipeline provides is stability and security. By running all
code changes through an automated set of tests and assurances, the team
can develop greater confidence that changes to the code have been
thoroughly tested.

Follow a Deployment Process

In the age of DevOps and CI, we have the opportunity to deliver bug
fixes, updates and new features to customers faster and more efficiently
than ever. As developers, we live for solving problems and delivering
quality that people appreciate. It’s important, however, to define and
follow a process that ensures key steps aren’t forgotten in the thrill
of deployment. In an effort to maximize both uptime and delivery of new
functionality, the adoption of a process such as blue-green deployments
is imperative (for more information, I’d recommend Martin Fowler’s
description of Blue Green Deployment).
The premise as it relates to containers is to have both the old and new
containers in your production environment. Use of dynamic load balancing
to slowly and seamlessly shift production traffic from the old to the
new, whilst monitoring for potential problems, permits relatively easy
rollback should issues be observed in the new containers.
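As a minimal sketch of that idea using nothing but Docker commands (the image names, ports, and health endpoint are hypothetical, and a real deployment would switch traffic at a load balancer or service discovery layer rather than by hand):

# "Blue" is the version currently serving production traffic
$ docker run -d --name app-blue -p 8080:80 my-app:1.0

# Start "green" alongside it on a different host port
$ docker run -d --name app-green -p 8081:80 my-app:1.1

# Smoke-test green before it receives any real traffic
$ curl -f http://localhost:8081/health

# Re-point the load balancer at green, and keep blue running until you are
# confident a rollback will not be needed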

Don’t Skimp on Integration Testing

Containers may run the same, independently of the host system, but as we
move containers from one environment to another, we run the risk of
breaking our external dependencies, whether they be connections to
third-party services, databases, or simply differences in the
configuration from one environment to another. For this reason, it is
imperative that we run integration tests whenever a new version of a
container is deployed to a new environment, or when changes to an
environment may affect the interactions of the containers within.
Integration tests should be run as part of your CI process, and again as
a final step in the deployment process. If you’re using the
aforementioned blue-green deployment model, you can run integration
tests against your new containers before configuring the proxy to
include the new containers, and again once the proxy has been directed
to point to the new containers.

Ensure that Your Production Environment is Scalable

The ease with which containers can be created and destroyed is a
definite benefit of containers, until you have to manage those
containers in a production environment. Attempting to do this manually
with anything more than one or two containers would be next to
impossible. Consider this with a deployment containing multiple
different containers, scaled at different levels, and you face an
impossible task.

When considering the inclusion of container technology as part of the DevOps
process and putting containers into production, I’m reminded of some
important life advice I received many years ago—“Don’t do dumb
things.” Container technology is amazing, and offers a great deal to
our processes and our delivery of new solutions, but it’s important that
we implement it carefully.

Mike Mackrory is a Global citizen who has
settled down in the Pacific Northwest – for now. By day he works as a
Senior Engineer on a Quality Engineering team and by night he writes,
consults on several web based projects and runs a marginally successful
eBay sticker business. When he’s not tapping on the keys, he can be
found hiking, fishing and exploring both the urban and the rural
landscape with his kids.


Containers: Making Infrastructure as Code Easier

Tuesday, January 31, 2017

What do Docker containers have to do with Infrastructure as Code (IaC)? In a
word, everything. Let me explain. When you compare monolithic
applications to microservices, there are a number of trade-offs. On the
one hand, moving from a monolithic model to a microservices model allows
the processing to be separated into distinct units of work. This lets
developers focus on a single function at a time, and facilitates testing
and scalability. On the other hand, by dividing everything out into
separate services, you have to manage the infrastructure for each
service instead of just managing the infrastructure around a single
deployable unit. Infrastructure as Code was born as a solution to this
challenge. Container technology has been around for some time, and it
has been implemented in various forms and with varying degrees of
success, starting with chroot in the early 1980s and taking the form of
products such as Virtuozzo and Sysjail since
then. It wasn’t until Docker burst onto the scene in 2013 that all the
pieces came together for a revolution affecting how applications can be
developed, tested and deployed in a containerized model. Together with
the practice of Infrastructure as Code, Docker containers represent one
of the most profoundly disruptive and innovative changes to the process
of how we develop and release software today.

What is Infrastructure as Code?

Free eBook: Continuous Integration and Deployment with Docker and Rancher

Before we delve into Infrastructure as Code and how
it relates to containers, let’s first look at exactly what we mean when
we talk about IaC. IaC refers to the practice of scripting the
provisioning of hardware and operating system requirements concurrently
with the development of the application itself. Typically, these scripts
are managed in a similar manner to the software code base, including
version control and automated testing. When properly implemented, the
need for an administrator to log into a new machine and configure it
manually is replaced by scripts which describe the ideal state of the
new machine, and execute the necessary steps in order to configure the
machine to realize that state.

Key Benefits Realized in Infrastructure as Code

IaC seeks to relieve the most common pain points with system
configuration, especially the fact that configuring a new environment
can take a significant amount of time. Each environment needs to be
configured individually, and when something goes wrong, it can often
require starting the process all over again. IaC eliminates these pain
points, and offers the following additional benefits to developers and
operational staff:

  1. Relatively easy reuse of common scripts.
  2. Automation of the entire provisioning process, including being able
    to provision hardware as part of a continuous delivery process.
  3. Version control, allowing newer configurations to be tested and
    rolled back as necessary.
  4. Peer review and hardening of scripts. Rather than manual
    configuration from documentation or memory, scripts can be reviewed,
    updated and continually improved.
  5. Documentation is automatic, in that it is essentially the scripts
    themselves.
  6. Processes are able to be tested.

Taking Infrastructure as Code to a Better Place with Containers

As developers, I think we’re all familiar with some variant of, “I don’t
know mate, it works on my machine!” At best, it’s mildly amusing to
utter, and at worst it represents one of the key frustrations we deal
with on a daily basis. Not only does the Docker revolution effectively
eliminate this concern, it also brings IaC into the development process
as a core component. To better illustrate this, let’s consider a
Dockerized web application with a simple UI. The application would have
a Dockerfile similar to the one shown below, specifying the
configuration of the container which will contain the application.

FROM ubuntu:12.04

# Install dependencies
RUN apt-get update -y && apt-get install -y git curl apache2 php5 libapache2-mod-php5 php5-mcrypt php5-mysql

# Install app
RUN rm -rf /var/www/*
ADD src /var/www

# Configure apache
RUN a2enmod rewrite
RUN chown -R www-data:www-data /var/www
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2

EXPOSE 80

CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]

If you’re familiar with Docker, this is a fairly typical and simple
Dockerfile, and you should already know what it does. If you’re not
familiar with the Dockerfile, understand that this file will be used to
create a Docker image, which is essentially a template that will be used
to create a container. When the Docker container is created, the image
will be used to build the container, and a self-contained application
will be created. It will be available for use on whatever machine it is
instantiated on, from developer workstation to high-availability cloud
cluster. Let’s look at a couple of key elements of the file, and explore
what they accomplish in the process.

FROM ubuntu:12.04

This line pulls in an Ubuntu Docker image from Docker Hub to use as the
base for your new container. Docker Hub is the primary online repository
of Docker images. If you visit Docker Hub and search for this image,
you'll be taken to the repository for Ubuntu. The image is an
official image, which means that it is one of a library of images
managed by a dedicated team sponsored by Docker. The beauty of using
this image is that when something goes wrong with your underlying
technology, there is a good chance that someone has already developed
the fix and implemented it, and all you would need to do is update your
Dockerfile to reference the new version, rebuild your image, and test
and deploy your containers again. The remaining lines in the Dockerfile
install various packages on the base image using apt-get, add the source of your application to the /var/www directory, configure Apache, and then set the exposed port for the container to port 80. Finally, the CMD
command is run when the container is brought up, and this will initiate
the Apache server and open it for http requests. That’s Infrastructure
as Code in its simplest form. That’s all there is to it. At this point,
assuming you have Docker installed and running on your workstation, you
could execute the following command from the directory in which the
Dockerfile resides.

$ docker build -t my_demo_application:v0.1 .

Docker will build your image for you, naming it my_demo_application
and tagging it with v0.1, which is essentially a version number. With
the image created, you could now take that image and create a container
from it with the following command.

$ docker run -d my_demo_application:v0.1

And just like that, you’ll have your application running on your local
machine, or on whatever hardware you choose to run it.
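One caveat worth noting: the command above starts the container in the background, but to reach the Apache server from your workstation you would normally also publish the exposed port, along these lines (host port 8080 is an arbitrary choice):

$ docker run -d -p 8080:80 my_demo_application:v0.1
$ curl http://localhost:8080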

Taking Infrastructure as Code to a Better Place with Docker Containers and Rancher

A single file, checked in with your source code, that specifies an environment, configuration, and access for your application. In its
purest form, that is Docker and Infrastructure as Code. With that basic
building block in place, you can use docker-compose to define composite
applications with multiple services, each containing an individualized
Dockerfile, or an image imported from a Docker repository. For further reading on this topic, and tips on implementation, check out Rancher's documentation on infrastructure services and environment templates. You can also read up on Rancher Compose, which lets you define applications for multiple hosts.

Mike Mackrory
is a Global citizen who has settled down in the Pacific Northwest – for
now. By day he works as a Senior Engineer on a Quality Engineering team
and by night he writes, consults on several web based projects and runs
a marginally successful eBay sticker business. When he’s not tapping on
the keys, he can be found hiking, fishing and exploring both the urban
and the rural landscape with his kids.


Security for your Container Environment

Thursday, January 26, 2017

As
one of the most disruptive technologies in recent years, container-based
applications are rapidly gaining traction as a platform on which to
launch applications. But as with any new technology, the security of
containers in all stages of the software lifecycle must be our highest
priority. This post seeks to identify some of the inherent security
challenges you’ll encounter with a container environment, and suggests
base elements for a Docker security plan to mitigate those vulnerabilities.

Benefits of a Container Environment and the Vulnerabilities They Expose

Before we investigate what aspects of your container infrastructure will
need to be covered by your security plan, it would be wise to identify
what potential security problems running applications in such an
environment will present. The easiest way to do this is to contrast a
typical virtual machine (VM) environment with that in use for a typical
container-based architecture. In a traditional VM environment, each
instance functions as an isolated unit. One of the downsides to this
approach is that each unit needs to have its own operating system
installed, and there is a cost both in terms of resources and initiation
time that needs to be incurred when starting a new instance.
Additionally, resources are dedicated to each VM, and might not be
available for use by other VMs running on the same base machine.
Free eBook: Compare architecture, usability, and feature sets for Kubernetes, Mesos/Marathon, and Docker Swarm

In a
container-based environment, each container comprises a bare minimum of
supporting functionality. There is no need to virtualize an
entire operating system within each container, and resource use is shared
between all containers on a device. The overwhelming benefit to this
approach is that initiation time is minimized, and resource usage is
generally more efficient. The downside is a significant loss in
isolation between containers, relative to the isolation that exists in a
VM environment, and this brings with it a number of security
vulnerabilities.

Identifying Vulnerabilities

Let’s identify some of the vulnerabilities that we inherit by virtue of
the container environment, and then explore ways to mitigate these, and
thus create a more secure environment in which to deploy and maintain
your containers.

  • Shared resources on the underlying infrastructure expose the risk of
    attack if the integrity of the container is compromised.

    • Access to the shared kernel key ring means that the user running
      the container has the same access within the kernel across all
      containers.
    • Denial of Service is possible if a container is able to consume
      all resources in the underlying infrastructure.
    • Kernel modules, and the kernel itself, are accessible by all
      containers.
  • Exposing a port on a container opens it to all traffic by default.
  • Docker Hub and other public-facing image repositories are “public.”
  • Container secrets can be compromised if they are not stored and
    handled securely.

Addressing the Problems of Shared Resources

Earlier versions of Docker, especially those prior to version 1.0,
contained a vulnerability that allowed a user to break out of the
container and into the kernel of the host machine. Exploiting this
vulnerability when the container was running as the root user exposed
all kernel functionality to the person exploiting it. While this
vulnerability has been patched since version 1.0, it is still
inadvisable to run a container with a user who has anything more than
the minimum required privileges. If you are running containers with
access to sensitive information, it is also recommended that you
segregate different containers onto different virtual machines, with
additional security measures applied to the virtual machines as
well—although at this point, it may be worth considering whether using
containers to serve your application is the best approach. A further
precaution you may want to consider is to apply additional security
measures on the virtual machine, such as seccomp or other kernel
security features. Finally, tuning the capabilities available to
containers using the cap-add and cap-drop flags when the container is
created can further protect your host machine from unauthorized access.
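
As a rough sketch of those precautions (the user ID, capability and seccomp profile path are assumptions for illustration, not values from this article), a container might be started as an unprivileged user, with every capability dropped except the ones it genuinely needs, and with a seccomp profile applied:

# Run as an unprivileged user, drop all capabilities, add back only what is
# needed, and apply a seccomp profile (the profile path is hypothetical)
$ docker run -d --user 1000:1000 --cap-drop ALL --cap-add NET_BIND_SERVICE \
    --security-opt seccomp=/path/to/profile.json my_demo_application:v0.1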

Limiting Port Access Through Custom IPTables Rules

When configuring a Docker image, your Dockerfile might include a line
similar to “EXPOSE 80”, which marks port 80 as accepting traffic into
the container and, once the port is published, leaves it open to all
sources by default. Depending on the access you are expecting or
allowing into your container, it may be advantageous to add iptables
rules, either on the image or on the host, to restrict access on this
port. The exact commands may vary depending on the base container and
the rules you would like to enforce, so it would be best to work with
operations personnel in implementing these rules.
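
As an illustration only (the subnet is an assumed value, and the exact chain depends on your base image and on how the port is published), rules like the following would limit access to port 80 to a trusted internal network and drop everything else:

# Accept traffic to port 80 only from an assumed trusted range
$ sudo iptables -A INPUT -p tcp --dport 80 -s 10.0.0.0/8 -j ACCEPT
# Drop all other traffic destined for port 80
$ sudo iptables -A INPUT -p tcp --dport 80 -j DROP

Note that traffic to ports published by Docker is NATed and may traverse the FORWARD chain rather than INPUT, which is one more reason to review such rules with your operations team, as suggested above.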

Avoiding the Dangers Inherent with a Public Image Repository

As a repository for images, Docker Hub is an extremely valuable
resource, harnessing the power of the global community in the
development and maintenance of images. But it is also publicly
accessible, which introduces additional risks alongside the benefits. If
your container strategy involves using images from Docker Hub or another
public repository, it’s imperative that you and your developers:

  • Know where the images came from and verify that you’re getting the
    image you expect.
  • Always specify a tag in your FROM statement; make it specific to a
    stable version of the image, and not “:latest”
  • Use the official version of an image, which is supported, maintained
    and verified by a dedicated team, sponsored by Docker, Inc.,
    wherever possible.
  • Secure and harden host machines through a rigorous QA process.
  • Scan container images for vulnerabilities.

When dealing with intellectual property, or with applications which
handle sensitive information, it may be wise to investigate using a
private repository for your images instead of a public repository like
Docker Hub. Amazon Web Services provides information on setting up an
Amazon EC2 Container Registry (Amazon ECR) here, and DigitalOcean
provides instructions (albeit a few years old) for creating a private
repository on Ubuntu here.

Securing Container Secrets

The subject of securing credentials such as database passwords, SSH
keys, and API tokens has recently been at the forefront for the Docker
community. One solution to the issue is the implementation of a secure
store, such as HashiCorp Vault or Square Keywhiz. These stores provide a
virtual file system to the application, which maintains the integrity of
secure keys and passwords.
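
As a minimal sketch of the secure-store approach, assuming a Vault server is already running and unsealed (the path and values below are hypothetical, and exact commands vary with the Vault version and how its backends are mounted), a secret could be written and read back like this:

# Store a database password at an application-specific path
$ vault write secret/my_demo_application/db password=s3cr3t-value
# Read it back (an application would typically do this through the Vault API)
$ vault read secret/my_demo_application/db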

Security Requires an Upfront Plan, and Constant Vigilance

Any security plan worth implementing needs to have two parts. The first
involves the comprehensive identification and mitigation of potential
threats and vulnerabilities to the system. The second is a commitment to
constant evaluation of the environment, including regular testing and
vulnerability scans, and monitoring of production systems. Together with
your security plan, you need to identify the methods by which you will
monitor your system, including the automation of alerts to be triggered
when system resources exceed predetermined limits and when non-standard
behavior is being exhibited by the containers and their underlying
hosts.

Mike Mackrory is a global citizen who has settled down in the Pacific
Northwest – for now. By day he works as a Senior Engineer on a Quality
Engineering team, and by night he writes, consults on several web-based
projects and runs a marginally successful eBay sticker business. When
he’s not tapping on the keys, he can be found hiking, fishing and
exploring both the urban and the rural landscape with his kids.


Moving Containers to Production – A Short Checklist

mardi, 10 janvier, 2017

If
you’re anything like me, you’ve been watching the increasing growth of
container-based solutions with considerable interest, and you’ve
probably been experimenting with a couple of ideas. At some point in the
future, perhaps you’d like to take those experiments and actually put
them out there for people to use. Why wait? It’s a new year, and there
is no time like the present to take some action on that goal.
Experimenting is great, and you learn a great deal, but often in the
midst of trying out new things, hacking different technologies together
and making it all work, things get introduced into our code which
probably shouldn’t be put into a production environment. Sometimes,
having a checklist to follow when we’re excited and nervous about
deploying new applications out into the wild can help ensure that we
don’t do things we shouldn’t. Consider this article as the start of a
checklist to ready your Docker applications for prime time.

Item 1: Check Your Sources

Years ago, I worked on a software project with a
fairly large team. We started running into a problem: once a week, at 2
PM on a Tuesday afternoon, our build would start failing. At first we
blamed the last guy to check his code in, but then it mysteriously
started working before he could identify and check in a fix. And then
the next week it happened again. It took a little research, but we
traced the source of the failure to a dependency in the project which
had been set to always pull the latest snapshot release from the vendor,
and it turned out that the vendor had a habit of releasing a new, albeit
buggy version of their library around 2 PM on Tuesday afternoons. Using
the latest and greatest versions of a library or a base image can be fun
in an experiment, but it’s risky when you’re relying on it in a
production environment. Scan through your Docker configuration files,
and check for two things.

First, ensure that you have your source images tied to a stable
version of the image. Any occurrence of :latest in your Docker
configuration files should fail the smell test.
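
In practice, that means the top of your Dockerfile should look like the second line below rather than the first (the base image and tag shown are illustrative examples, not values from this article):

# Risky: floats to whatever the vendor pushed most recently
FROM node:latest
# Better: pinned to a specific, stable version of the base image
FROM node:6.9.4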

Second, if you are using Docker Hub as your image repository, use the
official image wherever possible. Among the reasons for doing this:
“These repositories have clear documentation, promote best practices,
and are designed for the most common use case.” (Official Repositories
on Docker Hub)

Item 2: Keep your Secrets…Secret

As Gandalf asked, “Is it secret? Is it safe?” Our applications need
secret information. Most need a combination of database credentials, API
tokens, SSH keys and other information which is not appropriate, or
advisable, to expose to a public audience. Secret storage is one of the
biggest weaknesses of container technology. Some solutions which have
been implemented, but which are not recommended, are:

Baking the secrets into the image. Anyone with access to the
registry can potentially access the secrets, and if you have to update
them, this can be a rather tedious process.

Using volume mounts. Unfortunately, this keeps all of your secrets
in a single and static location, and usually requires them to be stored
in plain text.

Using environment variables. These are easily accessible by all
processes using the image, and can usually be viewed with docker
inspect.
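
For example, anyone with access to the Docker host can dump a container’s environment in one command (the container name here is hypothetical):

# Prints every environment variable passed to the container, secrets included
$ docker inspect --format '{{ .Config.Env }}' my_demo_application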

Encrypted solutions. Secrets are stored in an encrypted state, with
decryption keys on the host machines. While your passwords and other key
data elements aren’t stored in plain text, they are fairly easy to
locate, and the decryption methods can be identified just as easily.

The best solution at this point is to use a secrets store, such as
Vault by HashiCorp or Keywhiz from Square. Implementation
is typically API-based and very reliable. Once implemented, a secret
store provides a virtual filesystem to an application, which it can use
to access secured information. Each store provides documentation on how
to set up, test and deploy a secret store for your application.
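
From the application’s point of view, the result is usually just a path it can read at startup. A minimal sketch of a container entrypoint, assuming the store (or an agent for it) exposes the secret at /run/secrets/db_password (the mount path, variable name and binary path are assumptions for illustration):

#!/bin/sh
# Read the database password from the secret store's virtual filesystem
DB_PASSWORD="$(cat /run/secrets/db_password)"
export DB_PASSWORD
# Hand off to the application with the secret available in its environment
exec /usr/local/bin/my_demo_application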

Item 3: Secure the Perimeter

A compelling reason for the adoption of a container-based solution is
the ability to share resources on a host machine. What we gain in ease
of access to the host machine’s resources, however, we lose in the
ability to separate the processes of one container from those of
another. Great care needs to be taken to ensure that the user under
which a container’s application is started has the minimum required
privileges on the underlying system. In addition, it is important that
we establish a secure platform on which to launch our containers. We
must ensure that the environment is protected wherever possible from the
threat of external influences. Admittedly this has less to do with the
containers themselves, and more with the environment into which they are
deployed, but it is important nonetheless.
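
One concrete way to enforce the minimum-privilege point above is to create and switch to an unprivileged user inside the image itself. A rough sketch follows; the base image, user name and binary path are illustrative assumptions, not taken from this article.

# Base image and paths below are illustrative assumptions
FROM alpine:3.5
# Create an unprivileged system user and group for the application
RUN addgroup -S app && adduser -S -G app app
COPY my_demo_application /usr/local/bin/my_demo_application
# Run everything from here on, including the application itself, as that user
USER app
CMD ["/usr/local/bin/my_demo_application"]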

Item 4: Make Sure to Keep an Eye on Things

The final item on this initial checklist for production-readying your
application is to come up with a monitoring solution. Along with secret
management, monitoring is an area related to container-based
applications which is still actively evolving. When you’re experimenting
with an application, you typically don’t run it under significant
load, or in a multi-user environment. Additionally, for some reason,
our users insist on finding new and innovative ways to leverage the
solutions we provide, which is both a blessing and a curse. The article
Comparing monitoring options for Docker deployments provides information
on, and a comparison between, a number of monitoring options, as does a
more recent online meetup on the topic. The landscape for Docker
monitoring solutions is still under continuous development.

Go Forth and Containerize in an Informed Manner

The container revolution is without a doubt one of the most exciting and
disruptive developments in the world of software development in recent
years. Docker is the tool which all the cool kids are using, and let’s
be honest, we all want to be part of that group. When you’re ready to
take your project from an experimental phase into production, make sure
you’re proceeding in an informed manner. The technology is rapidly
evolving, and offers many advantages over traditional technologies, but
be sure that you do your due diligence and confirm that you’re using the
right tool for the right job.

Mike Mackrory is a global citizen who has settled down in the Pacific
Northwest – for now. By day he works as a Senior Engineer on a Quality
Engineering team, and by night he writes, consults on several web-based
projects and runs a marginally successful eBay sticker business. When
he’s not tapping on the keys, he can be found hiking, fishing and
exploring both the urban and the rural landscape with his kids.
