Introducing Containers into Your DevOps Processes: Five Considerations

Wednesday, 15 February, 2017

Docker
has been a source of excitement and experimentation among developers
since March 2013, when it was released into the world as an open source
project. As the platform has become more stable and achieved increased
acceptance from development teams, a conversation about when and how to
move from experimentation to the introduction of containers into a
continuous integration environment is inevitable. What form that
conversation takes will depend on the players involved and the risk to
the organization. What follows are five important considerations which
should be included in that discussion.

Define the Container Support Infrastructure

When you only have a developer or two experimenting with containers, the
creation and storage of Docker images on local development workstations
is to be expected, and the stakes aren’t high. When the decision is made
to use containers in a production environment, however, important
decisions need to be made surrounding the creation and storage of Docker
images. Before embarking on any kind of production deployment journey,
ask and answer the following questions:

  • What process will be followed when creating new images? (One
    possible build-tag-push workflow is sketched just after this list.)

    • How will we ensure that images used are up-to-date and secure?
    • Who will be responsible for ensuring images are kept current,
      and that security updates are applied regularly?
  • Where will our Docker images be stored?

    • Will they be publicly accessible on DockerHub?
    • Do they need to be kept in a private repository? If so, where
      will this be hosted?
  • How will we handle the storage of secrets on each Docker image? This
    will include, but is not limited to:

    • Credentials to access other system resources
    • API keys for external systems such as monitoring
  • Does our production environment need to change?

    • Can our current environment support a container-based approach
      effectively?
    • How will we manage our container deployments?
    • Will a container-based approach be cost-effective?
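
To make the image creation and storage questions more concrete, here is
a minimal sketch of a build-tag-push workflow. The registry address,
image name, and version tag are hypothetical placeholders, not values
prescribed above; adapt them to whatever answers your team settles on.

# Build the image from the Dockerfile in the current directory
docker build -t myapp:1.0.3 .

# Tag it for a private registry (registry.example.internal is a placeholder)
docker tag myapp:1.0.3 registry.example.internal:5000/myapp:1.0.3

# Push the versioned image so every environment pulls a known artifact
docker push registry.example.internal:5000/myapp:1.0.3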

Don’t Short-Circuit Your Continuous Integration Pipeline

Perhaps one of Docker’s best features is that a
container can reasonably be expected to function in the same manner,
whether deployed on a junior developer’s laptop or on a top-of-the-line
server at a state-of-the-art data center. Therefore, development teams
may be tempted to assume that localized testing is good enough, and that
there is limited value from a full continuous integration (CI) pipeline.
What the CI pipeline provides is stability and security. By running all
code changes through an automated set of tests and assurances, the team
can develop greater confidence that changes to the code have been
thoroughly tested.
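
As a minimal illustration of such a pipeline stage, the hedged sketch
below builds a candidate image and runs the automated test suite inside
a throwaway container before anything is published. The image name and
test command are assumptions for illustration only.

# Build a candidate image for this commit (tag is a placeholder)
docker build -t myapp:ci-${BUILD_NUMBER:-local} .

# Run the automated tests in a disposable container; a non-zero exit
# code should fail the pipeline
docker run --rm myapp:ci-${BUILD_NUMBER:-local} ./run_tests.sh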

Follow a Deployment Process

In the age of DevOps and CI, we have the opportunity to deliver bug
fixes, updates and new features to customers faster and more efficiently
than ever. As developers, we live for solving problems and delivering
quality that people appreciate. It’s important, however, to define and
follow a process that ensures key steps aren’t forgotten in the thrill
of deployment. In an effort to maximize both uptime and delivery of new
functionality, the adoption of a process such as blue-green deployments
is imperative (for more information, I’d recommend Martin Fowler’s
description of Blue Green Deployment).
The premise as it relates to containers is to have both the old and new
containers in your production environment. Use of dynamic load balancing
to slowly and seamlessly shift production traffic from the old to the
new, whilst monitoring for potential problems, permits relatively easy
rollback should issues be observed in the new containers.
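
A heavily simplified sketch of that idea, using nothing but the Docker
CLI, might look like the following. Container names, image tags, and
ports are assumptions; in practice the traffic shift would be handled by
your load balancer rather than by host port mappings.

# The blue (current) release is already serving traffic on port 8080
docker run -d --name app-blue -p 8080:80 myapp:1.0

# Start the green (new) release alongside it on a different host port
docker run -d --name app-green -p 8081:80 myapp:1.1

# Once the load balancer has shifted traffic to 8081 and green looks
# healthy, retire blue; keep it available until you are confident
docker stop app-blue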

Don’t Skimp on Integration Testing

Containers may run the same, independently of the host system, but as we
move containers from one environment to another, we run the risk of
breaking our external dependencies, whether they be connections to
third-party services, databases, or simply differences in the
configuration from one environment to another. For this reason, it is
imperative that we run integration tests whenever a new version of a
container is deployed to a new environment, or when changes to an
environment may affect the interactions of the containers within.
Integration tests should be run as part of your CI process, and again as
a final step in the deployment process. If you’re using the
aforementioned blue-green deployment model, you can run integration
tests against your new containers before configuring the proxy to
include the new containers, and again once the proxy has been directed
to point to the new containers.
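
In practice, that final check can be as simple as pointing the same
integration suite at the green stack before and after the proxy switch.
The endpoints and test image below are hypothetical placeholders.

# Smoke-test the green containers before the proxy is switched over
curl --fail --silent http://green.internal.example:8081/health

# Run the integration suite against the green endpoint
docker run --rm -e TARGET_URL=http://green.internal.example:8081 myapp-tests:1.1

# Repeat the same checks once the proxy points at the new containers
curl --fail --silent http://www.example.com/health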

Ensure that Your Production Environment is Scalable

The ease with which containers can be created and destroyed is a
definite benefit, until you have to manage those containers in a
production environment. Attempting to do this manually with anything
more than one or two containers is impractical, and a deployment made up
of multiple different containers, each scaled to a different level,
quickly becomes impossible to manage by hand.
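
This is the gap that composition and orchestration tooling fills. As
one hedged example, docker-compose can scale a service definition to
several identical containers with a single command; the service names
and counts below are placeholders and assume a docker-compose.yml that
defines them.

# Scale the 'web' service to five containers and 'worker' to two
docker-compose scale web=5 worker=2

# Review what is actually running
docker-compose ps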

When considering the inclusion of container technology as part of the DevOps
process and putting containers into production, I’m reminded of some
important life advice I received many years ago—“Don’t do dumb
things.” Container technology is amazing, and offers a great deal to
our processes and our delivery of new solutions, but it’s important that
we implement it carefully.

Mike Mackrory is a Global citizen who has
settled down in the Pacific Northwest – for now. By day he works as a
Senior Engineer on a Quality Engineering team and by night he writes,
consults on several web based projects and runs a marginally successful
eBay sticker business. When he’s not tapping on the keys, he can be
found hiking, fishing and exploring both the urban and the rural
landscape with his kids.


openSUSE on Raspberry Pi 3: From Zero to Functional System in a Few Easy Steps

Wednesday, 15 February, 2017

The following article has been contributed by Dmitri Popov, Technical Writer at the SUSE Documentation team.

Deploying openSUSE on Raspberry Pi 3 is not all that complicated, but there are a few tricks that smooth the process.

First of all, you have several flavours to choose from. If you plan to use your Raspberry Pi 3 as a regular machine, an openSUSE version with a graphical desktop is your best option. And you can choose between several graphical environments: X11, Enlightenment, Xfce, and LXQt. There is also the JeOS version of openSUSE which provides a bare-bones system ideal for transforming a Raspberry Pi 3 into a headless server. Better still, you can choose between the Leap and Tumbleweed versions of openSUSE.

The first order of business is to download the desired openSUSE image from https://en.opensuse.org/HCL:Raspberry_Pi3. Next, you need to create a bootable microSD card. While you can write the downloaded image to a microSD card using command-line tools, Etcher makes the process more enjoyable and safe. Grab the utility from the project’s website, extract the downloaded .zip file and make the resulting .AppImage file executable using the command:

chmod +x Etcher-x.x.x-linux-x64.AppImage
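
If you would rather stick with the command-line tools mentioned above, a rough equivalent is sketched below. The image file name and the /dev/sdX device are placeholders; writing to the wrong device destroys its data, so double-check the device name first.

# Decompress the image and write it directly to the card (names are placeholders)
xzcat openSUSE-image.raw.xz | sudo dd of=/dev/sdX bs=4M status=progress
sync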

Then plug a microSD card into your machine, launch Etcher by double-clicking on it, select the downloaded .raw.xz image file, and press Flash!. Connect a display and keyboard to the Raspberry Pi 3, insert the microSD card in it, and boot the little machine. During the first boot, openSUSE automatically expands the file system to make use of all free space on the card. At some point you’ll see the following message:

GPT data structures destroyed! You may now partition the disk using
fdisk or other utilities

There is no need to panic, though. Wait a minute or two, and openSUSE will continue to boot normally. When prompted, log in with the default user name root and password linux.

If you choose to deploy JeOS on your Raspberry Pi 3, keep in mind that you won’t see any output on the screen during the first boot. This means that the screen will remain blank until the system finishes expanding the file system. While you can configure kernel parameters to show output, it’s probably not worth the hassle. Just wait till you see the command-line prompt.

Since openSUSE comes with SSH enabled and configured, you can boot the Raspberry Pi without a display. In this case, you need to connect the Raspberry Pi to your network via Ethernet. Just give the Raspberry Pi enough time to boot and expand the system, and you can then connect to it via SSH from any other machine on the same network using the ssh root@linux.local command.

By default, you log in to the system as root, and it’s a good idea to create a regular user. The all-mighty YaST configuration tool lets you do that with consummate ease. Run the yast2 command, switch to the Security and Users -> User and Group Management section, and add a new user. While you are at it, you can update the system in the System -> Online Update section. Once you’ve done that, quit YaST, reboot the Raspberry Pi, and log in as the newly created user.
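
If you prefer to stay on the command line instead of using YaST, roughly the same two steps can be performed as follows while still logged in as root; the user name is a placeholder.

# Create a regular user and set a password (the user name is a placeholder)
useradd -m -U geeko
passwd geeko

# Apply available updates, roughly what YaST's Online Update section does
zypper refresh && zypper update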

That’s all fine and dandy, but there is one crucial component of the system that doesn’t work right out of the box: the wireless interface. Fortunately, this issue is easy to solve. First, install the nano text editor using the command:

sudo zypper in nano

then run:

sudo nano /etc/dracut.conf.d/raspberrypi_modules.conf

to open the raspberrypi_modules.conf file for editing. Remove sdhci_iproc in the first line and uncomment the last line. Save the changes, run the command:

sudo mkinitrd -f

and reboot the Raspberry Pi.

Launch YaST again, switch to the System -> Network Settings section, and you should see the BCM43430 WLAN Card entry in the list of network interfaces. Select this entry and press Edit. Enable the Dynamic Address DHCP option, press Next, select the desired wireless network, and configure the required connection settings. Press Next and then OK to save the settings. Reboot the Raspberry Pi, and it should automatically connect to the specified Wi-Fi network.

And that’s it!

Containers: Making Infrastructure as Code Easier

Tuesday, 31 January, 2017

What do Docker containers have to do with Infrastructure as Code (IaC)? In a
do Docker containers have to do with Infrastructure as Code (IaC)? In a
word, everything. Let me explain. When you compare monolithic
applications to microservices, there are a number of trade-offs. On the
one hand, moving from a monolithic model to a microservices model allows
the processing to be separated into distinct units of work. This lets
developers focus on a single function at a time, and facilitates testing
and scalability. On the other hand, by dividing everything out into
separate services, you have to manage the infrastructure for each
service instead of just managing the infrastructure around a single
deployable unit. Infrastructure as Code was born as a solution to this
challenge. Container technology has been around for some time, and it
has been implemented in various forms and with varying degrees of
success, starting with chroot in the early 1980s and taking the form of
products such as Virtuozzo and Sysjail since
then. It wasn’t until Docker burst onto the scene in 2013 that all the
pieces came together for a revolution affecting how applications can be
developed, tested and deployed in a containerized model. Together with
the practice of Infrastructure as Code, Docker containers represent one
of the most profoundly disruptive and innovative changes to the process
of how we develop and release software today.

What is Infrastructure as Code?

Before we delve into Infrastructure as Code and how
it relates to containers, let’s first look at exactly what we mean when
we talk about IaC. IaC refers to the practice of scripting the
provisioning of hardware and operating system requirements concurrently
with the development of the application itself. Typically, these scripts
are managed in a similar manner to the software code base, including
version control and automated testing. When properly implemented, the
need for an administrator to log into a new machine and configure it
manually is replaced by scripts which describe the ideal state of the
new machine, and execute the necessary steps in order to configure the
machine to realize that state.

Key Benefits Realized in Infrastructure as Code

IaC seeks to relieve the most common pain points with system
configuration, especially the fact that configuring a new environment
can take a significant amount of time. Each environment needs to be
configured individually, and when something goes wrong, it can often
require starting the process all over again. IaC eliminates these pain
points, and offers the following additional benefits to developers and
operational staff:

  1. Relatively easy reuse of common scripts.
  2. Automation of the entire provisioning process, including being able
    to provision hardware as part of a continuous delivery process.
  3. Version control, allowing newer configurations to be tested and
    rolled back as necessary.
  4. Peer review and hardening of scripts. Rather than manual
    configuration from documentation or memory, scripts can be reviewed,
    updated and continually improved.
  5. Documentation is automatic, in that it is essentially the scripts
    themselves.
  6. Processes are able to be tested.

Taking Infrastructure as Code to a Better Place with Containers

As developers, I think we’re all familiar with some variant of, “I don’t
know mate, it works on my machine!” At best, it’s mildly amusing to
utter, and at worst it represents one of the key frustrations we deal
with on a daily basis. Not only does the Docker revolution effectively
eliminate this concern, it also brings IaC into the development process
as a core component. To better illustrate this, let’s consider a
Dockerized web application with a simple UI. The application would have
a Dockerfile similar to the one shown below, specifying the
configuration of the container which will contain the application.

FROM ubuntu:12.04

# Install dependencies
RUN apt-get update -y && apt-get install -y git curl apache2 php5 libapache2-mod-php5 php5-mcrypt php5-mysql

# Install app
RUN rm -rf /var/www/*
ADD src /var/www

# Configure apache
RUN a2enmod rewrite
RUN chown -R www-data:www-data /var/www
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2

EXPOSE 80

CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]

If you’re familiar with Docker, this is a fairly typical and simple
Dockerfile, and you should already know what it does. If you’re not
familiar with the Dockerfile, understand that this file will be used to
create a Docker image, which is essentially a template that will be used
to create a container. When the Docker container is created, the image
will be used to build the container, and a self-contained application
will be created. It will be available for use on whatever machine it is
instantiated on, from developer workstation to high-availability cloud
cluster. Let’s look at a couple of key elements of the file, and explore
what they accomplish in the process.

FROM ubuntu:12.04

This line pulls in an Ubuntu Docker image from Docker Hub to use as the
base for your new container. Docker Hub is the primary online repository
of Docker images. If you visit Docker Hub and search for this image,
you’ll be taken to the repository for Ubuntu. The image is an
official image, which means that it is one of a library of images
managed by a dedicated team sponsored by Docker. The beauty of using
this image is that when something goes wrong with your underlying
technology, there is a good chance that someone has already developed
the fix and implemented it, and all you would need to do is update your
Dockerfile to reference the new version, rebuild your image, and test
and deploy your containers again. The remaining lines in the Dockerfile
install various packages on the base image using apt-get, add the
source of your application to the /var/www directory, configure Apache,
and then set the exposed port for the container to port 80. Finally, the
CMD instruction defines the command that runs when the container is
brought up; here it starts the Apache server and opens it to HTTP
requests. That’s Infrastructure
as Code in its simplest form. That’s all there is to it. At this point,
assuming you have Docker installed and running on your workstation, you
could execute the following command from the directory in which the
Dockerfile resides.

$ docker build -t my_demo_application:v0.1 .

Docker will build your image for you, naming it my_demo_application
and tagging it with v0.1, which is essentially a version number. With
the image created, you could now take that image and create a container
from it with the following command.

$ docker run -d my_demo_application:v0.1

And just like that, you’ll have your application running on your local
machine, or on whatever hardware you choose to run it.
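
One caveat worth noting: EXPOSE 80 in the Dockerfile only documents the
port. To reach the Apache instance from outside the container, you would
typically also publish the port when starting it, for example:

$ docker run -d -p 80:80 my_demo_application:v0.1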

Taking Infrastructure as Code to a Better Place with Docker Containers and Rancher

A single file, checked in with your source code that specifies an
environment, configuration, and access for your application. In its
purest form, that is Docker and Infrastructure as Code. With that basic
building block in place, you can use docker-compose to define composite
applications with multiple services, each containing an individualized
Dockerfile, or an image imported from a Docker repository. For further
reading on this topic, and tips on implementation, check out Rancher’s
documentation on infrastructure services and environment templates.
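
Before moving on to that further reading, here is a small, hedged
illustration of the docker-compose idea mentioned above. It writes a
minimal docker-compose.yml for the web application from the earlier
Dockerfile plus a hypothetical database service, then brings both up
together; the service names, database image, and credentials are
placeholders.

# Create a minimal compose file (service names and images are placeholders)
cat > docker-compose.yml <<'EOF'
version: '2'
services:
  web:
    build: .
    ports:
      - "80:80"
  db:
    image: mysql:5.6
    environment:
      MYSQL_ROOT_PASSWORD: changeme
EOF

# Build and start both services as a single composite application
docker-compose up -d
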
You can also read up on Rancher Compose, which lets you define
applications for multiple hosts.

Mike Mackrory
is a Global citizen who has settled down in the Pacific Northwest – for
now. By day he works as a Senior Engineer on a Quality Engineering team
and by night he writes, consults on several web based projects and runs
a marginally successful eBay sticker business. When he’s not tapping on
the keys, he can be found hiking, fishing and exploring both the urban
and the rural landscape with his kids.


Security for your Container Environment

Thursday, 26 January, 2017

As one of the most disruptive technologies of recent years, containers
are rapidly gaining traction as a platform on which to launch
applications. But as with any new technology, the security of containers
at all stages of the software lifecycle must be our highest priority.
This post seeks to identify some of the inherent security challenges
you’ll encounter with a container environment, and suggests base
elements for a Docker security plan to mitigate those vulnerabilities.

Benefits of a Container Environment and the Vulnerabilities They Expose

Before we investigate what aspects of your container infrastructure will
need to be covered by your security plan, it would be wise to identify
what potential security problems running applications in such an
environment will present. The easiest way to do this is to contrast a
typical virtual machine (VM) environment with that in use for a typical
container-based architecture. In a traditional VM environment, each
instance functions as an isolated unit. One of the downsides to this
approach is that each unit needs to have its own operating system
installed, and there is a cost both in terms of resources and initiation
time that needs to be incurred when starting a new instance.
Additionally, resources are dedicated to each VM, and might not be
available for use by other VMs running on the same base machine.
In a
container-based environment, each container comprises a bare minimum of
supporting functionality. There is no need to virtualize an
entire operating system within each container, and resource use is shared
between all containers on a device. The overwhelming benefit to this
approach is that initiation time is minimized, and resource usage is
generally more efficient. The downside is a significant loss in
isolation between containers, relative to the isolation that exists in a
VM environment, and this brings with it a number of security
vulnerabilities.

Identifying Vulnerabilities

Let’s identify some of the vulnerabilities that we inherit by virtue of
the container environment, and then explore ways to mitigate these, and
thus create a more secure environment in which to deploy and maintain
your containers.

  • Shared resources on the underlying infrastructure expose the risk of
    attack if the integrity of the container is compromised.

    • Access to the shared kernel key ring means that the user running
      the container has the same access within the kernel across all
      containers.
    • Denial of Service is possible if a container is able to access
      all resources in the underlying infrastructure.
    • Kernel modules are accessible by all containers and the kernel.
  • Exposing a port on a container opens it to all traffic by default.
  • Docker Hub and other public facing image repositories are “public.”
  • Container secrets, such as credentials and API keys, can be
    compromised.

Addressing the Problems of Shared Resources

Earlier versions of Docker, especially those prior to version 1.0,
contained a vulnerability that allowed a user to break out of the
container and into the kernel of the host machine. Exploiting this
vulnerability when the container was running as the root user exposed
all kernel functionality to the person exploiting it. While this
vulnerability has been patched since version 1.0, it is still
inadvisable to run a container with a user who has anything more than
the minimum required privileges. If you are running containers with
access to sensitive information, it is also recommended that you
segregate different containers onto different virtual machines, with
additional security measures applied to the virtual machines as
well—although at this point, it may be worth considering whether using
containers to serve your application is the best approach. An
additional precaution you may want to consider is applying further
security measures on the virtual machine, such as seccomp profiles or
other kernel security features. Finally, tuning the capabilities
available to containers using the cap-add and cap-drop flags when the
container is created can further protect your host machine from
unauthorized access.
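
As a hedged illustration, the commands below show those two precautions
in isolation; the image name and the capability kept are assumptions
about what a given application actually needs.

# Drop every capability, then add back only what the application needs
docker run -d --cap-drop=ALL --cap-add=NET_BIND_SERVICE myapp:1.0

# Separately, avoid running the container process as root
docker run -d --user 1000:1000 myapp:1.0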

Limiting Port Access Through Custom IPTables Rules

When configuring a Docker image, your Dockerfile might include a line
similar to “EXPOSE 80”, which marks port 80 as the port that will accept
traffic into the container; once published, that port is open to all
traffic by default. Depending on the access you are expecting or
allowing into your container, it may be advantageous to add iptables
rules to restrict access on this port. The exact commands may vary
depending on the base container, the host, and the rules you would like
to enforce, so it would be best to work with operations personnel when
implementing these rules.
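
As a rough, hedged sketch, host-level rules of the following shape
restrict who can reach the exposed port. The subnet is a placeholder,
and the chain these rules belong in depends on how Docker manages NAT on
that host, which is exactly why this is worth reviewing with operations.

# Allow HTTP only from the internal subnet (subnet is a placeholder)
sudo iptables -A INPUT -p tcp --dport 80 -s 10.0.0.0/8 -j ACCEPT

# Drop HTTP traffic arriving from anywhere else
sudo iptables -A INPUT -p tcp --dport 80 -j DROP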

Avoiding the Dangers Inherent with a Public Image Repository

As a repository for images, Docker Hub is an extremely valuable
resource. It is publicly accessible, and it harnesses the power of the
global community in the development and maintenance of images. That
public accessibility, however, introduces additional risks alongside the
benefits. If your container strategy involves usage of images from
Docker Hub or another public repository, it’s imperative that you and
your developers:

  • Know where the images came from and verify that you’re getting the
    image you expect (see the sketch after this list).
  • Always specify a tag in your FROM statement; make it specific to a
    stable version of the image, and not “:latest”.
  • Use the official version of an image, which is supported, maintained
    and verified by a dedicated team, sponsored by Docker, Inc.,
    wherever possible.
  • Secure and harden host machines through a rigorous QA process.
  • Scan container images for vulnerabilities.
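
Two of those points can be exercised directly from the command line.
The hedged sketch below pulls a specific, stable tag rather than
:latest, then enables Docker Content Trust so that subsequent pulls of
official images must be signed; the tag shown is only an example.

# Pull a specific, stable tag instead of :latest
docker pull ubuntu:16.04

# Require signed images for subsequent pulls (Docker Content Trust)
export DOCKER_CONTENT_TRUST=1
docker pull ubuntu:16.04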

When dealing with intellectual property, or applications which handle
sensitive information, it may be wise to investigate using a private
repository for your images instead of a public repository like Docker
Hub. Amazon Web Services provides information on setting up
an Amazon EC2 Container Registry (Amazon ECR)
here,
and DigitalOcean provides the instructions (albeit a few years old) for
creating a private repository on Ubuntu
here.
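
Beyond those hosted options, the simplest private registry is Docker’s
own registry image running on infrastructure you control. The sketch
below is a minimal, unauthenticated example for experimentation only; a
production registry needs TLS and access control, and the image names
are placeholders.

# Start a private registry on the host (no TLS or authentication)
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# Tag and push an image into it
docker tag myapp:1.0 localhost:5000/myapp:1.0
docker push localhost:5000/myapp:1.0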

Securing Container Secrets

The subject of securing credentials such as database passwords, SSH
keys, and API tokens has recently been at the forefront of discussion in
the Docker community. One solution to the issue is the implementation of
a secure store, such as HashiCorp Vault or Square Keywhiz. These stores
provide a virtual file system to the application, which maintains the
integrity of secure keys and passwords.
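
As one hedged example of what that looks like in practice, the commands
below store and read back a database password using HashiCorp Vault in
development mode. The secret path and key are assumptions, the exact
commands vary between Vault versions, and a real deployment would use a
properly unsealed, access-controlled server.

# Start a throwaway Vault server in development mode (not for production)
vault server -dev &
export VAULT_ADDR='http://127.0.0.1:8200'

# Write a secret, then read a single field back for the application
vault write secret/myapp/db password=s3cr3t-value
vault read -field=password secret/myapp/db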

Security Requires an Upfront Plan, and Constant Vigilance

Any security plan worth implementing needs to have two parts. The first
involves the comprehensive identification and mitigation of potential
threats and vulnerabilities to the system. The second is a commitment to
constant evaluation of the environment, including regular testing and
vulnerability scans, and monitoring of production systems. Together with
your security plan, you need to identify the methods by which you will
monitor your system, including the automation of alerts to be triggered
when system resources exceed predetermined limits and when non-standard
behavior is being exhibited by the containers and their underlying
hosts.

Mike Mackrory is a Global citizen who has settled down in the
Pacific Northwest – for now. By day he works as a Senior Engineer on a
Quality Engineering team and by night he writes, consults on several web
based projects and runs a marginally successful eBay sticker business.
When he’s not tapping on the keys, he can be found hiking, fishing and
exploring both the urban and the rural landscape with his kids.


Is Open Source open to women? Three amazing women offer insight at SUSECON

Wednesday, 25 January, 2017

For the first time at SUSECON, women took center stage in our keynotes – and not just one, but THREE amazing, inspiring women from IBM, Intel and Fujitsu respectively.

It’s not that we’re not Open to women joining us on stage (of course we are, we’re the Open Open Source company) – it’s just that we haven’t had the opportunity before.

So, as a woman working for (in my humble opinion) one of the greatest Open Source companies, I couldn’t resist the opportunity to ask these fantastic ladies about their experiences and whether they’ve had any challenges rising to the top of their careers. I also talked to them about how their companies support and offer programs to promote more women in IT and why Open Source is actually a great place for women.

Up first, was Dr. Figen Ulgen of Intel. Figen is the GM of the High Performance Computing Platform Software & Cloud Group and leads a global organization that develops and promotes HPC platform software technologies for the HPC ecosystem. We talked about Figen’s role at Intel, the importance of women in IT and why Open Source is a great place for women, ‘Open Source is a fantastic place for women to showcase what they have – there’s no hierarchy, you’re only bound by your ability to contribute, and your intelligence and your knowledge base…’ Figen also told me about some of the initiatives that are being driven by Intel to ensure women are proportionally represented.

Next, I had a quick Q&A with Kathy Bennett of IBM. Kathy is the VP of ISV Technical Enablement in the IBM Systems Unit. She’s responsible for technical enablement of commercial ISVs and open source software and leads a global team of Senior Architects and SW Engineers whose focus is delivering optimized ISV and Open Source SW solutions on IBM Systems. Kathy briefly chatted about what influenced her decision to embark on a career in IT and why she thought it important to encourage more women into the field, ‘Women bring a different perspective…and whenever you bring new perspectives into any industry there’s value’.

https://youtu.be/E70Ut3QDISY

Lastly, we caught up with Katsue Tanaka of Fujitsu. Katsue is the SVP of the Platform Software Business Unit and has been leading the group since April 2016. Her responsibilities include business, strategy and development of platform software for Mission Critical systems as well as for on premise, private and public cloud. Under her organization she manages 4 subsidiary companies which develop and provide solutions based on advanced open source software technologies. Katsue hardly speaks any English so it astounded us when she spoke with Michael Miller in English, on stage at SUSECON. And she did it with such grace that the audience loved it. During our interview, Katsue talks about the beginning of her career as a software engineer and provides some personal insight into the challenges women in IT can face and ways to overcome them. She also gives some great career advice that I think works for anyone ‘imagine what you want to be in ten, twenty years from now. Then work backwards from that image of your future self’. Unfortunately, I don’t speak Japanese so we added a version with subtitles for others like me.

https://youtu.be/uOkayO8NiNo

At SUSE, we pride ourselves on being the open “OPEN” source provider and that philosophy doesn’t just apply to software and technology.  We are equally open when hiring and are happy to have partners that share the same sentiment.  If you would like to hear more from these amazing women, be sure to watch their keynote features from SUSECON 2016 below. Also check out our openings at suse.com/jobs.


Dr. Figen Ulgen at SUSECON 2016

Kathy Bennett at SUSECON 2016

Katsue Tanaka at SUSECON 2016


Join us at a North American SUSE Expert Day!

Tuesday, 24 January, 2017

When I started at SUSE, the first thing I got to work on with the rest of the North American marketers was the 2016 series of SUSE Expert Days. My first event was in San Francisco – National Football League Super Bowl week, no less. As I landed, I learned our boxes of give-aways and signage hadn’t arrived at our venue (they didn’t until it was too late the next day). The morning we were hosting the event, it was raining. It turned out the venue was directly across the street from the Super Bowl “festival”, making arriving and parking a real pain. We had fewer customers than we expected from the large registration, and we started late waiting for them. But guess what? It was one of the BEST days of my first 90 days with SUSE. Our Sales Engineers were incredibly knowledgeable, not just about the SUSE Enterprise Storage and OpenStack Cloud solutions, but, more importantly, about what those solutions mean to customers, and how we partner to solve problems for our IT customers and prospects in San Francisco and all over the continent. They presented; they took detours based on questions and interests; they ran over time; they demonstrated solutions; and they talked in small groups, spontaneously allowing customers to talk to their peers. We had great food; we networked; and in the end, it was what it should always be: our customers getting what they need from us. Come rain or shine. Join us again this year! Del Mar, California is in 7 business days! See you there!

Learn more about SUSE Expert Days at suse.com/expertdays

Learning with SUSE in 2017

Tuesday, 17 January, 2017

My 8 year old son and I had an interesting conversation the other day while driving to school. While he’s a decent student, he’s not a big fan of school, but he does love to learn new things. He asked me at what point he would “know everything” and seemed shocked when I told him that the learning journey never ends. He will continue to learn new things every day, his entire life. He got quiet and I assumed that this new revelation was sinking in… Finally he looked up and said, “Mom, I’m hungry.” So… perhaps the point wasn’t as poignant to my 2nd grader as I was hoping, but it was a good reminder that there’s an ongoing opportunity for all of us to simply learn more, every day.

SUSE offers many types of learning opportunities for those of us looking to continue the learning journey. Whether you prefer webinars or in-person day-long events, SUSE has you covered. Here’s how they compare:

  • SUSE Webinars – Monthly 60-minute sessions focused on the latest technology and industry best practices to enhance your business and keep you up to date.
  • SUSE Stand-Ups – Monthly technical deep-dive sessions related to the corresponding webinar. Learn about specific technical elements of the related SUSE product in only 20 minutes!
  • SUSE Expert Days – One-day, in-person events in over 70 cities around the globe. Experts will discuss:
    • The challenges of the digital transformation and how open source solutions built on the Linux platform can help you to rise to the challenge
    • Deploying a Software-defined Infrastructure and delivering as-a-Service with OpenStack, Kubernetes, Docker, Ceph and Cloud Foundry
    • Improving business agility using an open DevOps approach

When I look at the options listed above, I see a great opportunity to keep learning. I hope you do too. Join SUSE at one of these upcoming events. We look forward to having you.


Data center issues? What you need is an expert!

Wednesday, 11 January, 2017

In case you haven’t heard, SUSE Expert Days are making their way across the globe. As the person in charge of the tour in North America, I can personally vouch for the amount of effort that goes into every aspect of these popular events, from the creation of presentations all the way down to the selection of menu items. Several months ago I teased that we would be doing the tour differently from what we have done in the past, and I’m happy to provide you with a look at what’s changed! Check it out!


First, we entered the planning for Expert Days with our customer as the primary focus. Though the deep dive conversations and technology demonstrations would benefit anyone already equipped with data center know-how, we recognize that those who benefit most from these full day events are people that have already been engaged, at some level, with SUSE representatives.  We therefore looked at a heat map to gauge exactly where this demographic is located. Outcome? We selected locations much more central to that audience. For example, you may notice that we have listed some unconventional cities for our tour stops this year. Instead of downtown Atlanta, we will be hosting in the nearby suburb of Sandy Springs. Instead of Dallas proper, we will be in Irving, TX…all in an effort to make the event more accessible to our friends in these metropolitan areas.

Next, we put a huge amount of effort into developing presentations that address the needs of our customers, maybe even before a customer recognizes it as a need! As you already know, the data center is central to the successful operation of a business forging through the 21st century, and true to form, technology is advancing at breakneck speed to keep up with (and stay ahead of) consumer demands. Innovation is and always has been the hallmark of a successful business, and while innovative products still hold their place, the data center is increasingly becoming the foundation that supports a successful go-to-market strategy. Just ask Apple why they developed iCloud when they already had the iPhone, or ask Amazon how their Echo is able to work as well as it does. The answer lies in the convenience, accessibility, and reliability of cloud-based data centers that are capable of delivering on-demand services.

So this year, you will find Expert Days stuffed with presentations that talk about HOW to adapt a data center to meet the needs of your business AND consumers in a way that will not limit future innovations, but actually open the flood gates to not-yet-conceived possibilities.

It’s a very exciting time in the world of the data center, thanks to the community of masterminds all over the world that contribute to open source technologies. SUSE simply refines these innovations so that they are reliable and ready for your business; SUSE also gives you the opportunity to learn how to leverage these advances from data center experts in the form of SUSE Expert Days.

With all that in mind, I formally invite you to join us to define the future of your data center with this year’s main topics:

  • The Digital Transformation
  • Software-defined Infrastructure
  • Changing Application Service Delivery with DevOps

You can learn more about SUSE Expert Days at suse.com/expertdays, and be sure to register for a city near you.

See you there!

Moving Containers to Production – A Short Checklist

Tuesday, 10 January, 2017

If
you’re anything like me, you’ve been watching the increasing growth of
container-based solutions with considerable interest, and you’ve
probably been experimenting with a couple of ideas. At some point in the
future, perhaps you’d like to take those experiments and actually put
them out there for people to use. Why wait? It’s a new year, and there
is no time like the present to take some action on that goal.
Experimenting is great, and you learn a great deal, but often in the
midst of trying out new things, hacking different technologies together
and making it all work, things get introduced into our code which
probably shouldn’t be put into a production environment. Sometimes,
having a checklist to follow when we’re excited and nervous about
deploying new applications out into the wild can help ensure that we
don’t do things we shouldn’t. Consider this article as the start of a
checklist to ready your Docker applications for prime time.

Item 1: Check Your Sources

Years ago, I worked on a software project with a
fairly large team. We started running into a problem: once a week, at 2
PM on a Tuesday afternoon, our build would start failing. At first we
blamed the last guy to check his code in, but then it mysteriously
started working before he could identify and check in a fix. And then
the next week it happened again. It took a little research, but we
traced the source of the failure to a dependency in the project which
had been set to always pull the latest snapshot release from the vendor,
and it turned out that the vendor had a habit of releasing a new, albeit
buggy version of their library around 2 PM on Tuesday afternoons. Using
the latest and greatest versions of a library or a base image can be fun
in an experiment, but it’s risky when you’re relying on it in a
production environment. Scan through your Docker configuration files,
and check for two things.

First, ensure that you have your source images tied to a stable
version of the image. Any occurrence of :latest in your Docker
configuration files should fail the smell test.
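
That smell test is easy to automate. A hedged one-liner such as the
following can run in your CI pipeline and fail the build if any
Dockerfile still references :latest; adjust the path to match your
repository layout.

# Fail the build if any Dockerfile references a ':latest' tag
if grep -rn --include='Dockerfile*' ':latest' . ; then
  echo "Found unpinned ':latest' references; pin a specific version." >&2
  exit 1
fi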

Second, if you are using Docker Hub as your image repository, use the
official image wherever possible. Among the reasons for doing this:
“These repositories have clear documentation, promote best practices,
and are designed for the most common use case.” (Official Repositories
on Docker Hub)

Item 2: Keep your Secrets…Secret

As Gandalf asked, “Is it secret? Is it safe?” Our applications have a
need for secret information. Most applications have a need for a
combination of database credentials, API tokens, SSH keys and other
necessary information which is not appropriate, or advisable for a
public audience. Secret storage is one of the biggest weaknesses of
container technology. Some solutions which have been implemented, but
are not recommended are:

Baking the secrets into the image. Anyone with access to the
registry can potentially access the secrets, and if you have to update
them, this can be a rather tedious process.

Using volume mounts. Unfortunately, this keeps all of your secrets
in a single and static location, and usually requires them to be stored
in plain text.

Using environment variables. These are easily accessible by all
processes using the image, and are easily viewed with docker inspect, as
demonstrated in the sketch following this list.

Encrypted solutions. Secrets are stored in an encrypted state, with
decryption keys on the host machines. While your passwords and other key
data elements aren’t stored in plain text, they are fairly easy to
locate, and the decryption methods identified.
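
It takes only a moment to see why environment variables are a weak
hiding place. The hedged demonstration below passes a throwaway “secret”
into a container and then reads it straight back out of the container
metadata; the image and variable names are placeholders.

# Pass a "secret" in via the environment (placeholder values)
docker run -d --name env-demo -e DB_PASSWORD=not-actually-secret alpine:3.4 sleep 600

# Anyone with access to the Docker daemon can read it back out
docker inspect --format '{{.Config.Env}}' env-demo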

The best solution at this point is to use a secrets store, such as
Vault by HashiCorp or Keywhiz from Square. Implementation
is typically API-based and very reliable. Once implemented, a secret
store provides a virtual filesystem to an application, which it can use
to access secured information. Each store provides documentation on how
to set up, test and deploy a secret store for your application.

Item 3: Secure the Perimeter

A compelling reason for the adoption of a container-based solution is
the ability to share resources on a host machine. What we gain in ease
of access to the host machine’s resources, however, we lose in the
ability to separate the processes of a single container from those of
another. Great care needs to be taken to ensure that the user under
which a container's application is started has the minimum required
privileges on the underlying system. In addition, it is important that
we establish a secure platform on which to launch our containers. We
must ensure that the environment is protected wherever possible from the
threat of external influences. Admittedly this has less to do with the
containers themselves, and more with the environment into which they are
deployed, but it is important nonetheless.

Item 4: Make Sure to Keep an Eye on Things

The final item on this initial checklist for production-readying your
application is to come up with a monitoring solution. Along with secret
management, monitoring is an area related to container-based
applications which is still actively evolving. When you’re experimenting
with an application, you typically don’t run it under much significant
load, or in a multiple-user environment. Additionally, for some reason,
our users insist on finding new and innovative ways to leverage the
solutions we provide, which is both a blessing and a curse. The article
Comparing Monitoring Options for Docker Deployments provides information
on, and a comparison of, a number of monitoring options, as does a more
recent online meetup on the topic. The landscape of Docker monitoring
solutions is still under active development.
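
While you evaluate those options, even the built-in tooling provides a
starting point. The hedged sketch below samples resource usage for
running containers and tails a container's logs; it is no substitute for
a real monitoring stack, but it illustrates the signals a production
solution needs to collect and alert on. The container name is a
placeholder.

# One-off snapshot of CPU, memory and network usage for all containers
docker stats --no-stream

# Tail the most recent log output from a specific container
docker logs --tail 100 -f my_app_container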

Go Forth and Containerize in an Informed Manner

The container revolution is without a doubt one of the most exciting and
disruptive developments in the world of software development in recent
years. Docker is the tool which all the cool kids are using, and let’s
be honest, we all want to be part of that group. When you’re ready to
take your project from an experimental phase into production, make sure
you’re proceeding in an informed manner. The technology is rapidly
evolving, and offers many advantages over traditional technologies, but
be sure that you do your due diligence and confirm that you’re using the
right tool for the right job.

Mike Mackrory is a Global citizen who
has settled down in the Pacific Northwest – for now. By day he works as
a Senior Engineer on a Quality Engineering team and by night he writes,
consults on several web based projects and runs a marginally successful
eBay sticker business. When he’s not tapping on the keys, he can be
found hiking, fishing and exploring both the urban and the rural
landscape with his kids.
