What? A dozen analysts from 5 different firms all agree? No way!

Friday, 28 October, 2016

I didn’t think it would be possible for a dozen analysts to agree on anything, let alone 12 analysts from 5 different firms. At the OpenStack Summit in Barcelona over the past 3 days, we met with 12 analysts from Gartner, Forrester, IDC, Ovum, and ESG. We spent a few minutes talking about the momentum of OpenStack in general, then discussed SUSE’s momentum and SUSE OpenStack Cloud 7.  We were met with varying degrees of agreement on most topics. However, the one thing they all agreed on is that of the nearly one hundred vendors trying to make a business out of OpenStack, only two or three have the business model for long-term sustainability and success in delivering an enterprise-level subscription.

There is a big difference between the business models of proprietary software vendors and open source software vendors, and I have yet to see a proprietary vendor achieve broad appeal with customers when they jump into the open source business and try to shift the sales, support, engineering, and other infrastructure of the company to fit an open source model. These factors inhibit bigger software or hardware vendors from succeeding. So, that narrows the field down very quickly to Red Hat and SUSE. Both have been in business with an open source business model for decades, and both have experience delivering open source subscriptions to large enterprises.

SUSE prides itself on being an Open, open source company and Red Hat prides itself on being big enough to be everything to everyone.

Highlights from OpenStack Summit Barcelona

Thursday, 27 October, 2016

What a great week at the OpenStack Summit in Barcelona! Fantastic keynotes, great sessions, and excellent hallway conversations.  It was great to meet a number of new Stackers as well as rekindle old friendships from back when OpenStack kicked off in 2010.

A few items of note from my perspective:

OpenStack Foundation Board of Directors Meeting

OpenStack Board In Session

As I mentioned in my last blog, it is the right of every OpenStack member to attend and listen in on each board meeting that the OpenStack Foundation Board of Directors holds.  I made sure to head out on Monday and attend most of the day.  There was a packed agenda, so here are a few highlights:

  • Interesting discussion around the User Committee project that board member Edgar Magana is working toward, with discussion on its composition, whether members should be elected, and if bylaw changes are warranted.  It was a deep topic that required further time, so the topic was deferred to a later discussion with work to be done to map out the details. This is an important endeavor for the community in my opinion – I will be keeping an eye on how this progresses.
  • A number of strong presentations by prospective gold members were delivered as they made their cases to be added to that tier. I was especially happy to see a number of Chinese companies presenting and making their case.  China is a fantastic growth opportunity for the OpenStack project, and it was encouraging to see players in that market discuss all they are doing for OpenStack in the region.  Ultimately, we saw City Network, Deutsche Telekom, 99Cloud and China Mobile all get voted in as Gold members.
  • Lauren Sell (VP of Marketing for the Foundation) spoke on a visionary model where her team is investigating how our community can engage with other projects in terms of user events and conferences.  Kubernetes, Ceph, and other projects were named as examples.  This is a great indicator of how we’ve evolved, as it highlights that often multiple projects are needed to address actual business challenges – a strong indicator of maturity for the community.

Two Major SUSE OpenStack Announcements

SUSE advancements in enterprise-ready OpenStack made their way to the Summit in a big way this week.

  1. SUSE OpenStack Cloud 7:  While we are very proud to be one of the first vendors to provide an enterprise-grade, Newton-based OpenStack distribution, this release also offers features like new Container-as-a-Service capabilities and non-disruptive upgrade capabilities. Wait, non-disruptive upgrade?  As in, no downtime?  And no service interruptions? That’s right – disruption to service is a big no-no in the enterprise IT world, and SUSE OpenStack Cloud 7 now gives you the ability to stay live during an OpenStack upgrade.
  2. Even more reason to become a COA.  All the buzz around the Foundation’s “Certified OpenStack Administrator” exam got even better this week when SUSE announced that the exam will now feature the SUSE platform as an option. And BIG bonus win – if you pass the COA using the SUSE platform, you will be granted
    1. the Foundation’s COA certification
    2. SUSE Certified Administrator in OpenStack Cloud certification

That’s two certifications with one exam.  (Be sure to specify the SUSE platform when taking the exam to take advantage of this option.)

There’s much more to these critical announcements so take a deeper look into them with these blogs by Pete Chadwick and Mark Smith.  Worth a read.

 Further Enabling the Enterprise

As you know, enterprise adoption of OpenStack is a major passion of mine – I’ve captured a couple more signs I saw this week of OpenStack continuing to head in the right direction.

  • Progress in Security. On stage this week, OpenStack was awarded the Core Infrastructure Initiative (CII) Best Practices badge.  The CII is a Linux Foundation project that validates open source projects, specifically for security, quality and stability. By earning this badge, OpenStack is now validated by a trusted third party and is 100% compliant.  Security FTW!
  • Workload-based Sample Configs.  This stable of assets has been building for some time, but OpenStack.org now boasts a number of reference architectures addressing some of the most critical workloads.  From web apps to HPC to video processing and more, there are great resources on how to optimize OpenStack for these workloads.  (Being a big data fan, I was particularly happy with the big data resources here.)

I’d be interested in hearing what you saw as highlights as well – feel free to leave your thoughts in the comments section.

OK, time to get home, get rested – and do this all over again in 6 months in Boston.

(If you missed the event this time, OpenStack.org has you covered – start here to check out a number of videos from the event, and go from there.)

Until next time,

JOSEPH
@jbgeorge

Boston in 2017!

Eye-opening exchanges at SAP TechEd / Las Vegas

Wednesday, 26 October, 2016


Despite the oddly low ceiling in the exhibits hall this year, SAP TechEd rose to new heights for this attendee & sponsor.

The range of topics covered by the presenters – from HANA implementation strategies and success stories to product roadmaps to tools for developers – was extensive and impressive.  And apparently some of SAP’s new tools were left in the bag, as Oracle OpenWorld was being held the same week and SAP opted to announce some of that news at Barcelona in November instead.

We at SUSE had a fairly modest booth (choosing, wisely I believe, to spend larger sums on product development and customer support), but it was certainly busy.  Some observations from a number of conversations with our current customers, new prospects and partners:

  • SAP is doing a lot of things right…the additional leeway being given for customers who will move to HANA as their data platform is largely acceptable to most of the people with whom we spoke.  2025 is far enough out to allow enterprises to plan and execute a deliberate strategy.
  • The focus on HANA as an enabling technology for much more than Business Suite or S/4HANA seems to be enhancing its appeal. SAP Vora garnered lots of attention, as enterprises continue to map out what they’re going to do with all that IoT data. Our collaboration with the SAP HANA express edition team was well-received (SUSE Linux is the default OS for this developer-focused & free version of HANA).
  • Our systems partners tell us that many of their customers are planning large-scale migrations.  One told us about a large 30-system HANA deployment – that’s hundreds of TBs –  that was about to close, and others disclosed similar news regarding their engagements.
  • Our cloud service provider (CSP) partners had other encouraging news. They seem to be on the verge of a groundswell of SAP solution adoption in the public cloud environments that they engineer and offer.  Their customers and prospects seem eager to skip the cycle of architecting a complex set of gear, procuring it, integrating it and testing it for weeks and months before they deploy.  A fair number have already voted with their enterprise computing dollars, euros, pounds, yen and yuan.
  • SUSE’s focus on “Zero Downtime” appears to be just the right message at just the right time – customers considering a next generation infrastructure either want to replicate the redundancy and failover strategies they’ve employed in the past on proprietary Unix systems – or take them a step further. SUSE’s support for a variety of failover scenarios, and our advances in the area of live kernel patching seem to be right on the mark.

If you had a chance to visit us in either Las Vegas or Bangalore, thank you.  If you haven’t yet, and you’ll be at SAP TechEd in Barcelona in November, please make it a point to see the SUSE booth and share your thoughts, challenge us with your systems/infrastructure questions, or simply drop by to say “Hello” (or “Hola!” or “Guten Tag”) and grab a green hat or a Geeko plushie.  By the way, our venerable Geeko mascot is actually a chameleon, I’ve been informed.

Adapt to Win: Top SUSE Linux Enterprise Server Sessions at SUSECON

Monday, 24 October, 2016


Are you ready for the software defined future?  SUSECON 2016 starts Monday, November 7th in Washington, D.C., and one of our main topics this year is Business Agility.  When IT is transformed into a strategic business asset, the constantly evolving IT organization becomes more agile. SUSE solutions help organizations improve business agility using the latest open source technologies, allowing them to efficiently respond to changing business needs, intelligently sense and respond to infrastructure demands, and adapt to new technologies.  If you’re interested in learning more about SUSE Linux Enterprise Server and how it helps businesses adapt to win, be sure to add these sessions to your agenda.

TUT91207 – SLES for SAP Applications and HANA IoT: a powerful combination

Alessandro Renna – Sales Engineer, SUSE

Thursday, Nov 10, 10:00 am – 11:00 am

Friday, Nov 11, 10:15 am – 11:15 am

From connected cars to smart cities, the Internet of Things (IoT) is everywhere. In this session you will learn how we built a lab to explore the capabilities and benefits of a powerful IoT platform that can help you extend and enrich your core business with data-driven intelligence.

CAS91545 – Large SAP migration to SAP HANA on IBM Power Systems with SLES for a Private Cloud

Carsten Dieterle – Senior IT Architect, IBM

Tuesday, Nov 8, 10:15 AM – 11:15 AM

In this session Carsten Dieterle, Leading SAP Solution Architect with IBM, will talk about a first-of-a-kind solution for setting up an SAP HANA on IBM Power private cloud in the datacenter of a large customer in the automotive industry. The solution includes the fully automated installation of SLES LPARs through PowerVC, as well as the unattended installation of SAP HANA (covering SAP BW scale-up and scale-out installations and Suite on HANA) with the required storage, CPU and memory ratios. In addition, SUSE Linux Enterprise High Availability Extension (SUSE HAE) is integrated for business-critical systems to minimize downtime and automate failover to the backup datacenter.

FUT92725 – SUSE, Containers, Docker and Beyond

Michal Svec – Senior Product Manager, SUSE and Flavio Castelli – Engineering Manager, SUSE

Tuesday, Nov 8, 11:30 AM – 12:30 PM

Friday, Nov 11, 10:15 AM – 11:15 AM

In this session we will look at container technologies in SUSE Linux Enterprise Server: the main use cases and scenarios, related tooling, what we have today, some exciting new updates, and the plan for the future.

If you haven’t done so already, be sure to register now.

Under Pressure in the Pursuit of Zero Downtime

Friday, 21 October, 2016


The business of data center infrastructure can often feel like carpentry or home repair, as pieces need to be monitored, replaced and modernized. So if maintaining a data center is like fixing a house, you need to choose a reliable foundation, especially for your mission-critical workloads.

Here’s where the analogy breaks down: unlike home repair, the business of data center infrastructure becomes more important each year. Forrester Research suggests that nearly 75 percent of all applications are deemed mission or business critical. The increasing number of critical systems expands potential points of failure that can seriously affect customers and employees, leading to lost revenue and higher costs. Data center managers are constantly burdened with headaches as they are under pressure to ensure continuous uptime. They must meet customer demands as well as keep costs competitive.

For perspective on just how costly unplanned IT outages can be: according to Dun & Bradstreet, 59 percent of Fortune 500 companies experience a minimum of 1.6 hours of downtime per week. Apply that to a Fortune 500 company with 10,000 employees earning an average of $56 per hour including benefits, and the labor component of downtime alone would cost such a company $896,000 weekly, which translates into more than $46 million per year.
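
(The arithmetic: 10,000 employees × $56 per hour × 1.6 hours ≈ $896,000 per week, and 52 weeks × $896,000 ≈ $46.6 million per year.)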

Downtime can result from both internal and external forces. Take the New York Times website malfunction in 2013, when the site went dark owing to a “server issue” caused by an “outage occurring within seconds of a scheduled maintenance update,” or Google’s brief (five minutes) downtime, which reportedly cost the company $500,000 and reduced global Internet traffic by a whopping 40 percent during that short time frame. More recently, this past year, Hess cut its production forecast after unplanned downtime occurred in the Gulf of Mexico. The company revealed it is now expecting net production of between 315,000 and 325,000 barrels of oil equivalent per day, down from its previous forecast of between 330,000 and 350,000.

You can read the rest of the article at the Datacenter Journal.

Kubernetes, Mesos, and Swarm: Comparing the Rancher Orchestration Engine Options

Thursday, 20 October, 2016



Note: You can find an updated comparison of Kubernetes vs. Docker Swarm in a recent blog post here.

Recent versions of Rancher have added support for several common orchestration engines in addition to the standard Cattle. The three newly supported engines – Swarm (soon to be Docker Native Orchestration), Kubernetes and Mesos – are the most widely used orchestration systems in the Docker community and provide a gradient of usability versus feature sets. Although Docker is the de facto standard for containerization, there are no clear winners in the orchestration space. In this article, we go over the features and characteristics of the three systems and make recommendations for use cases where each may be suitable.

Docker Native Orchestration is fairly bare bones at the moment but is getting new features at a rapid clip. Since it is part of the official Docker system, it will be the default choice for many developers and hence will likely have good tooling and community support. Kubernetes is among the most widely used container orchestration systems today and has the support of Google. Lastly, Mesos with Marathon (the open source framework from Mesosphere) takes a much more compartmentalized approach to service management, where a lot of features are left to independent plug-ins and applications. This makes it easier to customize a deployment, as individual parts can be swapped out or customized; however, it also means more tinkering is required to get a working setup. Kubernetes is more opinionated about how to build clusters and ships with integrated systems for many common use cases.

Docker Native Orchestration

Basic Architecture

Docker Engine 1.12 shipped with Native Orchestration, which is a replacement for standalone Docker Swarm. The Docker native cluster (Swarm) consists of a set of nodes (Docker Engines/daemons) which can be either managers or workers. Workers run the containers you launch, and managers maintain cluster state. You can have multiple managers for high availability, but no more than seven are recommended. The managers maintain consensus using an internal implementation of the Raft algorithm. As with all consensus algorithms, having more managers has a performance implication. The fact that managers maintain consensus internally means there are no external dependencies for Docker native orchestration, which makes cluster management much easier.


Usability

Docker native uses concepts from single-node Docker and extends them to
the Swarm. If you are up to date on Docker concepts, the learning curve
is fairly gradual. The setup for a swarm is trivial once you have Docker
running on the various nodes you want to add to your swarm: you just
call docker swarm init on one node and docker swarm join on any
other nodes you want to add. You can use the same Docker Compose
templates and the same Docker CLI command set as with standalone Docker.
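
For example, a minimal two-node setup might look like the following sketch (the IP address is a placeholder, and the actual join token is printed by the init command):

docker swarm init --advertise-addr 192.168.99.100
# On each worker, run the join command that init prints, e.g.:
docker swarm join --token <token-from-init> 192.168.99.100:2377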

Feature Set

Docker native orchestration uses the same primitives as Docker Engine and Docker Compose to support orchestration. You can still link services, create volumes and expose ports. All of these operations apply on a single node. In addition to these, there are two new concepts: services and networks.

A Docker service is a set of containers that are launched on your nodes, with a certain number of containers kept running at all times. If one of the containers dies, it is replaced automatically. There are two types of services: replicated and global. Replicated services maintain a specified number of containers across the cluster, whereas global services run one instance of a container on each of your swarm nodes. To create a replicated service, use the command shown below.

docker service create \
   --name frontend \
   --replicas 5 \
   --network my-network \
   -p 80:80/tcp nginx:latest

You can create named overlay networks using docker network create --driver overlay NETWORK_NAME. Using the named overlay network, you can create isolated, flat, encrypted virtual networks across your set of nodes to launch your containers into.

You can use constraints and labels to do some very basic scheduling of
containers. Using constraints you can add an affinity to a service and
it will try to launch containers only on nodes which have the specified
labels.

docker service create \
   --name frontend \
   --replicas 5 \
   --network my-network \
   --constraint engine.labels.cloud==aws \
   --constraint node.role==manager \
   -p 80:80/tcp nginx:latest

Furthermore, you can use the --reserve-cpu and --reserve-memory flags to define the resources consumed by each container of the service, so that when multiple services are launched on a swarm, the containers can be placed to minimize resource contention.
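
As a sketch of how these flags might be combined with the earlier example (the reservation values here are illustrative):

docker service create \
   --name frontend \
   --replicas 5 \
   --reserve-cpu 0.5 \
   --reserve-memory 256mb \
   -p 80:80/tcp nginx:latest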

You can do rudimentary rolling deployments using the command below. This will update the container image for the service, but do so two containers at a time with a 10s interval between each set of two. However, health checks and automatic rollbacks are not supported.

docker service update \
   --update-delay 10s \
   --update-parallelism 2 \
   --image nginx:other-version \
   frontend

Docker supports persistent external volumes using volume drivers, and Native Orchestration extends these with the --mount option to the service create command. Adding the following snippet to the command above will mount an NFS share into your container. Note that this requires NFS to be set up on the underlying host, external to Docker; some of the other drivers, which add support for Amazon EBS or Google Container Engine volumes, are able to work without host support. Also, this feature is not yet well documented and may require some testing (and perhaps filing GitHub issues on the Docker project) to get working.

    --mount type=volume,src=/path/on/host,volume-driver=local,dst=/path/in/container,volume-opt=type=nfs,volume-opt=device=192.168.1.1:/your/nfs/path

Kubernetes

Basic Architecture

Conceptually, Kubernetes is somewhat similar to Swarm in that it uses a manager (master) node with Raft for consensus. However, that is where the similarities end: Kubernetes uses an external etcd cluster for this purpose. In addition, you will need a network layer external to Kubernetes; this can be an overlay network like flannel, Weave, etc. With these external tools in place, you can launch the Kubernetes master components: API Server, Controller Manager and Scheduler. These normally run as a Kubernetes pod on the master node. In addition to these, you would also need to run the kubelet and kube-proxy on each node. Worker nodes run only the kubelet and kube-proxy, as well as a network layer provider such as flanneld if needed.

In this setup, the kubelet will control the containers (or pods) on the given node in conjunction with the Controller Manager on the master. The scheduler on the master takes care of resource allocation and balancing, and will help place containers on the worker node with the most available resources. The API Server is where your local kubectl CLI will issue commands to the cluster. Lastly, the kube-proxy is used to provide load balancing and high availability for services defined in Kubernetes.

Usability

Setting up Kubernetes from scratch is a non-trivial endeavor, as it requires setting up etcd, networking plugins, DNS servers and certificate authorities. Details of setting up Kubernetes from scratch are available here, but luckily Rancher does all of this setup for us. We have covered how to set up a Kubernetes cluster in an earlier article.

Beyond initial setup, Kubernetes still has somewhat of a steep learning curve, as it uses its own terminology and concepts. Kubernetes uses resource types such as Pods, Deployments, Replication Controllers, Services, Daemon Sets and so on to define deployments. These concepts are not part of the Docker lexicon, and hence you will need to get familiar with them before you start creating your first deployment. In addition, some of the nomenclature conflicts with Docker’s. For example, Kubernetes services are not Docker services and are also conceptually different (Docker services map more closely to Deployments in the Kubernetes world). Furthermore, you interact with the cluster using kubectl instead of the docker CLI, and you must use Kubernetes configuration files instead of Docker Compose files.
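
For instance, a rough sketch of the day-to-day workflow (the file and resource names here are illustrative):

kubectl create -f mywebservice-deployment.yaml   # submit a config file to the cluster
kubectl get pods                                 # list running pods
kubectl scale deployment mywebservice-deployment --replicas=4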

The fact that Kubernetes has such a detailed set of concepts independent of core Docker is not in itself a bad thing. Kubernetes offers a much richer feature set than core Docker. However, Docker will add more features to compete with Kubernetes, with divergent implementations and divergent or conflicting concepts. This will almost surely repeat the CoreOS/rkt situation, with large portions of the community working on similar but competing solutions. Today, Docker Swarm and Kubernetes target very different use cases (Kubernetes is much more suitable for large production deployments of service-oriented architectures with dedicated cluster-management teams); however, as Docker Native Orchestration matures, it will move into this space.

Feature Set

The full feature set of Kubernetes is much too large to cover in this article, but we will go over some basic concepts and some interesting differentiators. Firstly, Kubernetes uses the concept of Pods as its basic unit of scaling, instead of single containers. Each pod is a set of containers (possibly as few as one) which are always launched on the same node, share the same volumes, and are assigned a virtual IP (VIP) so they can be addressed in the cluster. A Kubernetes spec file for a single pod may look like the following.

apiVersion: v1
kind: Pod
metadata:
  name: mywebservice
spec:
  containers:
  - name: web-1-10
    image: nginx:1.10
    ports:
    - containerPort: 80

Next you have Deployments; these loosely map to what services are in Docker Native Orchestration. You can scale the deployment much like services in Docker Native, and a deployment will ensure the requisite number of containers is running. It is important to note that deployments are only analogous to replicated services in Docker Native, as Kubernetes uses the Daemon Set concept to support its equivalent of globally scheduled services. Deployments also support health checks, which use HTTP or TCP reachability or custom exec commands to determine if a container/pod is healthy. Deployments also support rolling deployments with automatic rollback, using the health check to determine if each pod deployment is successful.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mywebservice-deployment
spec:
  replicas: 2 # We want two pods for this deployment
  template:
    metadata:
      labels:
        app: mywebservice
    spec:
      containers:
      - name: web-1-10
        image: nginx:1.10
        ports:
        - containerPort: 80
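
As a sketch of the health checks mentioned above, a probe could be added under the container entry in the template (the probe values here are illustrative):

        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5   # wait before the first check
          periodSeconds: 10        # then check every ten seconds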

Next you have Kubernetes Services, which provide simple load balancing to a deployment. All pods in a deployment will be registered with a service as they come and go, and services can also abstract away multiple deployments: if you want to run rolling deployments, you will register two Kubernetes deployments with the same service, then gradually add pods to one while reducing pods from the other. You can even do blue-green deployments, where you point the service at a new Kubernetes deployment in one go. Lastly, services are also useful for service discovery within your Kubernetes cluster: all services in the cluster get a VIP and are exposed to all pods in the cluster as Docker-link-style environment variables as well as through the integrated DNS server.
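
A minimal service spec matching the deployment above might look like the following sketch:

apiVersion: v1
kind: Service
metadata:
  name: mywebservice
spec:
  selector:
    app: mywebservice   # route to pods carrying this label
  ports:
  - port: 80            # port exposed on the service VIP
    targetPort: 80      # container port to forward to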

In addition to basic services, Kubernetes supports Jobs, Scheduled Jobs, and Pet Sets.
Jobs create one or more pods and wait until they terminate. A job makes sure that the specified number of pods terminate successfully. For example, you may start a job to process an hour of business intelligence data for each hour of the previous day. You would launch a job with 24 pods for the previous day, and once they all run to completion, the job is done. A scheduled job, as the name suggests, is a job that is automatically run on a given schedule. In our example, we would probably make our BI processor a daily scheduled job. Jobs are great for issuing batch-style workloads to your cluster: tasks that are not services which always need to be up, but rather tasks that need to run to completion and then be cleaned up.
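
A sketch of the BI example as a job spec (the image name is hypothetical):

apiVersion: batch/v1
kind: Job
metadata:
  name: bi-processor
spec:
  completions: 24   # one successful pod per hour of the previous day
  parallelism: 4    # run up to four pods at a time
  template:
    spec:
      containers:
      - name: bi-processor
        image: example/bi-processor:latest   # hypothetical image
      restartPolicy: OnFailure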

Another extension that Kubernetes provides to basic services is Pet Sets. Pet Sets support stateful service workloads that are normally very difficult to containerize, including databases and real-time connected applications. Pet Sets provide stable hostnames for each “pet” in the set. Pets are indexed; for example, pet5 will be addressable independently of pet3, and if the third pet container/pod dies, it will be relaunched on a new host with the same index and hostname.

Pet Sets also provide stable storage using persistent volumes, i.e. if pet1 dies and is relaunched on another node, it will get its volumes remounted with the original data. Furthermore, you can also use NFS or other network file systems to share volumes between containers, even if they are launched on different hosts. This addresses one of the most problematic issues when transitioning from single-host to distributed Docker environments.

Pet Sets also provide peer discovery. With normal services you can discover other services (through Docker linking, etc.); however, discovering other containers within a service is not possible. This makes gossip-protocol-based services such as Cassandra and ZooKeeper very difficult to launch.

Lastly, Pet Sets provide startup and tear-down ordering, which is essential for persistent, scalable services such as Cassandra. Cassandra relies on a set of seed nodes, and when you scale your service up and down, you must ensure the seed nodes are the first ones to be launched and the last to be torn down. At the time of writing, Pet Sets are one of the big differentiators for Kubernetes, as persistent stateful workloads are almost impossible to run at production scale on Docker without this support.
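
For illustration, a minimal Pet Set spec under the alpha API available at the time of writing (apps/v1alpha1) might look like the sketch below; the pets would get stable hostnames such as cassandra-0, cassandra-1 and cassandra-2:

apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: cassandra
spec:
  serviceName: cassandra   # a headless service with this name must exist
  replicas: 3
  template:
    metadata:
      labels:
        app: cassandra
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"   # alpha-gate annotation
    spec:
      containers:
      - name: cassandra
        image: cassandra:3.9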

Kubernetes also provides namespaces to isolate workloads on a cluster, secrets management and auto-scaling support. All these features and more mean that Kubernetes is able to support large, diverse workloads in a way that Docker Swarm is just not ready for at the moment.

Marathon

Basic Architecture

Another common orchestration setup for large-scale clusters is to run Marathon on top of Apache Mesos. Mesos is an open source cluster management system that supports a diverse array of workloads. Mesos is composed of a Mesos agent running on each host in the cluster, which reports its available resources to the master. There can be one or more Mesos masters, which coordinate using a ZooKeeper cluster. At any given time, one of the master nodes is active, chosen through a master election process. The master can issue tasks to any of the Mesos agents, and the agents report on the status of those tasks. Although you can issue tasks through the API, the normal approach is to use a framework on top of Mesos. Marathon is one such framework, and it provides support for running Docker containers (as well as native Mesos containers).

Usability

Again compared to Swarm, Marathon has a fairly steep learning curve, as it does not share most of its concepts and terminology with Docker. However, Marathon is not as feature rich as Kubernetes, and is thus easier to learn. Still, the complexity of managing a Marathon deployment comes from the fact that it is layered on top of Mesos, and hence there are two layers of tools to manage. Furthermore, some of the more advanced features of Marathon, such as load balancing, are only available as additional frameworks that run on top of Marathon. Some features, such as authentication, are only available if you run Marathon on top of DC/OS, which in turn runs on top of Mesos – adding yet another layer of abstraction to the stack.

Feature Set

To define services in Marathon, you need to use its internal JSON format, as shown below. A simple definition like the one below will create a service with two instances, each running the nginx container.

{
  "id": "MyService",
  "instances": 2,
  "container": {
    "type": "DOCKER",
    "docker": {
      "network": "BRIDGE",
      "image": "nginx:latest"
    }
  }
}

A slightly more complete version of the above definition is shown below; we now add port mappings and a health check. In the port mapping, we specify a container port, which is the port exposed by the Docker container. The host port defines which port on the public interface of the host is mapped to the container port. If you specify 0 for the host port, a random port is assigned at run time. Similarly, we may optionally specify a service port. The service port is used for service discovery and load balancing, as described later in this section. Using the health check, we can now do both rolling (default) and blue-green deployments.

{
  "id": "MyService",
  "instances": 2,
  "container": {
    "type": "DOCKER",
    "docker": {
      "network": "BRIDGE",
      "image": "nginx:latest",
      "portMappings": [
        { "containerPort": 8080, "hostPort": 0, "servicePort": 9000, "protocol": "tcp" }
      ]
    }
  },
  "healthChecks": [
    {
      "protocol": "HTTP",
      "portIndex": 0,
      "path": "/",
      "gracePeriodSeconds": 5,
      "intervalSeconds": 20,
      "maxConsecutiveFailures": 3
    }
  ]
}

In addition to single services, you can define Marathon Application Groups, with a nested tree structure of services. The benefit of defining applications in groups is the ability to scale the entire group together. This can be very useful in microservice stacks where tuning individual services can be difficult. As of now, the scaling assumes that all services will scale at the same rate, so if you require ‘n’ instances of one service, you will get ‘n’ instances of all services.

{
  "id": "/product",
  "groups": [
    {
      "id": "/product/database",
      "apps": [
         { "id": "/product/mongo", ... },
         { "id": "/product/mysql", ... }
       ]
    },{
      "id": "/product/service",
      "dependencies": ["/product/database"],
      "apps": [
         { "id": "/product/rails-app", ... },
         { "id": "/product/play-app", ... }
      ]
    }
  ]
}

In addition to being able to define basic services, Marathon can also do scheduling of containers based on specified constraints, as detailed here, including specifying that each instance of the service must be on a different physical host: "constraints": [["hostname", "UNIQUE"]]. You can use the cpus and mem tags to specify the resource utilization of that container. Each Mesos agent reports its total resource availability, hence the scheduler can place workloads on hosts in an intelligent fashion.
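
Putting these together, a sketch of the earlier definition with resource settings and a uniqueness constraint added (the values are illustrative):

{
  "id": "MyService",
  "instances": 2,
  "cpus": 0.5,
  "mem": 256,
  "constraints": [["hostname", "UNIQUE"]],
  "container": {
    "type": "DOCKER",
    "docker": {
      "network": "BRIDGE",
      "image": "nginx:latest"
    }
  }
}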

By default, Mesos relies on the traditional Docker port mapping and external service discovery and load balancing mechanisms. However, recent beta features add support for DNS-based service discovery using Mesos DNS, or load balancing using Marathon LB. Mesos DNS is an application that runs on top of Mesos and queries the Mesos API for a list of all running tasks and applications. It then creates DNS records for nodes running those tasks. All Mesos agents then need to be manually updated to use the Mesos DNS service as their primary DNS server. Mesos DNS uses the hostname or IP address used to register Mesos agents with the master, and port mappings can be queried as SRV records. Since Mesos DNS works on agent hostnames, the host network ports must be exposed and hence must not collide. Mesos DNS does not provide a way to refer to individual containers persistently for stateful workloads, such as we would be able to do using Kubernetes pet sets. In addition, unlike Kubernetes VIPs, which are addressable from any container in the cluster, we must manually update /etc/resolv.conf to point at the set of Mesos DNS servers and update the configuration if the DNS servers change. Marathon LB uses the Marathon event bus to keep track of all service launches and tear-downs. It then launches an HAProxy instance on agent nodes to relay traffic to the requisite service node.
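
As a sketch, assuming an application with the Marathon id myservice and the default marathon.mesos domain, a lookup from an agent might look like:

# resolve the IPs of all tasks for the application
dig myservice.marathon.mesos
# query host ports as SRV records
dig _myservice._tcp.marathon.mesos SRV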

Marathon also has beta support for persistent volumes as well as external persistent volumes. However, both of these features are in a very raw state. Persistent volumes are only persistent on a single node across container restarts; volumes are deleted if the application using them is deleted (however, the actual data on disk is not deleted and must be removed manually). External volumes require DC/OS and currently only allow your service to scale to a single instance.

Final Verdict

Today we have looked at three options for Docker container orchestration: Docker Native (Swarm), Kubernetes and Mesos/Marathon. It is difficult to pick one system to recommend, because the best system is highly dependent on your use case, scale and history. Furthermore, all three systems are under heavy development, and some of the features covered are in beta and may be changed, removed or replaced very soon.

Docker Native gives you the quickest ramp-up with little to no vendor lock-in beyond dependence on Docker. The dependence on Docker is not a big issue, since it has become the de facto container standard. Given the lack of a clear winner in the orchestration wars, and the fact that Docker Native is the most flexible approach, it is a good choice for simple web/stateless applications. However, Docker Native is very bare bones at the moment, and if you need to get complicated, larger-scale applications to production, you need to choose either Mesos/Marathon or Kubernetes.

Choosing between Mesos/Marathon and Kubernetes is also not easy, as both have their pros and cons. Kubernetes is certainly the more feature-rich and mature of the two, but it is also a very opinionated piece of software. We think a lot of those opinions make sense, but Kubernetes does not have the flexibility of Marathon. This makes sense when you consider the rich history of non-Docker, non-containerized applications that can run on Mesos in addition to Marathon (e.g. Hadoop clusters). If you are doing a greenfield implementation and either don’t have strong opinions about how to lay out clusters, or your opinions agree with those of Google, then Kubernetes is a better choice. Conversely, if you have large, complicated legacy workloads that will gradually shift over to containers, then Mesos/Marathon is the way to go.

Another concern is scale: Kubernetes has been tested to thousands of
nodes, whereas Mesos has been tested to tens of thousands of nodes. If
you are launching clusters with tens of thousands of nodes, you’ll want
to use Mesos for the scalability of the underlying infrastructure – but
note that scaling advanced features such as load balancing to that range
will still be left to you. However, at that scale, few (if any)
off-the-shelf solutions work as advertised without careful tuning and
monkey patching.

Usman is a server and infrastructure engineer, with experience in building large-scale distributed services on top of various cloud platforms. You can read more of his work at techtraits.com, or follow him on Twitter @usman_ismail or on GitHub.


SUSE to showcase software-defined infrastructure @ SUSECON 16 w/ help from friends

Wednesday, 19 October, 2016


Today we announced our SUSECON 2016 sponsors, keynote speakers and breakout session details.  This year’s keynotes will take place with a little help from our friends at Fujitsu, Hewlett Packard Enterprise, IBM, Intel and SAP, including Katsue Tanaka (Fujitsu Sr VP and Head of the Platform Software Business Unit), Scott Farrand (VP of HPE Servers Platform Software), Kathy Bennett (IBM VP of Global ISV Technical Enablement and Support), Figen Ülgen, Ph.D. (Intel GM of High Performance Computing Platform Software and Cloud) and Brian Vink (SAP Sr VP of Data Management Products).

In addition to sharing the stage with remarkable companies that are constantly driving innovation, we are pleased to announce SUSECON 2016 sponsors including cornerstone sponsors Fujitsu, Hewlett Packard Enterprise, IBM, Intel and SAP and platinum sponsors ARM, Cisco and Lenovo. Our Gold sponsors include DataVard, Micro Focus, Microsoft Azure, NetApp and Wipro.

Read the entire press release to learn session topics and more about the technology that will be showcased.

Rancher Labs Introduces Global Partner Network

Tuesday, 18 October, 2016

Consulting and reseller partner programs expand company’s global reach; service provider program helps partners deliver Containers-as-a-Service and other Rancher-powered offerings

Cupertino, Calif. – October 18, 2016 – Rancher Labs, a provider of container management software, today announced the launch of the Rancher Partner Network, a comprehensive partner program designed to expand the company’s global reach, increase enterprise adoption, and provide partners and customers with tools for success. The program will support consultancies and systems integrators, as well as resellers and service providers worldwide, with initial partners from North and South America, Europe, Asia and Australia. As the only container management platform to ship with fully supported commercial distributions of Kubernetes, Docker Swarm and Mesos, Rancher is unique in its ability to enable partners to deliver container-based solutions using the customer’s choice of orchestration tool.

“Community interest in Rancher’s open and easy-to-use container management platform has shattered expectations, with over a million downloads and over ten million Rancher nodes launched this year alone,” said Shannon Williams, co-founder and vice president of sales and marketing at Rancher Labs. “To help us meet demand within the enterprise, we’re partnering with leading DevOps consultancies, system integrators and service providers around the world. We’re excited and humbled by the strong interest we’ve seen from the partner community, and we’re looking forward to working with our partners to help make containers a reality for our joint customers.”

The Rancher Partner Network

The Rancher Partner Network provides tools and support to meet the unique needs of each type of partner. The Network includes:

  • Consulting partners such as consultancies, system integrators (SIs),
    and agencies focused on helping customers successfully embrace
    digital transformation and rapidly deliver software using modern,
    open processes and technologies.
  • Resellers and OEMs that include Rancher in solutions they deliver to
    customers.
  • Managed services providers (MSPs) and cloud providers offering
    Rancher-based Containers-as-a-Service (CaaS) environments to
    end-users.
  • Application services providers (ASPs) delivering
    Software-as-a-Service (SaaS) and hosted applications on a
    Rancher-powered platform.

Partners benefit from a variety of sales, marketing, product, training and support programs aimed at helping them ensure customer success while capturing a greater share of the rapidly growing container marketplace. Additionally, members of the service provider program can take advantage of a unique pricing model specifically designed for and exclusively available to that community. Prospective partners can learn more about the program and apply by visiting www.rancher.com/partners. Customers can visit the same page to identify Rancher-authorized consultancies, resellers and service providers in their area.

Supporting Quotes

“At Apalia, we have extensive experience delivering software-defined infrastructure and cloud solutions to a variety of enterprise customers in France and Switzerland. As those customers began looking to take advantage of containers, we needed a partner that supported the full range of infrastructure we deliver, as well as emerging options in the space. We’re thrilled to be partnering with Rancher to do so.” – Pierre Vacherand, CTO, Apalia
“As a container and cloud company our clients have diverse levels of expertise and support workloads utilizing Mesos, Kubernetes and Docker. With Rancher’s ease-of-use and excellent support for multiple schedulers this partnership was a natural fit for us.” – Steven Borrelli, Founder & CEO, Asteris

“Our business is delivering mission-critical infrastructure and software solutions to government and enterprise customers in Brazil. To do this, we partner with a variety of IT industry leaders such as Oracle, IBM, Microsoft and Amazon Web Services. Adding the capabilities of Rancher Labs complements all of these and allows us, as a service provider, to easily support the emerging container needs of these customers.” – Hélvio Lima, CEO, BRCloud

“Since 2001, Camptocamp has established itself as a leading supporter of, and contributor to, open source software. Our infrastructure solutions team uses open source to deliver a full range of cloud migration, IT & DevOps automation, and application deployment solutions to customers in Switzerland, France, Germany and beyond. Rancher helps us deliver modern, containerized applications across a wide range of cloud and on-premises infrastructure, and easily works with the other open source products we like.” – Claude Philipona, Managing Partner, Camptocamp

“Containers are an important element of the emerging enterprise execution platform. Rancher’s Application Catalog allows Data Essential customers to deploy custom applications as well as big data and analytics software with a single click, allowing their staff to get more done, more quickly. As one of Rancher Labs’ first partners in Europe, the relationship has been invaluable in helping us address this need.” – Jonathan Basse, Founder, Data Essential

“At Industrie IT, we are committed to helping companies succeed with, and benefit from, top technologies available today. Containers and DevOps have become a major part of this, and we’re thrilled to be partnering with Rancher Labs to enable customers to take advantage of the benefits.” – Ameer Deen, Director of On Demand Platforms, Industrie IT

“We were quick to recognize the extremely vibrant community that has formed around Rancher and its products, having leaned on it for support during an early deployment. Establishing ourselves as experts through active contributions in the community has led to a number of opportunities for us in Europe and Asia. We’re excited to take advantage of new ways to engage with Rancher through this program.” – Girish Shilamkar, Founder and CEO, InfraCloud

“Nuxeo is a fast-growing company offering an open source, hyperscale digital asset platform used by customers like Electronic Arts, TBWA, and the U.S. Navy. Containers are an important part of our cloud strategy, and we depend on our partner Rancher to make them easy to use and manage. The Service Provider program provides a flexible product with an equally flexible pricing model and support, making it a perfect & future-proofed fit for our cloud computing efforts.” – Eric Barroca, CEO, Nuxeo

“Object Partners has been developing custom software solutions for our clients since 1996. Our solutions enable our clients to leverage the latest in language, framework, and cloud technologies to lower costs and maximize profits. Rancher is helping us to bring the latest in container technologies to our clients. Its intuitive, comprehensive, and easy-to-manage platform is enabling our clients to create scalable, highly available, and continuously deployed platforms for their applications. Rancher’s new partner program will be a great resource for us as we continue to grow our DevOps business.” – John Engelman, Chief Software Technologist, Object Partners

“The Persistent ‘Software 4.0’ vision is about helping enterprises in healthcare, financial services and other industries put the people, processes and tools in place in order to build software-driven businesses and manage software-driven projects at speed. The container technology that Rancher has developed is enabling DevOps teams in realizing this vision.” – Sudhir Kulkarni, President Digital, Persistent Systems

“Our engineers and consultants have come to love Rancher’s open source product, leading to multiple successful customer deployments and happy customers. We’re excited for the launch of Rancher’s formal partner program and looking forward to continued success with their team.” – Josh Lindenbaum, VP, Business & Corporate Development, Redapt

“Treeptik is building upon an extensive history of delivering cloud and Java/JEE-based solutions for European enterprises, helping customers transform all aspects of the software development process. We were early to recognize the significance of containers, and our team has been early pioneers of using Docker, Mesos and Kubernetes. We’re big fans of Rancher because it makes this easier than any other tool out there, and we’re excited to be a part of the company’s partner program.” – Fabien Amico, Founder & CTO, Treeptik

Supporting Resources

  • Introducing the Rancher Partner Network
  • Partner Network Program page
  • Partner Network directory
  • Company Blog
  • Twitter

About Rancher Labs

Rancher Labs builds innovative, open source container management software for enterprises leveraging containers to accelerate software development and improve IT operations. With commercially supported distributions of Kubernetes, Mesos, and Docker Swarm, our flagship Rancher platform allows users to manage all aspects of running containers in development and production environments. For additional information, please visit www.rancher.com.

Media Contact: Eleni Laughlin, Mindshare PR


Cloud is on the Move: What's Driving Private Cloud Adoption?

Tuesday, 18 October, 2016

I’ve worked in technology marketing for some time. I’d like to say how long but honestly it’s only going to date me and no one likes to talk about age!

I think the one thing that is really fascinating is the move towards cloud.  10 years ago we would not have been considering this technology – particularly OpenStack, which didn’t even exist that long ago.  In fact, it was only in 2010 that the project* was developed, as a joint effort between Rackspace Hosting and NASA. Now open source technology and OpenStack are very much a fundamental part of mainstream enterprise computing environments.  Recent independent research by SUSE found that around 90% of large companies have now implemented a private cloud and that 81% are using or planning to use OpenStack.

So what’s driving private cloud adoption?


A document written by Mark Smith, SUSE’s expert on the cloud, outlines 5 business reasons that companies are making the move and enjoying the benefits of private cloud. Cost savings and agility are but two of these benefits; I’ll leave you to read the rest.

So why is OpenStack such a remarkable choice for your private cloud solution?

Most of us have technology solutions that have grown up over a few decades at least, unless you’re a company born in this age, like Netflix.  Most systems are therefore a jumble of different technologies, infrastructures and platforms.  They’re also something you’ve heavily invested in, so moving directly to a brand new datacenter is just not economically viable.  So you’re stuck between the drive to be more competitive and for IT to provide increased agility, and the need to keep costs low by utilizing your existing systems.

So why does OpenStack offer you a good choice for private cloud?  Mainly because, being non-proprietary, it’s designed to work with a variety of hypervisors and to utilize existing hardware and software, which in turn makes it easier for you to transform your existing datacenter and maximize the investment in your existing technology.  Plus, being open source, it gives you choice and flexibility.

As you and I have already seen over the past few decades, the challenges of IT are not going away, and the demands just become stronger.  So I’d recommend that you consider reading the article on why OpenStack should be on your shortlist.  It might help you in choosing your private cloud solution.

*Wikipedia