Rancher and Spotinst partner to introduce a new model for utilizing Docker orchestration tools on spot instances

Monday, 16 November, 2015

![spotinstlogo](https://cdn.rancher.com/wp-content/uploads/2015/11/16025649/spotinstlogo.png)
We are very excited to announce a new partnership with Spotinst today to deliver intelligent management and migration of container workloads running on spot instances. With this new solution, we have developed a simple, intuitive way to run any container workload reliably on spot instances, at a fraction of the cost of on-demand infrastructure. Since the dawn of data centers we’ve seen continuous improvements in utilization and cost efficiency. But as [Jevons’ Paradox](https://en.wikipedia.org/wiki/Jevons_paradox) observes, the more efficiently we consume a resource, the more of that resource we consume. So we are always seeking the newest, fastest, most optimized version of everything.

How it works:

Spotinst is a SaaS platform that enables reliable, highly available use
of AWS Spot Instances and Google Preemptible VMs with typical savings of
70-90%.

We’ve worked with the team at Spotinst to integrate directly with the Rancher API. The integration utilizes Docker “checkpoint and resume” (based on the CRIU project). Using metrics and business rules provided by Spotinst, Rancher can freeze any container and resume it on any other instance, automating the process a typical DevOps team might otherwise implement by hand to manage container deployment.
![rancher-spotinst-1](https://www.suse.com/c/wp-content/uploads/2021/09/rancher_blog_rancher-spotinst-1.png)
For example, if Spotinst identifies that the spot instance a container is running on is about to be terminated (with a 4–7 minute warning), Spotinst will instruct Rancher to pause that container and relocate it to another suitable instance.
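The freeze-and-resume step can be sketched with Docker's checkpoint commands. This is an illustrative sequence only: checkpoint support is an experimental Docker feature that requires CRIU installed on the host, and the container and checkpoint names here are hypothetical, not part of the Spotinst integration itself.

```shell
# On the instance scheduled for termination: snapshot the running
# container's full state (memory, file descriptors, etc.) to disk.
docker checkpoint create game-server pre-termination

# After copying the checkpoint data to the replacement instance,
# resume the container exactly where it left off.
docker start --checkpoint pre-termination game-server
```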
![rancher-spotinst-2](https://www.suse.com/c/wp-content/uploads/2021/09/rancher_blog_rancher-spotinst-2.png)

Unprecedented Availability for Online Gaming

While pizza-box servers, blade racks and eventually virtualization technology paved the way for modern data centers, today’s cloud computing customer expects increasingly higher performance and availability in everything from online gaming to genome sequencing. An awesome example of how Docker live migration can deliver high availability can be seen in this presentation from DockerCon earlier this year.

The presenters show how they containerized Quake, ran it on a DigitalOcean server in Singapore, and then live migrated it to Amsterdam with the player experiencing practically zero interruption to his game. Using “checkpoint and resume”, they didn’t just stop the container; they took an entire running process, with all its memory, file descriptors, etc., and moved and resumed it halfway around the world.

How it works

![rancher-spotinst-4](https://www.suse.com/c/wp-content/uploads/2021/09/rancher_blog_rancher-spotinst-4.png)

We’re really excited about the potential of live migrating containers,
and this partnership with Spotinst. By moving workloads to spot
instances, organizations can dramatically reduce the cost of cloud
resources.

To try out the new service, you can sign up for a Spotinst account and directly connect it to your
running Rancher deployment, via your API keys.

To learn more, please request a demonstration from one of our engineers.


Introducing Hyper-Converged Infrastructure for Containers, Powered by Rancher and Redapt

Wednesday, 11 November, 2015

converged nodes

Hyper-Converged Infrastructure is one of the greatest innovations in the modern data center. I have been a big fan ever since I heard the analogy “iPhone for the data center” from Nutanix, the company that invented hyper-converged infrastructure.
In my previous roles as CEO of Cloud.com, creator of CloudStack, and CTO
of Citrix’s CloudPlatform Group, I helped many organizations transform
their data centers into infrastructure clouds. The biggest challenge was
always how to integrate a variety of technologies from multiple vendors
into a coherent and reliable cloud platform. Hyper-converged infrastructure is an elegant solution to this complexity: it makes infrastructure consumable by offering a simple, turn-key experience. Hyper-convergence hides the underlying complexity and makes the lives of data center operators much better. Typically, hyper-converged infrastructure is used to run virtual machines (VMs), the most popular workload in data centers today. The nature of data center workloads, however, is changing. In the last year, Docker containers have become a significant workload type in data centers. Because of this, we are beginning to see market demand for purpose-built and optimized infrastructure solutions for containers. Today, our team at Rancher announced a hyper-converged infrastructure platform for containers, powered by Rancher and Redapt. This is a turn-key solution for standing up a complete container service platform in the data center. Organizations no longer need to source hardware, deploy virtualization and cloud platforms, and integrate separate container orchestration systems.

Support for both VMs and Container Infrastructure

We designed the solution to support both VMs and containers, following the approach used by Google to run virtual machines in containers. We have experimented with this approach in our RancherVM project since April and have received a lot of positive feedback from users. A benefit of running VMs inside containers is the ability to leverage the same tools to manage both VMs and containers. Because VMs and containers behave in similar ways, the Rancher CLI and UI we have developed for Docker containers apply seamlessly to VMs. We use RancherOS as the base operating system for the converged infrastructure platform; the RancherOS kernel has built-in support for KVM. The following figure depicts how Rancher and RancherOS work together to form the complete software stack for our hyper-converged infrastructure solution.
Rancher Converged Infrastructure

Containerized Storage Services

All hyper-converged infrastructure solutions include a distributed storage implementation. By leveraging our other major announcement today, Persistent Storage Services, this hyper-converged infrastructure solution has the unique ability to use multiple distributed storage implementations. Users have the freedom to deploy the software storage platform that suits the needs of their applications. This approach reduces failure domains and improves reliability: the failure of a distributed storage deployment can only impact the applications that consume that storage. Users can deploy open source and commercial storage software, as long as the storage software is packaged as Docker containers. We are incorporating Gluster and NexentaEdge into our hyper-converged infrastructure platform, and plan to support additional storage products in the future.

converged infrastructure for containers

Access to the Docker Image Ecosystem

Successful hyper-converged infrastructure solutions often target popular application workloads, such as databases or virtual desktops. The Docker ecosystem offers a rich set of applications that can run on the Rancher hyper-converged infrastructure solution; DockerHub alone, for example, contains hundreds of thousands of Docker images. In addition, Rancher makes it easy to run not just single containers, but large application clusters orchestrated by container frameworks such as Compose, Swarm, and Kubernetes. Rancher Labs has certified and packaged a set of popular DevOps tools. With a single click, users can deploy, for example, an entire ELK cluster on the hyper-converged infrastructure.
catalog

Our Partnership with Redapt

redaptlogo

We have known and worked with the Redapt team for many years. Back in 2011, my team at Cloud.com collaborated with Redapt to build one of the largest CloudStack-powered private clouds at the time, consisting of over 40,000 physical servers. We were deeply impressed by the technical skill, ability to innovate, and professionalism of the Redapt team. Creating a hyper-converged infrastructure solution requires close collaboration between the hardware and software vendors. We are fortunate to be able to work with Redapt again to bring to market the industry’s first hyper-converged infrastructure solution for containers.

Availability

Rancher and Redapt are working with early access customers now. We plan to make the hyper-converged infrastructure solution generally available in the first half of 2016. Please request a demo if you would like to speak with one of our engineers about converged infrastructure, or register for our next online meetup, where we will be demonstrating this new functionality.

Sheng Liang is the CEO and co-founder of Rancher Labs. You can follow him on Twitter at @shengliang.


Deploying a scalable Jenkins cluster with Docker and Rancher

Thursday, 5 November, 2015

Build a CI/CD Pipeline with Kubernetes and Rancher
Recorded Online Meetup of best practices and tools for building pipelines with containers and kubernetes.

Containerization brings several benefits to traditional CI platforms where builds share hosts: build dependencies can be isolated; applications can be tested against multiple environments (e.g., testing a Java app against multiple versions of the JVM); on-demand build environments can be created with minimal stickiness to ensure test fidelity; and Docker Compose can be used to quickly bring up environments which mirror development environments. Lastly, the inherent isolation offered by Docker Compose-based stacks allows for concurrent builds — a sticking point for traditional build environments with shared components.

One of the immediate benefits of containerization for CI is that we can
leverage tools such as Rancher to manage distributed build environments
across multiple hosts. In this article, we’re going to launch a
distributed Jenkins cluster with Rancher Compose. This work builds upon earlier work by one of the authors, and further streamlines the process of spinning up and scaling a Jenkins stack.

Our Jenkins Stack

jenkins_master_slave
For our stack, we’re using Docker-in-Docker (DIND) images for the Jenkins master and slave, running on top of Rancher compute nodes launched in Amazon EC2. With DIND, each Jenkins container runs a Docker daemon within itself. This allows us to create build pipelines for dockerized applications with Jenkins.
Prerequisites

  • AWS EC2 account
  • IAM credentials for docker-machine
  • Rancher Server v0.32.0+
  • Docker 1.7.1+
  • Rancher Compose
  • Docker Compose

Setting up Rancher

Step 1: Setup an EC2 host for Rancher server

First things first: we need an EC2 instance to run the Rancher server. We recommend going with the Ubuntu 14.04 AMI for its up-to-date kernel. Make sure to configure the security group for the EC2 instance with access to port 22 (SSH) and port 8080 (Rancher web interface):

launch_ec2_instance_for_rancher_step_2

Once the instance starts, the first order of business is to install the latest version of Docker by following the steps below (for Ubuntu 14.04):

  1. sudo apt-get update
  2. curl -sSL https://get.docker.com/ | sh (requires sudo password)
  3. sudo usermod -aG docker ubuntu
  4. Log out and log back in to the instance

At this point you should be able to run docker without sudo.
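For convenience, the four steps above can be run as a single sequence (this assumes Ubuntu 14.04 and the default ubuntu user):

```shell
sudo apt-get update
# Docker's official install script; prompts for the sudo password
curl -sSL https://get.docker.com/ | sh
# allow the ubuntu user to run docker without sudo
sudo usermod -aG docker ubuntu
# log out and back in for the group change to take effect, then verify:
docker info
```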

Step 2: Run and configure Rancher

To install and run the latest version of Rancher (v0.32.0 at the time of writing), follow the instructions in the docs. In a few minutes your Rancher server should be up and ready to serve requests on port 8080. If you browse to http://YOUR_EC2_PUBLIC_IP:8080/ you will be greeted with a welcome page and a notice asking you to configure access. This is an important step to prevent unauthorized access to your Rancher server. Head over to the settings section and follow the instructions to configure access control.

rancher_setup_step_1

We typically create a separate environment for hosting all developer-facing tools (e.g., Jenkins, Seyren, Graphite) to isolate them from the public-facing live services. To this end, we’re going to create an environment called *Tools*. From the environments menu (top left), select “manage environments” and create a new environment. Since we’re going to be working in this environment exclusively, let’s go ahead and make it our default by selecting “set as default login environment” from the environments menu.

rancher_setup_step_2_add_tools_env

The next step is to tell Rancher about our hosts. For this tutorial, we’ll launch all hosts with Ubuntu 14.04. Alternatively, you can add an existing host using the custom host option in Rancher. Just make sure that your hosts are running Docker 1.7.1+.

rancher_setup_step_3_add_ec2_host

One of the hosts (JENKINS_MASTER_HOST) is going to run the Jenkins master and needs some additional configuration. First, we need to open up access to port 8080 (the default Jenkins port). You can do that by updating the security group used by that instance from the AWS console. In our case, we updated the security group (“rancher-machine”) which was created by Rancher. Second, we need to attach an additional EBS-backed volume to host the Jenkins configuration. Make sure that you allocate enough space for the volume, based on how large your build workspaces tend to get. In addition, make sure the flag “delete on termination” is unchecked. That way, the volume can be re-attached to another instance and backed up easily:

[![launch_ec2_ebs_volume_for_jenkins](https://cdn.rancher.com/wp-content/uploads/2015/08/01132712/launch_ec2_ebs_volume_for_jenkins.png)](https://cdn.rancher.com/wp-content/uploads/2015/08/01132712/launch_ec2_ebs_volume_for_jenkins.png)

Lastly, let’s add a couple of labels to the JENKINS_MASTER_HOST: 1) a label called “profile” with the value “jenkins”, and 2) a label called “jenkins-master” with the value “true”. We’re going to use these labels later to schedule the master and slave containers on our hosts.
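As an aside, if you script host registration instead of using the UI, the Rancher agent can pick up these same labels at registration time through the CATTLE_HOST_LABELS environment variable. The snippet below is a sketch: the agent image tag, server address and registration token are placeholders for your own values.

```shell
# register a host with Rancher, applying both scheduling labels up front
sudo docker run -d --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e CATTLE_HOST_LABELS='profile=jenkins&jenkins-master=true' \
  rancher/agent:latest http://YOUR_RANCHER_SERVER:8080/v1/scripts/YOUR_REGISTRATION_TOKEN
```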

Step 3: Download and install rancher-compose CLI

As a last step, we need to install the rancher-compose CLI on our development machine. To do that, head over to the applications tab in Rancher and download the rancher-compose CLI for your system. All you need to do is add the path to your rancher-compose CLI to your *PATH* environment variable.

rancher_setup_step_5_install_rancher_compose

With that, our rancher server is ready and we can now launch and manage
containers with it.

Launching Jenkins stack with Rancher

Step 1: Stack configuration

Before we launch the Jenkins stack, we need to create a new Rancher API key from the API & Keys section under settings. Save the API key pair some place safe, as we’re going to need it with rancher-compose. For the rest of the article, we refer to the API key pair as RANCHER_API_KEY and RANCHER_API_KEY_SECRET. Next, open up a terminal and fetch the latest version of the Docker and Rancher Compose templates from GitHub:

git clone https://github.com/rancher/jenkins-rancher.git
cd jenkins-rancher

Before we can use these templates, let’s quickly update the configuration. First, open up the Docker Compose file and update the Jenkins username and password to credentials of your choice. Let’s call these credentials JENKINS_USER and JENKINS_PASSWORD; they will be used by the Jenkins slave to talk to the master. Second, update the host labels for the slave and master to match the labels you specified for your Rancher compute hosts. Make sure that io.rancher.scheduler.affinity:host_label has a value of “profile=jenkins” for jenkins-slave. Similarly, for jenkins-master, make sure that the value for io.rancher.scheduler.affinity:host_label is “jenkins-master=true”. This ensures that containers are only launched on the hosts you want to limit them to. For example, we are limiting our Jenkins master to only run on a host with an attached EBS volume and access to port 8080.

jenkins-slave:
  environment:
    JENKINS_USERNAME: jenkins
    JENKINS_PASSWORD: jenkins
    JENKINS_MASTER: http://jenkins-master:8080
  labels:
    io.rancher.scheduler.affinity:host_label: profile=jenkins
  tty: true
  image: techtraits/jenkins-slave
  links:
  - jenkins-master:jenkins-master
  privileged: true
  volumes:
  - /var/jenkins
  stdin_open: true
jenkins-master:
  restart: 'no'
  labels:
    io.rancher.scheduler.affinity:host_label: jenkins-master=true
  tty: true
  image: techtraits/jenkins-master
  privileged: true
  stdin_open: true
  volumes:
  - /var/jenkins_home
jenkins-lb:
  ports:
  - '8080'
  tty: true
  image: rancher/load-balancer-service
  links:
  - jenkins-master:jenkins-master
  stdin_open: true

Step 2: Create the Jenkins stack with Rancher compose

Now we’re all set to launch the Jenkins stack. Open up a terminal, navigate to the “jenkins-rancher” directory and type:

rancher-compose --url http://RANCHER_HOST:RANCHER_PORT/v1/ --access-key RANCHER_API_KEY --secret-key RANCHER_API_KEY_SECRET --project-name jenkins --verbose create

The output of the rancher-compose command should look something like:

DEBU[0000] Opening compose file: docker-compose.yml
DEBU[0000] Opening rancher-compose file: /home/mbsheikh/jenkins-rancher/rancher-compose.yml
DEBU[0000] [0/3] [jenkins-slave]: Adding
DEBU[0000] Found environment: jenkins(1e9)
DEBU[0000] Launching action for jenkins-master
DEBU[0000] Launching action for jenkins-slave
DEBU[0000] Launching action for jenkins-lb
DEBU[0000] Project [jenkins]: Creating project
DEBU[0000] Finding service jenkins-master
DEBU[0000] [0/3] [jenkins-master]: Creating
DEBU[0000] Found service jenkins-master
DEBU[0000] [0/3] [jenkins-master]: Created
DEBU[0000] Finding service jenkins-slave
DEBU[0000] Finding service jenkins-lb
DEBU[0000] [0/3] [jenkins-slave]: Creating
DEBU[0000] Found service jenkins-slave
DEBU[0000] [0/3] [jenkins-slave]: Created
DEBU[0000] Found service jenkins-lb
DEBU[0000] [0/3] [jenkins-lb]: Created

Next, verify that we have a new stack with three services:

rancher_compose_2_jenkins_stack_created

Before we start the stack, let’s make sure that the services are
properly linked. Go to your stack’s settings and select “View Graph”
which should display the links between various services:

rancher_compose_3_jenkins_stack_graph

Step 3: Start the Jenkins stack with Rancher compose

To start the stack and all of Jenkins services, we have a couple of
options; 1) select “Start Services” option from Rancher UI, or 2)
invoke rancher-compose CLI with the following command:

rancher-compose --url http://RANCHER_HOST:RANCHER_PORT/v1/ --access-key RANCHER_API_KEY --secret-key RANCHER_API_KEY_SECRET --project-name jenkins --verbose start

Once everything is running, find the public IP of the host running “jenkins-lb” from the Rancher UI and browse to http://HOST_IP_OF_JENKINS_LB:8080/. If everything is configured correctly, you should see the Jenkins landing page. At this point, both your Jenkins master and slave(s) should be running; however, if you check the logs for your Jenkins slave, you will see 404 errors because the slave is unable to connect to the master. We need to configure Jenkins to allow slave connections.

Configuring and Testing Jenkins

In this section, we’ll go through the steps needed to configure and secure our Jenkins stack. First, let’s create a Jenkins user with the same credentials (JENKINS_USER and JENKINS_PASSWORD) that you specified in your Docker Compose configuration file. Next, to enable security for Jenkins, navigate to “manage Jenkins” and select “enable security” from the security configuration. Make sure to specify 5000 as the fixed port for “TCP port for JNLP slave agents”; Jenkins slaves communicate with the master node on this port.

setup_jenkins_1_security

For the Jenkins slave to be able to connect to the master, we first need to install the Swarm plugin. The plugin can be installed from the “manage plugins” section in Jenkins. Once you have the Swarm plugin installed, your Jenkins slave should show up in the “Build Executor Status” tab:

setup_jenkins_2_slave_shows_up

Finally, to complete the master-slave configuration, head over to
“manage Jenkins“. You should now see a notice about enabling master
security subsystem. Go ahead and enable the subsystem; it can be used to
control access between master and slaves:

setup_jenkins_3_master_slave_security_subsystem

Before moving on, let’s configure Jenkins to work with Git and Java based projects. To configure Git, simply install the Git plugin. Then, select “Configure” from the “Manage Jenkins” settings and set up the JDK and Maven installers you want to use for your projects:

setup_jenkins_4_jdk_7

setup_jenkins_5_maven_3

The steps above should be sufficient for building Docker or Maven based Java projects. To test our new Jenkins stack, let’s create a Docker based job. Create a new “Freestyle Project” job named “docker-test”, add a build step of type “Execute shell”, and enter the following commands:

docker -v
docker run ubuntu /bin/echo hello world
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
docker rmi $(docker images -q)

Save the job and run. In the console output, you should see the version
of docker running inside your Jenkins container and the output for other
docker commands in our job.

Note: the stop, rm and rmi commands used in the above shell script stop and clean up all containers and images on the host. Each Jenkins job should only touch its own containers, and therefore we recommend deleting this job after a successful test.
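A safer pattern, sketched below, is to tag the containers a job starts with a label of your own and scope the cleanup to that label. The job=docker-test label is our convention for this example, not anything Jenkins sets:

```shell
# start the test container with an identifying label
docker run --label job=docker-test ubuntu /bin/echo hello world
# stop and remove only this job's containers
docker rm -f $(docker ps -a -q --filter "label=job=docker-test")
```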

Scaling Jenkins with Rancher

This is an area where Rancher really shines; it makes managing and
scaling Docker containers trivially easy. In this section we’ll show
you how to scale up and scale down the number of Jenkins slaves based on
your needs.

In our initial setup, we only had one EC2 host registered with Rancher
and all three services (Jenkins load balancer, Jenkins master and
Jenkins slave) running on the same host. It looks like:

rancher_one_host

We’re now going to register another host by following the instructions:

rancher_setup_step_4_hosts

jenkins_scale_up

To launch more Jenkins slaves, simply click “Scale up” from your “Jenkins” stack in Rancher. That’s it! Rancher will immediately launch a new Jenkins slave container. As soon as the slave container starts, it will connect with the Jenkins master and show up in the list of build hosts:

jenkins_scale_up_2

To scale down, select “edit” from the jenkins-slave settings and adjust the number of slaves to your liking:

jenkins_scale_down

In a few seconds you’ll see the change reflected in Jenkins’ list of available build hosts. Behind the scenes, Rancher uses labels to schedule containers on hosts. For more details on Rancher’s container scheduling, we encourage you to check out the documentation.
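For reference, these scheduling rules are plain labels in the Compose file. Besides the hard host-label affinity used earlier, Rancher's Cattle scheduler also understands negated and soft variants; the snippet below is a sketch, and the ssd=true label is a hypothetical example:

```yaml
jenkins-slave:
  labels:
    # hard requirement: run only on hosts labeled profile=jenkins
    io.rancher.scheduler.affinity:host_label: profile=jenkins
    # soft preference: favor hosts labeled ssd=true if one is available
    io.rancher.scheduler.affinity:host_label_soft: ssd=true
    # exclusion: never run on hosts labeled jenkins-master=true
    io.rancher.scheduler.affinity:host_label_ne: jenkins-master=true
```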

Conclusion

In this article, we built a Jenkins stack with Docker and Rancher. We deployed a multi-node Jenkins platform with Rancher Compose which can be launched with a couple of commands and scaled as needed. Rancher’s cross-node networking allows us to seamlessly scale the Jenkins cluster on multiple nodes, and potentially across multiple clouds, with just a few clicks. Another significant aspect of our Jenkins stack is the DIND containers for the Jenkins master and slave, which allow the Jenkins setup to be readily used for dockerized and non-dockerized applications.

In future articles, we’re going to use this Jenkins stack to create
build pipelines and highlight CI best practices for dockerized
applications. To learn more about managing applications through the
upgrade process, please join our next online meetup where we’ll dive
into the details of how to manage deployments and upgrades of
microservices with Docker and Rancher.

Bilal and Usman are server and infrastructure engineers with experience in building large scale distributed services on top of various cloud platforms. You can read more of their work at techtraits.com, or follow them on Twitter at @mbsheikh and @usman_ismail respectively.
