Continuous Delivery of Everything with Rancher, Drone, and Terraform

Wednesday, 16 August, 2017

It’s 8:00 PM. I just deployed to production, but nothing’s working.
Oh, wait. The production Kinesis stream doesn’t exist, because the
CloudFormation template for production wasn’t updated.
Okay, fix that.
9:00 PM. Redeploy. Still broken. Oh, wait. The production config file
wasn’t updated to use the new database.
Okay, fix that. Finally, it
works, and it’s time to go home. Ever been there? How about the late
night when your provisioning scripts work for updating existing servers,
but not for creating a brand new environment? Or, a manual deployment
step missing from a task list? Or, a config file pointing to a resource
from another environment? Each of these problems stems from separating
the activity of provisioning infrastructure from that of deploying
software, whether by choice or by limitation of tools. The point of
deploying should be to let customers benefit from added value or to
validate a business hypothesis. To accomplish this,
infrastructure and software are both needed, and they normally change
together. Thus, a deployment can be defined as:

  • reconciling the infrastructure needed with the infrastructure that
    already exists; and
  • reconciling the software that we want to run with the software that
    is already running.

With Rancher, Terraform, and Drone, you can build continuous delivery
tools that let you deploy this way. Let’s look at a sample system: a
simple architecture with a single server running two microservices,
[happy-service]
and
[glad-service].
When a deployment is triggered, you want the ecosystem to match this
architecture, regardless of its current state. Terraform is a tool
that allows you to predictably create and change infrastructure and
software. You describe individual resources, like servers and Rancher
stacks, and it will create a plan to make the world match the resources
you describe. Let’s create a Terraform configuration that creates a
Rancher environment for our production deployment:

provider "rancher" {
  api_url = "${var.rancher_url}"
}

resource "rancher_environment" "production" {
  name = "production"
  description = "Production environment"
  orchestration = "cattle"
}

resource "rancher_registration_token" "production_token" {
  environment_id = "${rancher_environment.production.id}"
  name = "production-token"
  description = "Host registration token for Production environment"
}
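
The provider block above references var.rancher_url, which isn’t declared
in this snippet. A minimal declaration might look like the following (the
description text is illustrative; if your Rancher server requires API
keys, the provider can also be given credentials, for example through
access and secret key settings or environment variables):

variable "rancher_url" {
  description = "Base URL of the Rancher API, for example http://rancher.example.com:8080"
}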

Terraform has the ability to preview what it’ll do before applying
changes. Let’s run terraform plan.

+ rancher_environment.production
    description:   "Production environment"
    ...

+ rancher_registration_token.production_token
    command:          "<computed>"
    ...

The pluses and green text indicate resources that need to be
created. Terraform knows that these resources haven’t been created yet,
so it will try to create them. Running terraform apply creates the
environment in Rancher. You can log into Rancher to see it. Now let’s
add an AWS EC2 server to the environment:

# A lookup table of RancherOS AMIs by region
variable "rancheros_amis" {
  default = {
      "ap-south-1" = "ami-3576085a"
      "eu-west-2" = "ami-4806102c"
      "eu-west-1" = "ami-64b2a802"
      "ap-northeast-2" = "ami-9d03dcf3"
      "ap-northeast-1" = "ami-8bb1a7ec"
      "sa-east-1" = "ami-ae1b71c2"
      "ca-central-1" = "ami-4fa7182b"
      "ap-southeast-1" = "ami-4f921c2c"
      "ap-southeast-2" = "ami-d64c5fb5"
      "eu-central-1" = "ami-8c52f4e3"
      "us-east-1" = "ami-067c4a10"
      "us-east-2" = "ami-b74b6ad2"
      "us-west-1" = "ami-04351964"
      "us-west-2" = "ami-bed0c7c7"
  }
  type = "map"
}


# this creates a cloud-init script that registers the server
# as a rancher agent when it starts up
resource "template_file" "user_data" {
  template = <<EOF
#cloud-config
write_files:
  - path: /etc/rc.local
    permissions: "0755"
    owner: root
    content: |
      #!/bin/bash
      for i in {1..60}
      do
      docker info && break
      sleep 1
      done
      sudo docker run -d  --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.1 $${registration_url}
EOF

  vars {
    registration_url = "${rancher_registration_token.production_token.registration_url}"
  }
}

# AWS ec2 launch configuration for a production rancher agent
resource "aws_launch_configuration" "launch_configuration" {
  provider = "aws"
  name = "rancher agent"
  image_id = "${lookup(var.rancheros_amis, var.terraform_user_region)}"
  instance_type = "t2.micro"
  key_name = "${var.key_name}"
  user_data = "${template_file.user_data.rendered}"

  security_groups = [ "${var.security_group_id}"]
  associate_public_ip_address = true
}


# Creates an autoscaling group of 1 server that will be a rancher agent
resource "aws_autoscaling_group" "autoscaling" {
  availability_zones        = ["${var.availability_zones}"]
  name                      = "Production servers"
  max_size                  = "1"
  min_size                  = "1"
  health_check_grace_period = 3600
  health_check_type         = "ELB"
  desired_capacity          = "1"
  force_delete              = true
  launch_configuration      = "${aws_launch_configuration.launch_configuration.name}"
  vpc_zone_identifier       = ["${var.subnets}"]
}
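
The launch configuration and autoscaling group reference several
variables (terraform_user_region, key_name, security_group_id,
availability_zones, subnets) and assume an AWS provider is configured.
Declarations along these lines would make the snippet self-contained;
the descriptions and the default region are placeholders, not values
from the original configuration, and AWS credentials are expected to
come from your usual credential chain:

provider "aws" {
  region = "${var.terraform_user_region}"
}

variable "terraform_user_region" {
  description = "AWS region to launch the Rancher agent host in"
  default = "us-east-1"
}

variable "key_name" {
  description = "Name of an existing EC2 key pair"
}

variable "security_group_id" {
  description = "Security group that allows the Rancher agent traffic"
}

variable "availability_zones" {
  type = "list"
  description = "Availability zones for the autoscaling group"
}

variable "subnets" {
  type = "list"
  description = "VPC subnet IDs for the autoscaling group"
}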

We’ll put these in the same directory as environment.tf, and run
terraform plan again:

+ aws_autoscaling_group.autoscaling
    arn:                            ""
    ...

+ aws_launch_configuration.launch_configuration
    associate_public_ip_address: "true"
    ...

+ template_file.user_data
    ...

This time, you’ll see that the rancher_environment resource is missing
from the plan. That’s because it already exists, and Terraform knows it
doesn’t have to create it again. Run terraform apply, and after a few
minutes, you should see a server show up in Rancher. Finally, we want to
deploy the happy-service and glad-service onto this server:

resource "rancher_stack" "happy" {
  name = "happy"
  description = "A service that's always happy"
  start_on_create = true
  environment_id = "${rancher_environment.production.id}"

  docker_compose = <<EOF
    version: '2'
    services:
      happy:
        image: peloton/happy-service
        stdin_open: true
        tty: true
        ports:
            - 8000:80/tcp
        labels:
            io.rancher.container.pull_image: always
            io.rancher.scheduler.global: 'true'
            started: $STARTED
EOF

  rancher_compose = <<EOF
    version: '2'
    services:
      happy:
        start_on_create: true
EOF

  finish_upgrade = true
  environment {
    STARTED = "${timestamp()}"
  }
}

resource "rancher_stack" "glad" {
  name = "glad"
  description = "A service that's always glad"
  start_on_create = true
  environment_id = "${rancher_environment.production.id}"

  docker_compose = <<EOF
    version: '2'
    services:
      glad:
        image: peloton/glad-service
        stdin_open: true
        tty: true
        ports:
            - 8001:80/tcp
        labels:
            io.rancher.container.pull_image: always
            io.rancher.scheduler.global: 'true'
            started: $STARTED
EOF

  rancher_compose = <<EOF
    version: '2'
    services:
      glad:
        start_on_create: true
EOF

  finish_upgrade = true
  environment {
    STARTED = "${timestamp()}"
  }
}

This will create two new Rancher stacks: one for the happy service and
one for the glad service. Note the STARTED environment variable, which
is set to timestamp(): because its value changes on every run, each
terraform apply updates the stacks and triggers an upgrade, and the
io.rancher.container.pull_image: always label ensures the latest images
are pulled. Running terraform plan once more will show the two new
Rancher stacks:

+ rancher_stack.glad
    description:              "A service that's always glad"
    ...

+ rancher_stack.happy
    description:              "A service that's always happy"
    ...

And running terraform apply will create them. Once this is done,
you’ll have your two microservices deployed onto a host automatically
on Rancher. You can hit your host on port 8000 or on port 8001 to see
the responses from the services.
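
If you’d like terraform apply to print a couple of the identifiers it
created, which is handy for finding things in the Rancher UI, you could
also add outputs like these (an optional addition, not part of the
original configuration):

output "rancher_environment_id" {
  value = "${rancher_environment.production.id}"
}

output "host_registration_url" {
  value = "${rancher_registration_token.production_token.registration_url}"
}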
We’ve created each
piece of the infrastructure along the way in a piecemeal fashion. But
Terraform can easily do everything from scratch, too. Try issuing a
terraform destroy, followed by terraform apply, and the entire
system will be recreated. This is what makes deploying with Terraform
and Rancher so powerful – Terraform will reconcile the desired
infrastructure with the existing infrastructure, whether those resources
exist, don’t exist, or require modification. Using Terraform and
Rancher, you can now create the infrastructure and the software that
runs on the infrastructure together. They can be changed and versioned
together, too. In future blog entries, we’ll look at how to automate
this process on git push with Drone. The code for the Terraform
configuration is hosted on
[github].
The
[happy-service]
and
[glad-service]
are simple nginx Docker containers. Bryce Covert is an engineer at
pelotech. By day, he helps teams accelerate
engineering by teaching them functional programming, stateless
microservices, and immutable infrastructure. By night, he hacks away,
creating point-and-click adventure games. You can find pelotech on
Twitter at @pelotechnology.


Joining as VP of Business Development

Monday, 19 June, 2017

Nick Stinemates, VP Business Development

I am incredibly excited to be
joining such a talented, diverse group at Rancher Labs as Vice President
of Business Development. In this role, I’ll be building upon my
experience of developing foundational and strategic relationships based
on open source technology. This change is motivated by my desire to go
back to my roots, working with small, promising companies with
passionate teams. I joined Docker, Inc. in 2013, just as it started to
bring containers out of the shadows and empower developers to write
software with the tools of their choice, while redefining their
relationship with infrastructure. Now that Docker is available in every
cloud environment, embedded in developer tools, and integrated in
development pipelines, the focus has shifted to making it more efficient
and sustainable for business. As users look for more integrated
solutions, the complexity of interrelated services and software rises
dramatically, giving an advantage to vendors that are proactively
reaching out and collaborating with best-of-breed tools. This is, I
believe, one of Rancher Labs’ strengths.

The Rancher container management
platform implements a layer of infrastructure services and drivers
designed specifically to power containerized applications. Since
networking, storage, load balancing, DNS, and security services are
deployed as containers, Rancher is in a unique position to integrate
technology efficiently, holistically, and at scale. Similarly, Rancher
also makes ISV and open source applications available via
its application catalog. The public
catalog delivers more than 90 popular applications and development
tools, many of which are contributed by the Rancher community. In
addition to further developing the Rancher ecosystem via technology and
ISV partnerships, I will be working to expand the Rancher Labs Partner
Network. We will be building a
comprehensive partner program designed to expand the company’s global
reach, increase enterprise adoption, and provide partners and customers
with tools for success. From what I can tell after my first week, I am
in the right place. I’m looking forward to becoming part of the Rancher
Labs family, and collaborating with the broader ecosystem while
developing new relationships. As for immediate plans, I am coming up to
speed as fast as I can, and spending as much time talking to as many
people in the ecosystem as possible. If you’d like to explore
opportunities to collaborate, please consider becoming a
partner. Nick is the
Vice President of Business Development at Rancher Labs where he is
focused on defining and executing Partner strategy. Prior to joining
Rancher Labs, Nick was the Vice President of Business Development and
Technical Alliances at Docker for four years. At Docker, Nick was
responsible for creating and driving the overall partner engagement and
strategy, as well as cultivating many company-defining strategic
alliances. Nick has over 15 years’ experience participating in and
contributing to the open source ecosystem as well as 10 years in
management functions in the enterprise financial space.


Unlocking the Business Value of Docker

Tuesday, 25 April, 2017

Why Smart Container Management is Key

For anyone working in IT, the excitement around containers has been hard
to miss. According to RightScale, enterprise deployments of Docker more
than doubled in 2016, with 29% of organizations using the software versus
just 14% in 2015 [1]. Even more impressive, fully 67%
of organizations surveyed are either using Docker or plan to adopt it.
While many of these efforts are early stage, separate research shows
that over two-thirds of organizations that try Docker report that it
meets or exceeds expectations [2], and the
average Docker deployment quintuples in size in just nine months.

Clearly, Docker is here to stay. While exciting, containers are hardly
new. They’ve existed in various forms for years. Some examples include
BSD jails, Solaris Zones, and more modern incarnations like Linux
Containers (LXC). What makes Docker (originally built on LXC) interesting is that
it provides the tooling necessary for users to easily package
applications along with their dependencies in a format readily portable
between environments. In other words, Docker has made containers
practical and easy to use.

Re-thinking Application Architectures

It’s not a coincidence that Docker exploded in popularity just as
application architectures were themselves changing. Driven by the
global internet, cloud, and the explosion of mobile apps, application
services are increasingly designed for internet scale. Cloud-native
applications are comprised of multiple connected components that are
resilient, horizontally scalable, and wired together via secured virtual
networks. As these distributed, modular architectures have become the
norm, Docker has emerged as a preferred way to package and deploy
application components. As Docker has matured, the emphasis has shifted
from the management of the containers themselves to the orchestration
and management of complete, ready-to-run application services. For
developers and QA teams, the potential for productivity gains is
enormous. By being able to spin up fully assembled dev, test, and QA
environments, and rapidly promote applications to production, major
sources of errors, downtime and risk can be avoided. DevOps teams
become more productive, and organizations can get to market faster with
higher quality software. With opportunities to reduce cost and improve
productivity, Docker is no longer interesting just to technologists –
it’s caught the attention of the board room as well.

New Opportunities and Challenges for the Enterprise

Done right, deploying a containerized application environment can bring
many benefits:

  • Improved developer and QA productivity
  • Reduced time-to-market
  • Enhanced competitiveness
  • Simplified IT operations
  • Improved application reliability
  • Reduced infrastructure costs

While Docker provides real opportunities for enterprise deployments, the
devil is in the details. Docker is complex, comprising a whole
ecosystem of rapidly evolving open-source projects. The core Docker
projects are not sufficient for most deployments, and organizations
implementing Docker from open source wrestle with a variety of
challenges, including managing virtual private networks, managing
databases and object stores, securing applications and registries, and
making the environment easy enough to use that it is accessible to
non-specialists. They are also challenged by skills shortages and the
difficulty of finding people knowledgeable about the various aspects of
Docker administration.
Compounding these challenges, orchestration technologies essential to
realizing the value of Docker are also evolving quickly. There are
multiple competing solutions, including Kubernetes, Docker Swarm and
Mesos. The same is true with private cloud management frameworks.
Because Docker environments tend to grow rapidly once deployed,
organizations are concerned about making a misstep, and finding
themselves locked into a particular technology. In the age of rapid
development and prototyping, what is a sandbox one day may be in
production the next. It is important that the platform used for
evaluation and prototyping has the capacity to scale into production.
Organizations need to retain flexibility to deploy on bare-metal, public
or private clouds, and use their choice of orchestration solutions and
value-added components. For many, the challenge is not whether to deploy
Docker, but how to do so cost-effectively, quickly, and in a way that
minimizes business and operational risk, so that the potential of the
technology can be fully realized.

Reaping the Rewards with Rancher

In a sense, the Rancher® container management platform is to Docker what
Docker is to containers: just as Docker makes it easy to package,
deploy and manage containers, Rancher software does the same for the
entire application environment and Docker ecosystem. Rancher software
simplifies the management of Docker environments helping organizations
get to value faster, reduce risk, and avoid proprietary lock-in.
In a recently published
whitepaper, Unlocking the Value of Docker in the Enterprise, written
with both a technology and a business audience in mind, Rancher Labs
explores the challenges of container management and quantifies some of
the specific areas in which Rancher software can provide value to the
business. To learn more about Rancher,
and understand why it has become the choice of leading organizations
deploying Docker, download the whitepaper and
learn what Rancher can do for your business.

[1]
http://assets.rightscale.com/uploads/pdfs/rightscale-2016-state-of-the-cloud-report-devops-trends.pdf
[2]
https://www.twistlock.com/2016/09/23/state-containers-industry-reports-shed-insight/


Transform your business with these top storage sessions at SUSECON16

Friday, 14 October, 2016

With all eyes on the upcoming election in November that has the world glued to various news outlets watching and wondering which party will trump the other (pun intended), we could all use a break from the political banter that has dominated conversations.  While Republicans and Democrats alike are begging you to join them, I have a better idea.  Ditch the elephants and donkeys and go chameleon green.  Join the SUSE Party at SUSECON 2016 starting Monday, November 7th in Washington, D.C.  The event will feature over 150 sessions, 100+ hours of hands-on technology sessions, 20+ expert led demo sessions and complimentary certification exams.  If storage is your thing, then today is definitely your day.  Below I’ve included my personal list of “can’t miss sessions”, all centered around storage and IT transformation.  Have a look:

TUT91573 – Demystifying Kubernetes: An introduction for Sysadmins & Co.

Tuesday, Nov 8, 2:00 pm – 3:00 pm

Federica Teodori – Project Manager, SUSE

Miquel Sabaté – Software Engineer, SUSE

As more and more users are starting to consider Docker in production environments, people have realized that having Docker alone is not enough. Instead, the community is gearing towards orchestration solutions: tools, frameworks and practices that deal with how containers are deployed on production and how administrators can monitor all this without going crazy. Join us for this brief journey into SUSE’s orchestration choice: Kubernetes.

CAS91463 – Tales from the Trenches: Ceph in the Enterprise with Novacoast

Tuesday, Nov 8, 4:45 pm – 5:45 pm

Thursday, Nov 10, 3:15 pm – 4:15 pm

Daniel Harbison, Novacoast

Dan Elder – Linux Services Manager, Novacoast

While not every organization has yet embraced software defined storage, we’re going to discuss one who has. Novacoast is a SUSE customer who migrated from 3 legacy SAN environments to a single stretched Ceph cluster powered by SUSE Enterprise Storage. We’ll cover why the decision was made to migrate to Ceph, why SUSE Enterprise Storage was chosen, and lessons learned from the migration process. Future plans for the SUSE Enterprise Storage environment will also be discussed as will how the solution has enabled Novacoast to cut its storage budget in half.

TUT91467 – Docker + Ceph = Happiness

Thursday, Nov 10, 2:00 pm – 3:00 pm

John Walls, Novacoast

Dan Elder – Linux Services Manager, Novacoast

Docker and other container technologies offer an exciting path forward to modernize application architectures around microservices and stateless environments. This allows for all kinds of security benefits (which we’ll discuss), but your data still needs to live somewhere and be accessible everywhere. Ceph is an ideal software defined storage platform for keeping your valuable data accessible to your Docker environment particularly with technologies like CephFS now part of SUSE Enterprise Storage. It’s easier than ever to build out a robust storage environment for container-based workloads. In this session, we’ll show you how.

The real election is happening at SUSECON and we have the hottest ticket on the hill.  Choose the truly “green” party and register now.  Already registered?  Check out the entire session catalog and be sure to include my suggestions.  See you in Washington!

SAP HANA and SUSE—A new generation of high-performance solutions for your digital business

Thursday, 29 September, 2016

Today’s biggest technology trends are creating exciting new possibilities for businesses. But how do you take full advantage of the opportunities without making your IT infrastructure too complex or losing control of your operations and maintenance costs?

If you ask SAP, the answer is “Run Simple.” In other words, run a simple, digital enterprise through thoughtful reduction, using SAP HANA in combination with a simplified user interface (SAP Fiori). Moving to SAP HANA is also a move to Linux, which means that you move your infrastructure from a proprietary vendor business model to one that is open, scalable, and flexible enough to respond to unpredictable opportunities and challenges.

Watch the video and discover:

  • A new generation of high-performance solutions that are ready to power your digital business
  • Why more than 95% of SAP HANA customers choose SUSE for their business-critical applications
  • How SUSE enhances SAP HANA’s built-in high-availability and disaster recovery features
  • How to quickly leverage the flexible cloud services and infrastructures you need to support your growing business

Simple is hard. But simple is a competitive advantage.

Can I have one of those?

Wednesday, 21 September, 2016

If you are a retailer, you are considering or have implemented some of the new technologies that help track where people walk and what they pick up, register emotional responses or show payment transparencies.

All of this tracking produces a ton of data. And I mean a ton of data. Chances are, you are using SAP Business Warehouse as your data warehouse.

You should be interested in the latest benchmarking results that our partner, Hitachi Data Systems, achieved last month. On June 24, 2016, the company ran the SAP BW Advanced Mixed Load (BW-AML) Standard Application Benchmark and was certified by the SAP Benchmark Council at the 2,000,000,000 benchmark level. This performance was achieved using what Hitachi calls an “everyday 4-socket server” running SUSE Linux Enterprise Server, the operating system that powers over 90% of SAP HANA deployments.

So all that data retailers are collecting? It’s a piece of cake to process. According to Parisa Fathi, “With the power of the SAP HANA platform, customers can access data that they could not previously access economically and in real time and Hitachi’s UCP is just the high performance converged platform they need for this environment. Hitachi’s end-to-end SAP HANA offerings provide customers with a full range of platform solutions including managed services from oXya and consulting services from HCC.”

So I invite you to see how Hitachi Data Systems can help you with your data warehousing and processing needs. And if you want to read the rest of what Parisa Fathi wrote, check out her blog post.

Survival of the Fittest: Keep Your Business in the Game with DevOps

Wednesday, 21 September, 2016

In today’s fast-paced world, if you’re not first, you’re last.  Companies must be able to respond quickly to changes in both their internal and external environments, so adopting the latest and greatest technologies to enable them to do so is crucial to their survival.  Having the right culture and the right processes and tools for software and application development, as well as their delivery and maintenance, is not only necessary but essential for companies to differentiate themselves and succeed in every market.

So how does one pull this off?  In his article, “How DevOps Can Support Business Agility for All Companies to Stay Business-Relevant”, Dr. Thomas Di Giacomo (Chief Technology Officer at SUSE) sheds some light. Di Giacomo covers some of the leading tools for DevOps and shows how the appropriate SUSE tools can support each DevOps phase, enabling continuous integration and continuous deployment.

Whether your company is already a DevOps master or the concept is completely foreign, business agility will only become more important, and the tools that help improve agility will become even more advanced. So don’t just keep up; get ahead of the competition and ensure your company stays business-relevant.  Check out the article to learn more.

When it’s Business Critical, Make the Smart Move!

Wednesday, 14 September, 2016

As IT and business leaders are continually driven to “transform the business”, they are often asked to do this with the same or even smaller budgets.  This challenge means that they must find a way to reduce costs for their existing infrastructure so that they can free up resources to invest in new technology – improving their ability to respond to changing business demands. With the high cost of proprietary platforms and software renewals, one of the first places these leaders look to reduce costs and improve their agility is with open source.

Application migration to Linux should be part of the continuous improvement of your IT infrastructure, technologies and processes.  SUSE delivers industry-standard Linux solutions that give your IT organization the reliability, scalability, availability and security you need for the mission-critical databases and applications, helping you to:

  • Slash costs and accelerate ROI by consolidating applications and workloads on SUSE, letting you leverage commodity or open source platforms
  • Reduce complexity and increase uptime by consolidating databases and managing your entire Linux environment with a single management tool
  • Boost resource utilization and application efficiency by consolidating resources to better utilize floor space, reduce energy consumption, and save on software costs

Moving applications and databases to Linux helps you reduce costs and enables you to be more adaptable – quickly responding to the changing demands of the business.  And this we know is critical for a company’s survival!

To learn more about how SUSE can help you with application migration, please visit https://www.suse.com/appmigration

SUSE to unlock technical opportunities within your business

Thursday, 30 June, 2016

Getting your skills up to date, or even becoming an expert, is something we all fancy. The challenge, however, is that our time is often limited in this fast-paced world. Because I know what it’s like to have limited time available while trying to keep my skills sharp, I’d like to point out our SUSE
On-demand Training option. On-demand Training is a powerful tool that helps get the job done!

With the valuable content of our On-demand Training, you can focus on the topics you need, so you are sure to get the most benefit out of it. The online courses can be accessed from any web-enabled device and are modular and searchable. On-demand Training is available as a subscription that entitles a single user to access On-demand Training for a 12-month period from the date of purchase.

Benefits overview:

  • Value: With flexible and affordable subscription plans, On-demand Training delivers high-quality training content at a low cost.
  • Convenience: We have built a reputation for providing industry-leading training. Now, we’re making training available any time, from any location, on any web-enabled device.
  • Quality: On-demand Training offers a rich, interactive, and complete online learning environment that includes expert instructors and video demonstrations.
  • Content: With more than 100 current courses for administrators and technical experts, we provide a diverse and ever-growing library covering the spectrum of SUSE® solutions.

Interested in getting certified or improving your skills? Click here to explore your options or watch this Introductory video for On-demand Training.