Test Driving HANA on AWS [Webinar]

Thursday, 15 September, 2016

Pressure to compete at a faster pace and diminishing IT budgets, combined with the culture of ‘everything on-demand’, are driving new levels of digital transformation across businesses of all sizes. Larger organizations especially are feeling heightened expectations from customers and employees alike to compete and deliver innovation at high velocity. In a recent blog post, Freek Hemminga, Head of EMEA Marketing at SUSE, describes the challenge as an opportunity for organizations to ‘Excel by Accelerating.’ However, this type of opportunity doesn’t come around twice, so how can even the largest of SAP-centric organizations ensure they are navigating the move towards digital transformation in a way that minimizes risk?

Leveraging the cloud delivery model for economic efficiency, scalability, and accessibility has been a hot topic for years, and we are finally seeing these conversations turn into strategies and success stories. Divisions and business units from some of the largest multi-nationals are now identifying areas of the landscape that can be migrated to cloud, and many are even adopting a ‘cloud-first’ strategy for certain workloads, which enables IT to build and manage services instead of infrastructure. Building cloud competencies, however, is a challenging investment given the scarcity of these skillsets in the market today.

The ability to deliver forward-thinking solutions is something that our friends at Protera Technologies have been providing to SAP-centric organizations for almost two decades. Consider them a trusted navigator for adopting the public cloud and for migrating and running your SAP, SAP HANA and S/4HANA applications. From mid-sized multi-national chemical manufacturers to one of the world’s largest candy companies, they have been helping billion-dollar businesses adopt the cloud for years, and they are a key consulting partner for AWS in helping enterprises move SAP applications to its platform.

We’re excited to present a live session exploring these topics, and invite you to join us later this month in an interactive session to learn:

  • Where to start your cloud and HANA journey
  • A migration checklist that makes sense
  • Cloud and HANA deployment options for your SAP landscape

Register now for the “Migrating SAP Applications to HANA on AWS” live and on-demand webinar to hear Patrick Osterhaus, President and CTO of Protera Technologies, discuss cloud migration best practices for moving SAP applications to AWS and what organizations can achieve by moving to the cloud. David Rocha, Cloud Sales Engineer at SUSE, will be co-presenting and giving insight into why SUSE Linux Enterprise Server for SAP Applications is far and away the preferred choice for SAP applications on HANA, whether on-premise or in the cloud. Register here: http://bit.ly/2clTUHr

Date: September 29, 2016


3 Reasons Why the Future of Storage is Open Source and Cloud

Friday, 2 September, 2016

As predicted by a number of key analysts, the market has seen significant growth in software-defined storage during 2016 — and with solid reasoning. The capacity to pool storage across different arrays and applications is the latest wave in virtualization, and it is beginning to have the same impact on the cost and upgrade cycle for storage as it has had on servers.

While the growth of software-defined storage isn’t good news for traditional array vendors, it is very good news for IT teams, who stand to gain unlimited scale, reduced costs and non-proprietary management. Open source means the end of vendor lock-in and true cloud portability. The future of storage is open source and cloud. Here are three reasons why:

  1. What IT teams have learned to expect from virtualization: reduced cost, reduced vendor dependence. A decade or so ago, data centers looked very different from today. Each application had its own servers and storage working in a series of technological islands, with each island provisioned with enough processing power to run comfortably during peak demand. Inevitably, making sure systems ran comfortably at peak meant over-provisioning: the processing requirements had to be based on the worst-case scenario, effectively providing for seasonal peaks like Christmas all year round. For years IT added application after application, requiring server after server, rack upon rack, over an ever greater floor space, running up an ever increasing electricity bill from epic power and cooling costs. The waste heat was so great that some companies could use the data center to heat their buildings, while others placed data centers in the cold air of mountainsides to reduce cooling costs.

    With every server added, the amount of idle processing power grew until the unused potential became massive. The effect was somewhat like placing a dam across a mighty river: the tiniest trickle of water escapes in front while the energy potential building in the lake behind grows and grows. Virtualization opened the sluice gates, unleashing a torrent of processing power that could be used for new applications. This meant power on demand at the flick of a switch, fast provisioning, doing more with less, lower energy bills, a reduced data center footprint and the severing of the link between the software supplier and the hardware. Expensive proprietary servers were out; commodity servers differentiated only by price were in. In this world the best server is the cheapest because now they are all the same. Best of all, there’s a huge drop in the number of new physical servers required. And with all that unused potential available why add more?

    Virtualization became a “no brainer,” a technology with a business case so sound, so obvious, so clear that adoption was immediate and near universal. For the IT team, it means making better use of IT resources, reducing vendor lock-in and, above all, cost savings. Put the v-word in front of anything, and IT expects the vendor to show how they are going to be able to do more with less, for less. Years of experience and working best practice have led IT teams to make virtualization synonymous with cost reduction. Storage is no exception. Any vendor talking storage virtualization while asking for increased investment is going to have a very short conversation with their customers.

  2. Storage virtualization disrupts traditional vendor business models. While IT has reaped the benefits of better resource use and cost reductions, this has come at the expense of sales for vendors. As adoption of server virtualization took off, server sales plummeted, moving from a steady gain in volume and value every quarter to a catastrophic drop. In 2009, with the recession in full force (itself a significant driver of virtualization for cost savings) analysts at IDC recorded the first ever drop in server sales. All the big players, HP, IBM, Dell, Sun Microsystems (now Oracle) and Fujitsu, recorded huge decreases in sales, between 18.8 percent and 31.2 percent year over year. The impact was softer in high power CISC and RISC segments where it was tougher for IT teams to change vendors (e.g., with mission critical Oracle applications where licensing costs were tied to the number of processors in use or specific hardware), but especially severe in the lower end x86 market.

Changes followed suit. IBM exited the commodity market altogether, selling out to Lenovo, which, with a cheaper manufacturing base built on lower wages and controlled exchange rates, was in a better position to win. HP endured a merry-go-round of revolving-door CEOs and successive re-inventions, and Dell went private. This pattern of disruptive change is set to follow into the storage marketplace. When even the very largest suppliers suffer in this way, an expectation builds of disruptive, game-changing technology. IT buyers stop looking at brand in the same way. Where there used to be a perception of safe partners with long-term, safe product road maps and low risk, there is now an expectation that the older players are going to be challenged by new companies with new approaches and technologies. The famous 70s slogan “no-one ever got fired for buying IBM” doesn’t hold water when IBM shuts up shop and sells its commodity server business. IT buyers expect the same disruption in storage, and they are right to do so.

In this environment, the status quo for storage vendors cannot hold. The big players are nervously eyeing each other, waiting for the deciding moves in what adds up to a game of enterprise business poker with astronomical odds. The old proprietary business model is a busted flush, and they all know that sooner or later someone will call their bluff on price and locked-in software. A new player in the game, or even somebody already at the table, is going to bring the game into a new phase—or, as distinguished Gartner analyst and VP Joe Skorupa put it, “throw the first punch” in 2016.


  3. Cloud makes the case for open source compelling because data must be portable. Just at the point where server sales might have been expected to recover, IT teams discovered the cloud. Why bother maintaining an enormous hardware estate with all the hassle of patching and managing, upgrading, retiring and replacing if you can offload that workload cost-effectively onto a third party and so free up time to concentrate on more rewarding activity? For ambitious CIOs wanting to generate business advantage for the board, “keeping the lights on” in the data center is a distant priority. It’s no wonder more and more infrastructure is moving into the cloud, and with it, data. And with the data goes storage.

    IT teams who want to avoid being locked into cloud suppliers need to think carefully about how they exit one provider and move to another. Smart buyers need to play suppliers off against each other, compare prices and offerings and choose whichever is the best fit for current requirements, knowing that those requirements can change. A better offer can come along, and, if you are going to be in a position to seize on it, you must be able to exit your current supplier without a disruptive, costly and risky migration. If this goal is to be achieved, data must be portable.

    Smart storage buyers need data portability to have an exit plan, and open source provides it. Storage powered by Ceph is easily transferred across hundreds of different providers, including Amazon.

    Enter software defined storage from SUSE® powered by Ceph.

    Software defined storage separates the physical storage plane from the data storage logic (or control plane). This approach eliminates the need for proprietary hardware and can generate 50 percent cost savings compared to traditional arrays and appliances.

    SUSE Enterprise Storage is powered by Ceph technology, the most popular OpenStack distributed storage solution in the marketplace. It is extensively scalable from storage appliance to cost-effective cloud solution. It is portable across different OpenStack cloud providers.

    Ceph’s foundation is the Reliable Autonomic Distributed Object Store (RADOS), which provides your applications with object, block and file system storage in a single unified storage cluster. This makes Ceph flexible, highly reliable and easy for you to manage.

    Ceph’s RADOS provides extraordinary data storage scalability—thousands of client hosts or KVMs accessing petabytes to exabytes of data. Each one of your applications can use the object, block or file system interfaces to the same RADOS cluster simultaneously. This means your Ceph storage system serves as a flexible foundation for all of your data storage needs.
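
    For a concrete taste of that unified access, here is a minimal sketch using the stock Ceph command-line tools. The pool, object and image names are placeholders, and the placement-group count (64) is only illustrative:

    # Create a pool, then exercise the object and block interfaces of one cluster.
    ceph osd pool create mypool 64
    echo "hello ceph" > /tmp/hello.txt
    rados -p mypool put hello-object /tmp/hello.txt    # object interface: store
    rados -p mypool get hello-object /tmp/check.txt    # object interface: fetch
    rbd create mypool/myimage --size 1024              # block interface: 1 GB image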

    Ceph provides industry-leading storage functionality such as unified block and object, thin provisioning, erasure coding and cache tiering. What’s more, Ceph is self-healing and self-managing.
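
    As an example, the erasure coding and cache tiering just mentioned can be wired together from the command line. A hedged sketch, with pool names and placement-group counts chosen purely for illustration:

    # Erasure-coded pool for cold data, replicated pool acting as a writeback cache.
    ceph osd pool create cold-pool 64 64 erasure
    ceph osd pool create hot-pool 64
    ceph osd tier add cold-pool hot-pool
    ceph osd tier cache-mode hot-pool writeback
    ceph osd tier set-overlay cold-pool hot-pool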

    With Ceph’s powerful capabilities, SUSE Enterprise Storage is significantly less expensive to manage and administer than proprietary systems. It will enable you to effectively manage even a projected data growth rate of 40–50 percent in your organization without exceeding your established IT budget.


The Innovation Journey – 25 years of Linux – the future is here

Monday, 29 August, 2016

By Danny Rowark, SUSE

Unless you’re Matt Damon in “The Martian”, stranded on Mars, you will have noticed that the Olympics have just finished in Rio! You will also note that TeamGB collected their greatest medal haul for nearly 100 years, and once again celebrated in the velodrome for the cycling. At the Barcelona Olympics in 1992, Chris Boardman won our only cycling gold, for the track pursuit, the first cycling gold for 72 years. Dave Brailsford then joined TeamGB Cycling, introduced “The Aggregation of Marginal Gains” and took us all on a journey of innovation and success.

25 years ago (a year before Boardman won his gold), Linus Torvalds started his “Linux” project as “just a hobby”. Roughly 9 years later, Linux became “Enterprise Linux”. This was made possible by the large community of enthusiasts who came together to create something great and non-profit.


Working at IBM in the 90s, I witnessed just how Linux became “Enterprise” on the IBM mainframe. I never imagined that some 20 years later in the UK, I would be planning an event at Mercedes World in Weybridge with SAP (and partners) to take our customers on a journey around digital transformation and the software-defined data centre (SDDC).

Today Linux is present in nearly every enterprise data centre and there is still more to come: big data & analytics, SAP HANA, S/4HANA and cloud computing are just a few examples of the role Linux has to play in the current trend for digital transformation.

But why is SUSE Linux Enterprise Server present in so many data centres running mission-critical SAP applications? The answer goes back to SAP and SUSE’s conjoined history. In 1999 SAP founded the Linux Lab to develop an enterprise version of Linux capable of running mission-critical SAP applications. From the very beginning SUSE was a leading Linux partner. For 10 years now, SUSE Linux Enterprise Server has acted as the reference development architecture for SAP’s ongoing technological innovation through SAP NetWeaver and SAP HANA.

Throughout that decade, the SAP community has undergone a lively and fascinating journey. Topics such as consolidation, harmonisation, automation and virtualisation have all been successfully implemented. Based on the technology available today, the S/4HANA future builds on the software-defined data centre. Studies from various analysts reveal that the journey to the SDDC is in full swing. What is more, it is clear that the SDDC will provide tangible business benefits for SAP customers, including improved agility and reduced op-ex (operational expenditure).

A pain-free transition: cloud plus Linux with SUSE and SAP

SAP is ready: S/4HANA Cloud is the future. But how do enterprises ensure an easy shift to the cloud with SAP? There are several options. The good news for SAP HANA customers is that they only need to decide which sourcing model they want to use: private cloud (S/4HANA on-premise, based on Intel x86 or IBM Power) or private cloud as a managed service (SAP’s HANA Enterprise Cloud, or HEC).

For public cloud environments, S/4HANA public cloud can be hosted on the SAP HANA Cloud Platform (HCP), as well as Amazon Web Services (AWS) and Microsoft Azure. In future, even more providers such as Google or Telekom will be offering their services. For most real-world deployments, hybrid cloud is likely to be the most common approach. In this scenario, parts of the SAP landscape, like production, will remain in-house while others, such as testing, will move to the public cloud to benefit from the improved scalability and flexibility offered by public cloud environments.

No matter how you decide to run SAP HANA or S/4HANA Cloud, SUSE Linux Enterprise Server is everywhere. Not only is it the operating system for the SAP HANA Enterprise Cloud (private) and SAP HANA Cloud Platform (public), but it also works with AWS and Azure public clouds, running on x86 and IBM Power (on-premise).

From Linus Torvalds’ first post 25 years ago to thousands of SAP customers running mission-critical SAP HANA and SAP applications on SUSE (and continuous success in the velodrome), we’ve come a long way in the past quarter of a century. But there is much, much more to come! No matter what lies ahead, you can be sure you are well prepared for digital transformation with SUSE and SAP.

The Agile Penguinz – Blending DevOps

Monday, 8 August, 2016
The IBM LinuxONE Emperor and Rockhopper Penguins Go to Work

A lifetime ago, I was a VM developer anxious to make my mark on the programming world by producing leading-edge applications that I thought surely everyone would scramble to use and share. At the time, it was imperative to establish a good rapport with IT in order to complete the deployment cycle and get anyone’s attention. Even back then, there was a constant battle between my team – with rapid development of apps – and operations (aka data center IT). There were definitely different mindsets – with app development we dealt with dynamic business processes and rapid change-driven delivery, while IT operations focused on reliability, mission-critical workloads, and intense fiscal scrutiny. And it was frustrating to fight the notion that change is complex, slow and expensive.

Of course, that was a long time ago, but we still see these internal battles occurring in businesses today across industries, as the struggles to meet changing business demands and reduce time to market continue.  The common tools provided by SUSE help transform those battlegrounds into collaborative environments needed to move forward.

One year ago this month, IBM delivered two “Linux-your-way” mainframes under the IBM LinuxONE brand. These new offerings provide a great platform for SUSE to deliver a stack of tools and applications targeted at blending the different mindsets of Development and Operations and allowing businesses to:

  • Reduce time to market by providing quick and easy access to developers – for building prototypes and experimenting with new technologies, so they can build and deliver the right product through fast experimentation
  • Improve efficiency of the DevOps organization – through the automation and orchestration of IT processes to ensure continuous integration/continuous delivery repeatability, consistency and velocity to corporate IT and security standards, treating infrastructure as code
  • Meet changing business demands – by implementing and leveraging new architectures (e.g. cloud, storage) and open source technologies (e.g. containers) enabling delivery of IT services on demand in support of your DevOps process

The IBM LinuxONE with SUSE Linux platform provides a compelling collaborative environment for blending DevOps with the right tools for both rapid, dynamic development and reliable, secure life cycle management.  If I had the same flexibility and collaboration back in my days as a VM developer, I am sure that things would have been much different – and truly agile, before the term “agile development” was ever invented.

Learn more about SUSE Linux Enterprise Server for z Systems and LinuxONE at https://www.suse.com/products/systemz

Keep in touch @JeffReserNC

5 Keys to Running Workloads Resiliently with Rancher and Docker – Part 1

Thursday, 4 August, 2016

Containers and orchestration frameworks like Rancher will soon allow every organization to have access to efficient cluster management. This brave new world frees operations from managing application configuration and allows development to focus on writing code; containers abstract complex dependency requirements, which enables ops to deploy immutable containerized applications and allows devs a consistent runtime for their code. If the benefits are so clear, then why do companies with existing infrastructure practices not switch? One of the key issues is risk: the risk of new unknowns brought by an untested technology, the risk of inexperience operating a new stack, and the risk of downtime impacting the brand. Planning for risks and demonstrating that the ops team can maintain a resilient workload whilst moving into a containerized world is the key social aspect of a container migration project. Especially since, when done correctly, Docker and Rancher provide a solid framework for quickly iterating on infrastructure improvements, such as [Rancher catalogs](https://docs.rancher.com/rancher/latest/en/catalog/) for quickly spinning up popular distributed applications like ElasticSearch.

In regard to risk management, we will look into identifying the five keys to running a resilient workload on Rancher and Docker. The topics that will be covered are as follows:

  • Running Rancher in HA Mode (covered in this post)
  • Using Service Load Balancers in Rancher
  • Setting up Rancher service health checks and monitoring
  • Providing developers with their own Rancher setup
  • Discussing Convoy for data resiliency

I had originally hoped to perform experiments on a Rancher cluster built on a laptop, using Docker Machine with a Rancher Server and various Rancher Agents on Raspberry Pis. The problem is that most Docker images are made for Intel-based CPUs, so nothing works properly on the Pi’s ARM processors. Instead, I will use AWS directly for our experiments with resilient Rancher clusters. With our initial setup, we have 1 Rancher Server and 1 Agent. Let’s deploy a simple multiple-container application.

[Image: Rancher HA Experiment Diagram]

The above diagram illustrates the setup I am going to use to experiment with Rancher. I chose AWS because I am familiar with the service, but you can choose any other provider for setting up Rancher according to the Quick Start Guide.

[Image: Rancher Machine Creation]

Let’s test our stack with the WordPress compose described in the Rancher Quick Start instructions.

[Image: Rancher HA]

So now our application is up and running. But what happens if the Rancher Server malfunctions? Or a network issue occurs? What happens to our application? Will it still continue serving requests?

[Image: WordPress up]

For this experiment, I will perform the following and document the results.

  • Cutting the Internet from Rancher Agent to Rancher Server
  • Stopping the Rancher Server Container
  • Peeking under the hood of the Rancher Server Container

Afterwards we will address each of these issues, and then look at
Rancher HA as a means of addressing these risks.

Cutting the Internet from Rancher Agent to Rancher Server

So let’s go onto AWS and block all access to the Rancher Server from my
Rancher Agents.

  • Block access from the Rancher Agents to the Rancher Server (see the sketch below)
  • Note down what happens
  • Kill a few WordPress containers
  • Re-instantiate the connection
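
A hedged sketch of that first step with the AWS CLI, assuming a hypothetical security group in front of the Rancher Server and the quick-start port 8080 (your group ID, port and CIDR will differ):

# Sever the agents' path to the Rancher Server by revoking the ingress rule.
aws ec2 revoke-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 8080 --cidr 0.0.0.0/0

# Re-instantiate the connection later by restoring the same rule.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 8080 --cidr 0.0.0.0/0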

Observations:

Firstly, after a few seconds our Rancher hosts end up in a reconnecting state.

[Image: Turn off Rancher Server]

Browsing to my WordPress URL, I can still access all my sites properly. There is no service outage, as the containers are still running on the remote hosts. The IPSec tunnel between my two agents is still established, thus allowing my lone WordPress container to still connect to the DB. Now let’s kill a WordPress container and see what happens. Since I can’t access my Rancher Agents from the UI, I will be SSHing into the agent hosts to run Docker commands. (Instructions for SSHing into Rancher-created hosts can be found in the Rancher documentation.)

[Image: Turning off Rancher Server]

The WordPress container does not get restarted. This is troublesome; we will need our Rancher Server back online. Let’s re-establish the network connection and see if the Rancher Server notices that one of our WordPress services is down. After a few moments, our Rancher Server re-establishes connection with the agents and restarts the WordPress container. Excellent. So the takeaway here is that Rancher Server can handle intermittent connection issues, reconnect to the agents, and continue on as usual. However, for reliable uptime of our containers we would need multiple instances of Rancher Server on different hosts for resiliency against networking issues in the data center. Now, what would happen if the Rancher Server dies? Would we lose all of our ability to manage our hosts after it comes back? Let’s find out!

Killing the Rancher Server

In this second experiment, I will go into the Rancher Server host and manually terminate the process. Generally, a failure will result in the Docker process restarting due to --restart=always being set. But let’s assume that either your host ran out of disk space or otherwise borked itself.
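
For reference, the quick-start server is typically launched with a restart policy along these lines (a sketch based on the Rancher quick start of the time; your exact flags may differ):

# --restart=always makes Docker revive the process after a crash,
# but not after an explicit `docker stop`.
sudo docker run -d --restart=always -p 8080:8080 rancher/server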

Observations:

Let’s simulate catastrophic failure and nuke our Rancher container.

sudo docker stop rancher-server

As with the network experiment, our WordPress applications still run on the agents and serve traffic normally. The Rancher UI and any semblance of control are now gone. We don’t like this world, so we will start the rancher-server back up.

sudo docker start rancher-server

After starting up again, the Rancher server picks up where it left off. Wow, that is cool; how does this magic work?

Peeking under the hood of the Rancher Server Container

So how does the Rancher Server operate? Let’s take a brief tour into the inner workings of the Rancher server container to get a sense of what makes it tick, taking a look at the Rancher Server Docker build file.

[Image: Rancher Server Components]

# Dockerfile contents
FROM ...
...
...
CMD ["/usr/bin/s6-svscan", "/service"]

What is s6-svscan? It is a supervisor process that keeps a process running based on commands found in files in a folder; the key files are named run, down, and finish. If we look inside the service directory, we can see that the container will install dependencies and use s6-svscan to start up two services: the Cattle service, which is the core Rancher scheduler, and a MySQL instance.

[Image: Rancher Server Components - Service]

Inside our container, the following services are being run.

PID TTY      STAT   TIME COMMAND
    1 ?        Ss     0:00 /usr/bin/s6-svscan /service
    7 ?        S      0:00 s6-supervise cattle
    8 ?        S      0:00 s6-supervise mysql
    9 ?        Ssl    0:57 java -Xms128m -Xmx1g -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/cattle/logs -Dlogback.bootstrap.level=WARN -cp /usr/share/cattle/1792f92ccdd6495127a28e16a685da7
  135 ?        Sl     0:01 websocket-proxy
  141 ?        Sl     0:00 rancher-catalog-service -catalogUrl library=https://github.com/rancher/rancher-catalog.git,community=https://github.com/rancher/community-catalog.git -refreshInterval 300
  142 ?        Sl     0:00 rancher-compose-executor
  143 ?        Sl     0:00 go-machine-service
 1517 ?        Ss     0:00 bash
 1537 ?        R+     0:00 ps x

We see that our Rancher brain is a Java application named Cattle, which uses a MySQL database embedded within its container to store state. This is quite convenient, but it would seem that we have found the single point of failure in our quick-start setup. All the state for our cluster lives in one MySQL instance which no one knows exists. What happens if I nuke some data files?

Corrupting the MySQL Store

Inside my Rancher server container I executed MySQL commands. There is a certain rush of adrenaline as you execute commands you know will break everything.

docker exec -it rancher-server bash
$ mysql
mysql> use cattle;
mysql> SET FOREIGN_KEY_CHECKS = 0;
mysql> truncate service;
mysql> truncate network;
Lo and behold, my Rancher service tracking is broken; even when I kill my WordPress containers, they do not come back up, because Rancher no longer remembers them.

[Image: Loss of data - 1]

Since I also truncated the network setup tables, my WordPress application no longer knows how to route to its DB.

[Image: Loss of data - 2]

Clearly, to have confidence in running Rancher in production, we need a way to protect our Rancher Server’s data integrity. This is where Rancher HA comes in.

Rancher HA Setup Process

The first order of business is to secure the cluster data. I chose AWS RDS for this because it is what I am familiar with; you can manage your own MySQL or choose another managed provider. We will proceed assuming we have a trusted MySQL management system with backups and monitoring. We follow the HA setup steps documented in Rancher:

[Image: Rancher HA Setup]

As per the setup guide, we create an AWS RDS instance to be our data store. Once we have our database’s public endpoint, the next step is to dump your current Rancher installation’s data and export it to the new database.

[Image: High Availability Setup]
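
A minimal sketch of that dump-and-export step, assuming the embedded MySQL inside the quick-start container is reachable without a password and using a placeholder RDS endpoint (both are assumptions; adjust credentials to your setup):

# Dump the embedded "cattle" database from the quick-start container...
docker exec rancher-server mysqldump --databases cattle > cattle-dump.sql

# ...then import it into the new external MySQL/RDS instance.
mysql -h <your-rds-endpoint> -u cattle -p < cattle-dump.sql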
For this I created an RDS instance with a public IP address. For your first Rancher HA setup, I recommend just making the database public, then securing it later with VPC rules. Since Rancher provides an easy way to dump the state, you can move it to a secured database at a later time. Next we will set up our Rancher Server to use the new database.

[Image: Rancher HA Setup - Database]
After Rancher detects that it is using an external database, it will open up two more options as part of setting up HA mode. (At this point, we have already solved our single point of failure, but for larger scale deployments, we need to go bigger to lower the risk of failure.)

[Image: Rancher HA Setup - Config]
Oh no, decisions! But no worries, let’s go through each of these options and their implications.

Cluster size: notice how everything is odd? Behind the scenes, Rancher HA sets up a ZooKeeper quorum to keep locks in sync (more on this in the appendix). ZooKeeper recommends odd numbers because an even number of servers does not provide additional fault tolerance. Let’s pick 3 hosts to test out the feature, as it is a middle ground between usefulness and ease of setup.

Host registration URL: this section asks us to provide the Fully Qualified Domain Name (FQDN) of our Rancher HA cluster. The instructions recommend an external load balancer or a DNS record that round robins between the 3 hosts.

[Image: Rancher HA Setup - DNS]

The options would be an SRV record on your DNS provider to balance between the 3 hosts; an ELB on AWS with the 3 Rancher EC2 instances attached; or just a plain old DNS record pointing to the 3 hosts. I chose the DNS record for my HA setup as it is the simplest to set up and debug. Now any time I hit https://rancher.example.com, my DNS hosting provider will round robin requests between the 3 Rancher hosts that I defined above.

SSL certificate: the last item on the list. If you have your own SSL certificate for your domain, you can use it here. Otherwise Rancher will provide a self-signed certificate instead.

Once all options are filled in, Rancher will update fields in its database to prepare for HA setup. You will then be prompted to download a rancher-ha.sh script.
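
Before running the script, it is worth sanity-checking the round-robin record; a quick sketch, with rancher.example.com standing in for your own FQDN:

# Should print the addresses of all 3 Rancher hosts behind the record.
dig +short rancher.example.com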

WARNING Be sure to kill the Rancher container you used to generate the
rancher-ha.sh script. It will be using ports that are needed by the
Rancher-HA container that will be spun up by the script.

Next up, copy the rancher-ha.sh script onto each of the participating instances in the cluster and execute it on each node to set up HA.

Caveat! Docker v1.10.3 is required at the time of writing. Newer versions of Docker are currently unsupported by the rancher-ha.sh script.

You can provision the correct Docker version on your hosts with the
following commands:

#!/bin/bash
apt-get install -y -q apt-transport-https ca-certificates
apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" > /etc/apt/sources.list.d/docker.list
apt-get update
apt-get install -y -q docker-engine=1.10.3-0~trusty

# run the command below to show all available versions
# apt-cache showpkg docker-engine

After Docker, we need to make sure that our instances can talk to each other, so check that the ports listed on the Rancher multi-node requirements page are open.

Advice! For your first test setup, I recommend opening all ports to
avoid networking-related blockers.

Once you have the correct prerequisites, you can run the rancher-ha.sh
script on each participating host. You will see the following output.

...
ed5d8e75b7be: Pull complete
ed5d8e75b7be: Pull complete
7ebc9fcbf163: Pull complete
7ebc9fcbf163: Pull complete
ffe47ea37862: Pull complete
ffe47ea37862: Pull complete
b320962f9dbe: Pull complete
b320962f9dbe: Pull complete
Digest: sha256:aff7c52e52a80188729c860736332ef8c00d028a88ee0eac24c85015cb0e26a7
Status: Downloaded newer image for rancher/server:latest
Started container rancher-ha c41f0fb7c356a242c7fbdd61d196095c358e7ca84b19a66ea33416ef77d98511
Run the below to see the logs

docker logs -f rancher-ha

This is where the rancher-ha.sh script creates additional images that support the HA feature. Due to the addition of components to the Rancher Server, it is recommended to run hosts with at least 4 GB of memory. A docker ps of what is running after executing the rancher-ha.sh script is shown here.

[Image: Rancher HA Setup - Enabled]

Common Problems and Solutions

You may see some connection errors, so try to run the script on all 3
hosts first. You should see logs showing members being added to the
Rancher HA Cluster.

time="2016-07-22T04:13:22Z" level=info msg="Cluster changed, index=0, members=[172.30.0.209, 172.30.0.111, ]" component=service
...
time="2016-07-22T04:13:34Z" level=info msg="Cluster changed, index=3, members=[172.30.0.209, 172.30.0.111, 172.30.0.69]" component=service

Sometimes you will see a stream of the following error lines.

time="2016-07-23T14:37:02Z" level=info msg="Waiting for server to be available" component=cert
time="2016-07-23T14:37:02Z" level=info msg="Can not launch agent right now: Server not available at http://172.17.0.1:18080/ping:" component=service

This is the top-level symptom of many issues. Here are some other issues I have identified by going through the GitHub issues list and various forum posts:

  • Security group network issues. Sometimes your nodes are binding on the wrong IP, so you will want to coerce Rancher to broadcast the correct IP.
  • ZooKeeper not being up. It is possible that the ZooKeeper Docker container is not able to communicate with the other nodes, so you will want to verify ZooKeeper is healthy and compare its logs against the documented sample output.
  • Leftover files in the /var/lib/rancher/state directory from a previous HA attempt. If you ran rancher-ha.sh multiple times, you may need to clean up old state files.
  • Broken Rancher HA setup state from multiple reattempts. Drop the database and try again. There is a previous GitHub issue with detailed steps for surfacing the underlying problem.
  • Insufficient resources on the machine. Since Rancher HA runs multiple Java processes on the machine, you will want to have at least 4 GB of memory. While testing with a t2.micro instance with 1 GB, the instance became inaccessible due to being memory constrained. Another issue is that your database host needs to support 50 connections per HA node. You will see messages like the following when you attempt to spin up additional nodes:

time="2016-07-25T11:01:02Z" level=fatal msg="Failed to create manager" err="Error 1040: Too many connections"

  • Mismatched rancher/server versions. By default the rancher-ha.sh script pulls in rancher/server:latest, but this kicked me in the back: during my setup, Rancher pushed out rancher/server:1.1.2, so I had two hosts running rancher/server:1.1.1 and a third host running rancher/server:1.1.2. This caused quite a headache, but a good takeaway is to always specify the version of rancher/server when running the rancher-ha.sh script on subsequent hosts, e.g. ./rancher-ha.sh rancher/server:<version>.
  • Docker virtual network bridge returning the wrong IP. This was the issue I ran into: my HA setup was trying to check agent health on the wrong Docker interface.

curl localhost:18080/ping
> pong
curl http://172.17.0.1:18080/ping
> curl: (7) Failed to connect to 172.17.0.1 port 18080: Connection refused

The error line is raised in rancher/cluster-manager/service, and the offending call is in rancher/cluster-manager/docker. What the code does is locate the Docker bridge and attempt to ping port :18080 on it. Since my Docker bridge was actually set up on 172.17.42.1, this would always fail. To resolve it, I re-instantiated the host, because the multiple Docker installations seemed to have caused the wrong bridge IP to be fetched. After restarting the instance and setting the correct Docker bridge, I now see the expected log lines for HA.
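
If rebuilding the host is not an option, a hedged alternative (untested in this setup) is to pin the bridge address with the Docker daemon's --bip flag so it matches what cluster-manager probes. On the Ubuntu trusty hosts provisioned earlier, that looks roughly like:

# Assumption: Docker 1.10.x on Ubuntu trusty reads DOCKER_OPTS from this file.
echo 'DOCKER_OPTS="--bip=172.17.0.1/16"' | sudo tee -a /etc/default/docker
sudo service docker restart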

After Setting Up HA

time="2016-07-24T19:51:53Z" level=info msg="Waiting for 3 host(s) to be active" component=cert

Excellent. With one node up and ready, repeat the procedure for the rest
of the hosts. After 3 hosts are up, you should be able to access the
Rancher UI on the URL you specified for step 3 of the setup.

time="2016-07-24T20:00:11Z" level=info msg="[0/10] [zookeeper]: Starting "
time="2016-07-24T20:00:12Z" level=info msg="[1/10] [zookeeper]: Started "
time="2016-07-24T20:00:12Z" level=info msg="[1/10] [tunnel]: Starting "
time="2016-07-24T20:00:13Z" level=info msg="[2/10] [tunnel]: Started "
time="2016-07-24T20:00:13Z" level=info msg="[2/10] [redis]: Starting "
time="2016-07-24T20:00:14Z" level=info msg="[3/10] [redis]: Started "
time="2016-07-24T20:00:14Z" level=info msg="[3/10] [cattle]: Starting "
time="2016-07-24T20:00:15Z" level=info msg="[4/10] [cattle]: Started "
time="2016-07-24T20:00:15Z" level=info msg="[4/10] [go-machine-service]: Starting "
time="2016-07-24T20:00:15Z" level=info msg="[4/10] [websocket-proxy]: Starting "
time="2016-07-24T20:00:15Z" level=info msg="[4/10] [rancher-compose-executor]: Starting "
time="2016-07-24T20:00:15Z" level=info msg="[4/10] [websocket-proxy-ssl]: Starting "
time="2016-07-24T20:00:16Z" level=info msg="[5/10] [websocket-proxy]: Started "
time="2016-07-24T20:00:16Z" level=info msg="[5/10] [load-balancer]: Starting "
time="2016-07-24T20:00:16Z" level=info msg="[6/10] [rancher-compose-executor]: Started "
time="2016-07-24T20:00:16Z" level=info msg="[7/10] [go-machine-service]: Started "
time="2016-07-24T20:00:16Z" level=info msg="[8/10] [websocket-proxy-ssl]: Started "
time="2016-07-24T20:00:16Z" level=info msg="[8/10] [load-balancer-swarm]: Starting "
time="2016-07-24T20:00:17Z" level=info msg="[9/10] [load-balancer-swarm]: Started "
time="2016-07-24T20:00:18Z" level=info msg="[10/10] [load-balancer]: Started "
time="2016-07-24T20:00:18Z" level=info msg="Done launching management stack" component=service
time="2016-07-24T20:00:18Z" level=info msg="You can access the site at https://" component=service

[Image: Rancher HA Setup - Enabled]

To get around issues regarding the self-signed HTTPS certificate, you will need to add it to your trusted certificates. After waiting and fixing up resource constraints on the DB, I then see all 3 hosts up and running.

[Image: Rancher HA Setup - Done]

Conclusion

Wow, that was a lot more involved than I originally thought. This is why scalable distributed systems are a realm of PhD study. After resolving all the failure points, I think setting up and getting to know Rancher HA is a great starting point for getting hands-on with state-of-the-art distributed systems. I will eventually script this out into Ansible provisioning to make provisioning Rancher HA a trivial task. Stay tuned!

Appendix

For any distributed system, there is an explicit way to manage state and changes. Multiple servers need a process to coordinate updates. Rancher’s management process works by keeping state and desired state in the database, then emitting events to be handled by processing entities to realize the desired state. While an event is being processed, there is a lock on it, and it is up to the processing entity to update the state in the database. In a single-server setup, all of the coordination happens in memory on the host. Once you go to a multi-server setup, additional components like ZooKeeper and Redis are needed.

Nick Ma is an Infrastructure Engineer who blogs about Rancher and Open Source. You can visit Nick’s blog, CodeSheppard.com, to catch up on practical guides for keeping your services sane and reliable with open-source solutions.


SUSE Linux Enterprise and OpenStack: Have it All and Have it Now

Wednesday, 3 August, 2016

I want it all and I want it now! Not only is that a great rock anthem from the 1980s, but it also reflects the demands of the modern consumer.  Living in today’s mobile-centric, social media driven, internet enabled age, consumers expect fast and convenient access to services and products on their terms.  The business world simply has to keep up and seamlessly adapt in order to compete, thrive and even survive.  That’s not an easy task if you are starting with an existing traditional IT environment.

China UnionPay, headquartered in Shanghai, faced exactly this scenario and found the ideal solution by turning to OpenStack – running on top of SUSE Linux Enterprise Server.

China UnionPay is one of China’s leading bankcard/credit card providers who has experienced explosive growth in recent years. They needed a “flexible, reliable and secure” cloud platform to provide access to online resources for tens of thousands of new customers simultaneously.  They also needed to launch new innovative banking services faster than their competition to seize new business opportunities. Sound like a familiar challenge?

Thanks to OpenStack, built on the bedrock of SUSE Linux Enterprise Server, China UnionPay now has an agile, secure, stable and industry-compliant private cloud platform that supports consistent business growth. In their own words, “OpenStack is a very mature platform which we can count on the support of a large global community” and this new cloud platform offers the high availability needed to support critical business requirements.

What does this all look like in practice? China UnionPay currently runs some 4,000 virtual servers on just over 1,000 physical machines and has started migrating key business applications to the cloud platform, including an important online transaction system.

And the results?  How does the ability to design, develop and launch new banking solutions three times faster than in the past, along with a saving of ten percent in hardware costs sound?

Let me leave you with China UnionPay’s own conclusion: “The OpenStack and SUSE project has been a great success. The support we receive from SUSE is tailored to our specific requirements, giving us peace of mind that we can keep leveraging next-generation technologies without worrying about technical hurdles.”

Uptime Matters, for Cars and Data Centers

Tuesday, 2 August, 2016


I have a 2003 Subaru Outback. I love the car. My wife loves the car. Living in Utah, we’ve used it to hit up the national parks, camping in the mountains, deserts and forests. This car has fostered many happy memories for our family.

But do you know what I really hate? Car downtime. My wife and I spent some serious time over the last few weeks debating and analyzing what we want to do with this car. We just passed 200k miles on our odometer and we are finding more and more that we are putting it in the shop for repairs. This car that we love is not giving us the uptime we need and is becoming costly and interrupting our schedule. We don’t dare take it on longer trips anymore, renting a car in some cases and incurring more costs.

Luckily, we aren’t using our cars to run a business. I’m not an Uber driver, or a courier, or even a pizza delivery man, but if I was, I would guarantee you I would want a more dependable car. Take that to the business level, and they work constantly to keep their fleet of vehicles maintained and running smoothly.

I think you can see where I am going with this. If you are running mission-critical workloads, you want them running on both hardware and software that will allow you the most uptime possible, while minimizing the downtime needed for maintenance. If your systems are going down regularly, and your hardware is unreliable, then you are experiencing that pain that I’m currently experiencing with my car. And not everyone has a budget for a luxury option.

There’s three ways SUSE suggests you can limit your downtime:

  • Prevent Hardware Downtime
  • Maximize Service Availability
  • Minimize Human Mistakes

I think I’ve got most of my human driving mistakes handled (I haven’t crashed it lately), but it is important to consider the hardware and software, especially with a car. In the data center, it becomes a true question of business cost and ROI versus reliability when it’s time to purchase new hardware and software, and to put systems in place to make sure you aren’t making mistakes or taking servers down for updates or patches when you don’t actually need to.

Any and all downtime costs you. It shuts down the production line, aborts transactions or even brings your core business to a standstill, impacting your revenues and reputation. Take a look at these six customer case studies and follow our three ways to limit downtime with SUSE solutions, and I’ll look into getting the reliable vehicle my family needs for the future.

Enterprise Storage: How to Manage the Inevitable

Monday, 1 August, 2016

The vast majority of IT departments are experiencing enormous increases in the demand for storage and computing power. Few ― if any ― will have the budget to meet rising requirements that continue to outpace the growth in their budgets. This raises a difficult question for IT teams everywhere: how long is the usual approach of managing the install, upgrade, retire and replace cycle going to work?

By now, it should be obvious to all that the strategy that built the data center of the past isn’t going to deliver the data center of the future. New models and approaches are being embraced by the hyperscalers, based on open source software and commodity hardware. Cloud, we are told, has made IT a utility―as simple and as easy to manage as your gas bill. Yet, while we all know there are many advantages to paying by OpEx over CapEx, over time cloud can mean paying more ― just in smaller instalments.

As the changes come through, there is considerable risk for IT teams, who will need to best maximize their existing assets while frugally spending on future ones, by wisely navigating the gap between hype and reality.

In this foggy world, some things are crystal clear. Here are three:

  1. Outside of the “hyperscalers,” hardly anyone will be able to afford to own and host all their compute power on premise. In the future a proportion of your compute power is going to be in public clouds, one way or another, sooner or later.
  2. Storage growth is massive and unsustainable. You are going to need to find a better, cheaper way of doing it, and that way is going to need to work in harmony with your compute decisions.
  3. Vendor lock-in is never a good idea. In a world where business models change, discovering you’re locked into a cloud provider might well be one of the most unpleasant discoveries of your life.

There’s a conclusion that many in the industry have arrived at: the growth of software-defined storage (SDS), loosely defined as a method of storage that is organized in software, regardless of hardware or location.

It’s well documented that SDS is the inevitable destination for much of your future storage needs.

  • In 2016, server-based storage solutions will lower storage hardware costs by 50 percent or more
  • By 2019, 70 percent of existing storage array products will also be available as software only versions
  • By 2020, between 70 and 80 percent of unstructured data will be held on lower-cost storage managed by SDS environments

So the question instead shifts to how SDS must be implemented and the types of needs it can serve for your data center. Here are a few common data center concerns, and how SDS can be deployed correctly to fit each need.

Agility

Business is moving too fast to rely on storage architectures that are proprietary, overpriced, and inflexible. At the same time, IT is also challenged with organizing their storage assets as a bridge between new and old, with the same level of performance across locations and classes.

SDS should deliver storage functionality comparable to mid- and high-end storage products at a fraction of the cost. It should be an open, self-healing, self-managing storage solution that scales from a terabyte to a multi-petabyte storage network. Coupling SDS with commodity off-the-shelf storage building blocks results in amazingly cost-efficient storage. Truly unlimited scalability enables enterprise IT organizations to deliver the agility businesses demand by non-disruptively adding capacity at the cost they want to pay. Intelligent, self-healing, self-managing distributed storage enables storage administrators to minimize the amount of time spent managing storage. This enables organizations to support more capacity per storage administrator or spend more time focused on delivering future innovations to the business.

Flexibility

Flexibility is one of the core tenets of SDS, as the increased ability to shift storage across locations and hardware leads to its agility and cost benefits.

But flexibility cannot be obtained without true interoperability. It’s possible that your new SDS provider could have a long-term roadmap towards standardizing other components of your IT infrastructure on the same vendor, which limits many of the benefits of SDS in the first place.

In order to achieve maximum flexibility with your SDS project, make sure that you evaluate solutions that play well with others. Examine a vendor’s alliances and alternate IT solutions, and evaluate whether open source or proprietary plays into this alignment.

Decoupling Hardware from Software

Software-defined storage (SDS) is an approach to data storage in which the programming that controls storage-related tasks is decoupled from the physical storage hardware.

Software-defined storage puts the emphasis on storage services such as deduplication or replication, instead of storage hardware. Without the constraints of a physical system, a storage resource can be used more efficiently and its administration can be simplified through automated policy-based management. For example, a storage administrator can use service levels when deciding how to provision storage and not even have to think about hardware attributes. Storage can, in effect, become a shared pool that runs on commodity hardware.

Software-defined storage is part of a larger industry trend that includes software-defined networking (SDN) and software-defined data centers (SDDC). As is the case with SDN, software-defined storage enables flexible management at a much more granular level through programming.

Reducing Costs

The shift towards SDS is largely driven by a substantial reduction in cost without compromising (and even improving) a previously commoditized technology. By changing the way storage is consumed and managed, SDS turns your capital expenses on storage hardware into bills that you pay as you use. With future storage demands growing and unpredictable, having technology that’s handled as an operating expense can lead to a huge reduction in costs. SDS also saves on hardware maintenance and support costs, as storage resources are no longer tied to hardware.

However, cost advantages don’t start and end with the shift to SDS alone. There are various solutions in the market that maximize a potential IT investment. For example, the solutions available within the open source community provide data center managers with additional cost savings compared to their proprietary counterparts.

There’s no question that the storage industry is in the middle of an inflection point in its adoption. With nearly overwhelming advantages in cost, flexibility and performance, SDS is a solution any storage-conscious data center manager will evaluate in the next several years.

But adopting SDS alone isn’t enough. Instead, make sure that you know the goals for your storage project, and evaluate vendors that align with your IT objectives. SDS is the future – make sure that you’re on its leading edge.


Matador Deploy – Making Deployment on Rancher Fun

Tuesday, 26 July, 2016

By Timon Sotiropoulos, software engineer at SEED. SEED is a leading product development company that builds design-driven web and mobile applications for startup founders and enterprise innovators.

Deployment days can be quite confronting and scary for new developers. We realized through onboarding some of our developers and introducing them to the world of DevOps that the complexity and stress of deployment days could take a toll on morale and productivity, with everyone always half dreading a deployment on the upcoming calendar. After learning the no. 1 rule of “never deploy on a Friday” the hard way, the team at SEED decided there had to be a better way than the traditional “pull down from Git repository and deploy to a server” method.

The Road to Matador

This journey started with a trip down the lane of the hottest containerisation framework in the business, the flying whale Docker. For those who haven’t heard of it, Docker essentially allows you to create a blueprint for your application inside its own contained virtual machine image. What this means is that you can create a working version of your app on any server that has Docker installed and be confident that everything will work as expected.

The next link in the chain we discovered was Rancher, an excellent tool for automatically connecting and configuring Docker containers. Rancher allows you to break your application up into multiple, separate components the same way you would break up a program into different classes, allowing single responsibility as well as the ability to scale certain services up and down as required.

This process and procedure became second nature to us, but it was easy to mess things up. It was easy to accidentally update the wrong Rancher environment, and as we planned on moving to a more continuous development lifecycle, the manual updating of the Rancher environments had to stop. Our long-term plan for our continuous deployment process is to get to a point where a developer can push their code to GitHub, build a copy of the Docker container, tag it with that commit ID, and then push that code to their desired Rancher environment. All the separate parts work independently, but we are working towards integrating all of these tools into a fully-fledged continuous deployment service. The first step is Matador Deploy.

Matador is a tool we have created to handle the creation and building of our Docker containers and deploying them to the Rancher environments. The complication here is that for each of our projects, we would have two or three separate environments, one each for Production, Staging and Development. To do this, we would have to duplicate all of our DevOps configurations and scripts for each of our environments and then build the application using a Makefile that set specific variables for each of the Rancher Compose commands. However, we found that these Makefiles were simply duplicating themselves across all of our projects, and we knew there had to be a better way.

So what does Matador do?

The first thing we wanted Matador to do was combine the similar parts of
our environments and Docker/Rancher configurations into one file, while
still also allowing us to have the environment-specific parts when
required, such as the environment variables that connected to our
production or staging database. This led to the creation of three files
that Matador requires to run: two generic templates that setup the
basics of the project, and one configuration file that holds all our
environment specific configuration:

  • docker-compose-template.yml: The generic Docker Compose file for
    the application. This file contains all the configuration that
    builds the docker containers that together create your application
    stack, as well as the connections between them.
  • rancher-compose-template.yml: The generic Rancher Compose file
    for the application. This file contains all the configuration that
    is specific to your Rancher environments, such as the scale for each
    of your docker containers or your SSL certificates that have been
    setup for the Rancher environment.
  • config.yml: The config file is where you can define your environment-specific configuration for the production, staging and development environments that have been set up on Rancher. Below is a short example of how this config file should be structured:

image_base: seed/example-image
project_name: tester
global:
 web:
  environment:
   - TEST=forall
dev:
 web:
  environment:
   - NODE_ENV=dev
  labels:
   io.rancher.scheduler.affinity:host_label: client=ibackpacker,env=development
staging:
 web:
  environment:
   - NODE_ENV=staging
  labels:
   # io.rancher.scheduler.affinity:host_label: client=alessi,env=staging
   com.alessimutants.pods: version=0.1,branch=dev
prod:
 lb:
  labels:
   io.rancher.scheduler.local: 'false'
  web:
   image: seed/web
   environment:
    - NODE_ENV=prod
   labels:
    io.rancher.scheduler.local: 'false'

Everything defined in the config.yml file will be added to your
docker-compose-template depending on the environment variable that you
pass the application at run time. Matador will take the additional
config provided, then append or overwrite what is in the
docker-compose-template file and write out a new docker-compose file for
you automatically. The same is done with your rancher-compose-template;
although at this point in time there are no configuration options to
alter the template, this will be added in future releases. These output
files are then used as part of the Rancher Compose process to update
your environment on Rancher. They are also saved locally so that you can
review the configuration that Matador has created for you.

So How Do I Use Matador?

We have put together some extremely detailed usage instructions on the GitHub repository, but the general gist is pretty straightforward. You will need to download the latest version of Matador Deploy from the Python Package Index (PyPI), as well as Rancher Compose, which can be downloaded from its release page on GitHub. Once that is done, there are a few required fields that you must supply in the configuration file to make things work. These are the first two entries in the config.yml:

  • project_name: This field will be the name that your stack receives
    when it is deployed to Rancher. It will also be automatically
    namespaced with the environment that you pass to Matador when you
    deploy. Note, this is not the Rancher environment name, but rather
    the Rancher stack name.
  • image_base: This field is the most important because it provides
    the DockerHub registry that your application will attempt to load
    your docker images from. These also have a naming convention that is
    required for each of your respective environment images as follows:

seed/example-image:latest    // Production Image
seed/example-image:staging   // Staging Image
seed/example-image:dev       // Development Image

We do plan to include the building of your Docker images within Matador itself in future releases; however, for now you will need to add these tags manually when pushing your images to DockerHub.
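
Until then, a minimal sketch of the manual tagging and pushing, using the example repository name from above (swap seed/example-image for your own registry path):

# Build once, tag per environment following the convention Matador expects.
docker build -t seed/example-image:dev .
docker push seed/example-image:dev

# Promote the same build to staging by re-tagging rather than rebuilding.
docker tag seed/example-image:dev seed/example-image:staging
docker push seed/example-image:staging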
Once your config.yml, docker-compose-template.yml and rancher-compose-template.yml have been configured, place them inside a folder called “templates” in the root of your project directory. Finally, from the root of your project directory, call the following command:

$ matador-deploy --url http://rancher.url.co --key RANCHER_KEY --secret RANCHER_SECRET --env dev

The fields themselves are explained here:

--url: This refers to the Rancher URL that you are trying to upload your Rancher configuration to.
--key: This is the API Key that needs to be created specifically for the Rancher environment that you are trying to update.
--secret: This is the Secret Key or password that is provided to you when you create a new API Key for your Rancher environment.
--env: This is the environment that you wish to update. It takes one of the following options: dev, staging or prod.

The benefit of Matador in this instance is that it forces you to provide
the authentication information for your Rancher environment. One of the
issues with Rancher Compose is that it will search your local
environment in your shell for the Rancher environment keys, so if you
are pushing a lot of different stacks to Rancher (for example pushing to
Staging, then to Production), it can be easy to make a mistake and push
the wrong image to the wrong environment. If these fields aren’t
provided to Matador, the process will simply fail. There are also plans
to improve this even further by querying your Rancher server with your
API keys and having Matador actually tell you what environment it is
attempting to update – look for that too in a future release!

Where To From Here?

We have a few ideas of things we want the application to be able to do as we work our way into building a full continuous deployment tool. A few basic examples would be:

  • Adding support for building Docker images and pushing them to Docker Hub
  • Adding a tagging system that connects your Docker Hub images to the currently loaded image on your Rancher environment
  • Adding a simplified rollback option, most likely using the tagging system

However, what we really want to know are the features that you would find most useful. We have open sourced Matador because we think that it could be really helpful in integrating all these excellent services together in the future. So please give it a try, and if you have any ideas, either write an issue and we will have a look into it, or just fork the repository and give it a go. We can’t wait to see what you come up with.
