Two Dot Awesome

Wednesday, 25 October, 2017

Rancher 2.0 is coming, and it’s amazing.

In the Beginning…

When Rancher released 1.0 in early 2016, the container landscape looked
completely different. Kubernetes wasn't the powerhouse that it is today.
Swarm and Mesos satisfied specific use cases, and the bulk of the
community still used Docker and Docker Compose with tools like Ansible,
Puppet, or Chef. It was still BYOLB (bring your own load balancer), and
volume management was another manual nightmare.

Rancher stepped in with Cattle, and with it we augmented Docker with
overlay networking, multi-cloud environments, health checking, load
balancing, storage volume drivers, scheduling, and other features, while
keeping the format of Docker Compose for configuration. We delivered an
API, command-line tools, and a user interface that made launching services
simple and intuitive. That's key: simple and intuitive. With these two
things, we abstracted the complexities of disparate systems and offered a
way for businesses to run container workloads without having to manage the
technology required to do so. We also gave the community the ability to
run Swarm, Kubernetes, or Mesos, but we drew the line at managing the
infrastructure components and stepped back, giving operators the ability
to do whatever they wanted within each of those systems. "Here's
Kubernetes," we said. "We'll keep the lights on but, beyond that, using
Kubernetes is up to you. Have fun!"

If you compress the next 16 months into a few thoughts, looking only at
our user base, we can say that Kubernetes adoption has grown dramatically,
while Mesos and Swarm adoption has fallen. The functionality of Kubernetes
has caught up with the functionality of Cattle and, in some areas, has
surpassed it as vendors develop Kubernetes integrations that they aren't
developing directly for Docker. Many of the features in Cattle have
analogs in Kubernetes, such as label-based selection for scheduling and
load balancing, resource limits for services, collecting containers into
groups that share the same network space, and more. If we take a few steps
back and look at it objectively, one might say that by developing
Cattle-specific services, we're essentially developing a clone of
Kubernetes at a slower pace than the Kubernetes project itself. Rancher
2.0 changes that.

The Engine Does Not Matter

First, let me be totally clear: our beloved Cattle is not going anywhere,
nor is RancherOS or Longhorn. If you get into your car and drive
somewhere, what matters is that you get there. Some of you might care
about the model of your car or its top speed, but most people just care
about getting to the destination. Few people care about the engine or its
specifics. We only look under the hood when something is going wrong.

The engine for Cattle in Rancher 1.x was Docker and Docker Compose. In
Rancher 2.x, the engine is Kubernetes, but it doesn't matter. In Rancher
1.x, you can go to the UI or the API and deploy environments with an
overlay network, bring up stacks and services, import docker-compose.yml
files, add load balancers, deploy items from the Catalog, and more. In
Rancher 2.x, guess what you can do? You can do the exact same things, in
the exact same way. Sure, we've improved the UI and changed the names of
some items, but the core functionality is the same. We're moving away from
using the term Cattle, because now Cattle is no different from Kubernetes
in practice. It might be confusing at first, but I assure you that a rose
by any other name still smells as sweet.

If you're someone who doesn't care about Kubernetes, then you can continue
not caring about it. In Rancher 1.x, we deployed Kubernetes into an
environment as an add-on to Rancher. In 2.x, we integrated Kubernetes with
the Rancher server. It's transparent, and unless you go looking for it,
you'll never see it. What you will see are features that didn't exist in
1.x and that, frankly, we couldn't easily build on top of Docker because
it doesn't support them. Let's talk about those things, so you can be
excited about what's coming.

The Goodies

Here is a small list of the things that you can do with Rancher 2.x
without even knowing that Kubernetes exists.

Storage Volume Drivers

In Rancher 1.x, you were limited to named and anonymous Docker volumes,
bind-mounted volumes, EBS, NFS, and some vendor-specific storage
solutions (EMC, NetApp, etc.). In Rancher 2.x, you can leverage any
storage volume driver supported by Kubernetes. Out of the box, this
brings NFS, EBS, GCE, Glusterfs, vSphere, Cinder, Ceph, Azure Disk,
Azure File, Portworx, and more. As other vendors develop storage drivers
for Kubernetes, they will be immediately available within Rancher 2.x.

Host Multitenancy

In Rancher 1.x, an environment was a collection of hosts. No host could
exist in more than one environment, and this delineation wasn’t always
appropriate. In Rancher 2.x, we have a cluster, which is a collection of
hosts and, within that cluster, you can have an infinite number of
environments that span those hosts. Each environment comes with its own
role-based access control (RBAC), for granular control over who can
execute actions in each environment. Now you can reduce your footprint
of hosts and consolidate resources within environments.

Single-Serving Containers

In Rancher 1.x, you had to deploy everything within a stack, even if it
was a single service with one container. In Rancher 2.x, the smallest
unit of deployment is a container, and you can deploy containers
individually if you wish. You can promote them into services within a
common stack or within their own stacks, or you can promote them to
global services, deployed on every host.

Afterthought Sidekicks

In Rancher 1.x, you had to define sidekicks at the time that you
launched the service. In Rancher 2.x, you can add sidekicks later and
attach them to any service.

Rapid Rollout of New Technology

When new technology like Istio or linkerd hits the community, we want to
support it as quickly as possible. In Rancher 1.x, there were times
where it was technologically impossible to support items because we were
built on top of Docker. By rebasing onto Kubernetes, we can quickly
deploy support for new technology and deliver on our promise of allowing
users to get right to work using technology without needing to do the
heavy lifting of installing and maintaining the solutions themselves.

Out-of-the-Box Metrics

In Rancher 1.x, you had to figure out how to monitor your services. We
have some monitoring extracted from Docker statistics, but it’s a
challenge to get those metrics out of Rancher and into something else.
Rancher 2.x ships with Heapster, InfluxDB, and Grafana, and these
provide per-node and per-pod metrics that are valuable for understanding
what’s going on in your environment. There are enhancements that you can
plug into these tools, like Prometheus and Elasticsearch, and those
enhancements have templates that make installation fast and easy.

Broader Catalog Support

The Catalog is one of the most popular items in Rancher, and it grows
with new offerings on a weekly basis. Kubernetes has its own
catalog-like service called Helm. In Rancher 1.x, if something wasn’t in
the Catalog, you had to build it yourself. In Rancher 2.x, we will
support our own Catalog, private catalogs, or Helm, giving you a greater
pool of pre-configured applications from which to choose.

We Still Support Compose

The option to import configuration from Docker Compose still exists.
This makes migrating into Rancher 2.x as easy as ever, either from a
Rancher 1.x environment or from a standalone Docker/Compose setup.

Phased Migration into Kubernetes

If you’re a community member who is interested in Kubernetes but has
shied away from it because of the learning curve, Rancher 2.x gives you
the ability to continue doing what you’re doing with Cattle and, at your
own pace, look at and understand how that translates to Kubernetes. You
can begin deploying Kubernetes resources directly when you’re ready.

What’s New for the Kubernetes Crowd?

If you’re part of our Kubernetes user base, or if you’re a Kubernetes
user who hasn’t yet taken Rancher for a spin, we have some surprises for
you as well.

Import Existing Kubernetes Clusters

This is one of the biggest new features in Rancher 2.x. If you like the
Rancher UI but already have Kubernetes clusters deployed elsewhere, you
can now import those clusters, as-is, into Rancher’s control and begin
to manage them and interact with them via our UI and API. This feature
is great for seamlessly migrating into Rancher, or for consolidating
management of disparate clusters across your business under a single
pane of glass.

Instant HA

If you deploy the Rancher server in High Availability (HA) mode, you
instantly get HA for Kubernetes.

Full Kubernetes Access

In Rancher 1.x, you could only interact with Kubernetes via the means
that Kubernetes allows — kubectl or the Dashboard. We were
hands-off. In Rancher 2.x, you can interact with your Kubernetes
clusters via the UI or API, or you can click the Advanced button,
grab the configuration for kubectl, and interact with them via that
means. The Kubernetes Dashboard is also available, secured behind
Rancher’s RBAC.

Compose Translation

Do you want to set up a deployment from a README that includes a sample
Compose file? In Rancher 2.x, you can take that Compose file and apply
it, and we’ll convert it into Kubernetes resources. This conversion
isn’t just a 1:1 translation of Compose directives; this is us
understanding the intended output of the Compose file and creating that
within Kubernetes.

It Really is This Awesome

I’ve been using Docker in production since 2013 and, during that time,
I’ve moved from straight Docker commands to an in-house deployment
utility that I wrote, to Docker Compose configs managed by Ansible, and
then to Rancher. Each of those stages in my progression was defined by
one requirement: the need to do more things faster and in a way that
could be automated. Rancher allows me to do 100x more than I could do
myself or with Compose, and removes the need for me to manage those
components. Over the year that I've been using Rancher, I've seen it
grow with one goal in mind: making things easy. Rancher 2.x steps up the
delivery of that goal with some amazing accomplishments. Cattle users
still have the Cattle experience. Kubernetes users have greater access
to Kubernetes. Everyone has access to all the amazing work being done by
the community. Rancher still makes things easy and still manages the
infrastructure so that you can get right to deploying containers and
getting work done. I cannot wait to see where we go next.

About the Author

Adrian Goins is a
field engineer for Rancher Labs who resides in Chile and likes to put
out fires in his free time. He loves Rancher so much that he can’t
contain himself.


DockerCon EU Impressions

Friday, 20 October, 2017

I just came back from DockerCon EU. I have not met a more friendly and
helpful group of people than the users, vendors, and Docker employees at
DockerCon. It was a well-organized event and a fun
experience.

I went into the event with some questions about where Docker was headed.
Solomon Hykes addressed these questions in his keynote, which was the
highlight of the entire show. Docker embracing Kubernetes is clearly the
single biggest piece of news coming out of DockerCon.

If there’s one thing
Docker wanted the attendees to take away, it was the Modernize
Traditional Applications (MTA) program. The idea of MTA is simple:
package a traditional Windows or Linux app as a Docker container, then
deploy the app on modern cloud infrastructure and achieve some savings.
By dedicating half of the day-one keynote and the entire day-two keynote
to this topic, Docker seems to have bet its entire business on this
single value proposition.

I am surprised, however, that MTA became the sole business case focus at DockerCon. The
DockerCon attendees I talked to expected Docker to outline a more
complete vision of business opportunities for Docker. MTA did not appeal
to the majority of DockerCon attendees. Even enterprise customers I met had
much bigger plans than MTA. I wish Docker had spent some time
reinforcing the value containers can deliver in transforming application
development, which is a much bigger business
opportunity.

MTA builds on the most basic capabilities of Docker as an application packaging format, a practice
that has existed since the very beginning of Docker. But what specific
features of Docker EE make MTA work better than before? Why is Docker
as a company uniquely positioned to offer a solution for MTA? What other
tools will customers need to complete the MTA journey? The MTA keynotes
left these and many other questions unanswered.

Beyond supporting Kubernetes, Docker made
no announcements that made Swarm more likely to stay relevant. As an
ecosystem partner, I find it increasingly difficult to innovate based on
Docker’s suite of technologies. I miss the days when Docker announced
great innovations like Docker Machine, Docker Swarm, Docker Compose,
Docker network and volume plugins, and all kinds of security-related
innovations. We all used to get busy working on these technologies the
very next day. There are still plenty of innovations in container
technologies today, but the innovations are happening in the Kubernetes
and CNCF ecosystem.

After integrating Kubernetes, I hope Docker can get back to producing more innovative
technologies. I have not seen many companies who possess as much
capacity to innovate and attention to usability as Docker. I look
forward to what Docker will announce at the next DockerCon.


Installing Rancher – From Single Container to High Availability

Thursday, 7 September, 2017

Update: This tutorial was updated for Rancher 2.x in 2019 here

Any time an organization, team or developer adopts a new platform, there
are certain challenges during the setup and configuration process. Often
installations have to be restarted from scratch and workloads are lost.
This leaves adopters apprehensive about moving forward with new
technologies. The cost, risk, and effort are too great in today's business
environment. With Rancher, we've established a clear container installation
and upgrade path so no work is thrown away. Facilitating a smooth upgrade
path is key to mitigating risk and avoiding increased costs. This guide has two
goals:

  1. Take you through the installation and upgrade process from a
    technical perspective.
  2. Inform you of the different types of installations and their
    purpose.

With that in mind, we’re going to walk through the set-up of Rancher
Server in each of the following scenarios, with each step upgrading from
the previous one:

  • Single Container (non-HA) – Installation
  • Single Container (non-HA) – Bind-mounted MySQL volume
  • Single Container (non-HA) – External database
  • Full Active/Active HA (upgrading to this from our previous setup)

A working knowledge of Docker is assumed. For this guide, you’ll need
one or two Linux virtual machines with the Docker engine installed and
an available MySQL database server. All virtual machines need to be able
to talk to each other, so be mindful of any restrictions you have in a
cloud environment (AWS, GCP, Digital Ocean, etc.). Detailed documentation
is located here.

Single Container (non-HA) – Installation

  1. SSH into your Linux virtual machine
  2. Verify your Docker
    installation with docker -v. You should see something resembling Docker
    version 1.12.x
  3. Run sudo docker run -d --restart=unless-stopped -p
    8080:8080 rancher/server. (These commands are collected into a short
    sketch after this list.)
  4. Docker will pull the rancher/server
    container image and run it on port 8080
  5. Run docker ps -a. You should
    see the rancher/server container listed in the output.
    (Note: remember the name or ID of the rancher/server container)
  6. At this point, you should be able to go to http://<server_ip>:8080 in
    your browser and see the Rancher UI.
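
Putting the steps above together, here is a minimal sketch of the commands involved (assuming Docker is already installed and <server_ip> is your host's address):

# verify the Docker installation
docker -v

# start the Rancher server container on port 8080
sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server

# confirm the container is running and note its name or ID
docker ps -a

# the UI should now answer at http://<server_ip>:8080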

You should see the Rancher UI with the welcome modal. Since this is our
initial setup, we need to add a host to our Rancher environment:


  1. Click ‘Got it!’
  2. Then click 'Add a Host'. The first time, you'll see
    a Host Registration URL page.
  3. For this
    article, we’ll just go with whatever IP address we have. Click ‘Save’.
  4. Now, click 'Add a Host' again. You'll see the host options for each
    cloud provider. (Note: the ports that have to be open for hosts to be
    able to communicate are 500 and 4500.) From here you can decide how you
    want to add your hosts based on your infrastructure.
  5. After adding your host(s), you should see the host details appear in
    the Hosts view.

So, what’s going on here? The rancher-agent-bootstrap container runs once to get the rancher-agent up and running then stops (notice the red circle indicating a stopped container). As we can see above, the health check container is starting up. Once all infrastructure containers are up and running on the host you’ll see this:

Host with infrastructure containers

Here we see that all the infrastructure containers (health check, scheduler,
metadata, network manager, IPsec, and cni-driver) are up and running on the host.

Tip:
to view only user containers, uncheck ‘Show System’ in the top right
corner of the Host view. Congratulations! You’ve set up a Rancher
Server in a single container. Rancher is up and running and has a local
MySQL database running inside of the container. You can add items from
the catalog, deploy your own containers, etc. As long as you don't delete
the rancher/server container, any changes you make to the environment will
be preserved as we go to our next step.

Single Container (non-HA) – Bind-mounted volume

Now we’re going to take our existing Rancher server and upgrade it to
use a bind-mounted volume for the database. This way, should the
container die when we upgrade to a later version of Rancher, we don’t
lose the data for what we’ve built. In our next steps, we’re going to
stop the rancher-server container, externalize the data to the host,
then start a new instance of the container using the bind-mounted
volume. Detailed documentation is located here.

  1. Let’s say our rancher/server container is named fantastic_turtle.
  2. Run docker stop fantastic_turtle.
  3. Run docker cp fantastic_turtle:/var/lib/mysql <path on host> (any
    path will do, but using /opt/docker or something similar is not
    recommended). I use /data as it's usually empty. This will copy the
    database files out of the container to /data on the host filesystem.
    The export will put your database files at /data/mysql.
  4. Verify the location by running ls -al /data. You will see
    a mysql directory within the path.
  5. Run sudo chown -R 102:105 /data. This will allow the mysql user
    within the container to access the files.
  6. Run docker run -d -v /data/mysql:/var/lib/mysql -p 8080:8080
    --restart=unless-stopped rancher/server:stable. Give it about 60
    seconds to start up. (These commands are collected into a short
    sketch after this list.)
  7. Open the Rancher UI at http://<server_ip>:8080. You should see
    the UI exactly as you left it. You’ll also notice your workloads
    that you were running have continued to run.
  8. Let’s clean up the environment a bit. Run docker ps -a.
  9. You’ll see 2 rancher/server Image containers. One will have a
    status of Exited (0) X minutes ago and one will have a status of Up
    X minutes. Copy the name of the container with exited status.
  10. Run docker rm fantastic_turtle.
  11. Now our docker environment is clean with Rancher server running with
    the new container.
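
As a recap, here is a minimal sketch of the migration above, assuming the container is named fantastic_turtle and /data is the host path you chose:

docker stop fantastic_turtle
docker cp fantastic_turtle:/var/lib/mysql /data        # database files land in /data/mysql
sudo chown -R 102:105 /data                            # let the in-container mysql user own the files
docker run -d -v /data/mysql:/var/lib/mysql -p 8080:8080 --restart=unless-stopped rancher/server:stable
docker ps -a                                           # find the old, exited rancher/server container
docker rm fantastic_turtle                             # remove it once the new container is healthy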

Single Container (non-HA) – External database

As we head toward an HA setup, we need to have Rancher server running with
an external database. Currently, if anything happens to our host, we
could lose the data supporting the Rancher workloads. We're going to
launch our Rancher server with an external database. We don't want to
disturb our current setup or workloads, so we'll have to export our
data, import it into a proper MySQL or MySQL-compliant database, and
restart our Rancher server so that it points to our external database
with our data in it.

  1. SSH into our Rancher server host.
  2. Run docker exec -it <container name> bash. This will give you a
    terminal session in your rancher/server container.
  3. Run mysql -u root -p.
  4. When prompted for a password, press [Enter].
  5. You now have a mysql prompt.
  6. Run show databases;. You'll see the cattle database listed, so we
    know we have the rancher/server database.
  7. Run exit.
  8. Run mysqldump -u root -p cattle > /var/lib/mysql/rancher-backup.sql.
    When prompted for a password, hit [Enter].
  9. Exit the container.
  10. Run ls -al /data/mysql. You'll see your rancher-backup.sql in the
    directory. We've exported the database! At this point, we can move the
    data to any MySQL-compliant database running in our infrastructure, as
    long as our rancher/server host can reach the MySQL database host. Also,
    keep in mind that all this while, the workloads you have been running on
    the Rancher server and hosts are fine. Feel free to use them. We haven't
    stopped the server yet, so of course they're fine.
  11. Move your rancher-backup.sql to a target host running a MySQL
    database server.
  12. Open a mysql session with your MySQL database server. Run mysql -u
    <user> -p.
  13. Enter your decided or provided password.
  14. Run CREATE DATABASE IF NOT EXISTS cattle COLLATE = 'utf8_general_ci'
    CHARACTER SET = 'utf8';
  15. Run GRANT ALL ON cattle.* TO 'cattle'@'%' IDENTIFIED BY 'cattle';
    This creates our cattle user for the cattle database using the cattle
    password. (Note: use a strong password for production.)
  16. Run GRANT ALL ON cattle.* TO 'cattle'@'localhost' IDENTIFIED BY
    'cattle'; This will allow us to run queries from the MySQL database host.
  17. Find where you put your rancher-backup.sql file on the MySQL database
    host. From there, run mysql -u cattle -p cattle < rancher-backup.sql.
    This says "hey mysql, using the cattle user, import this file into the
    cattle database". You can also use root if you prefer. (A consolidated
    sketch of these commands follows after this list.)
  18. Let's verify the import. Run mysql -u cattle -p to get a mysql session.
  19. Once in, run use cattle; then show tables;. You should see the cattle
    tables listed.
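
To recap, here is a consolidated sketch of the export and import, assuming the /data/mysql bind mount from the previous section and a reachable MySQL host (adjust users and passwords for production):

# on the Rancher server host, inside the rancher/server container
docker exec -it <container name> bash
mysqldump -u root -p cattle > /var/lib/mysql/rancher-backup.sql   # press Enter at the password prompt
exit

# on the MySQL database host, after copying rancher-backup.sql over
mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS cattle COLLATE = 'utf8_general_ci' CHARACTER SET = 'utf8';"
mysql -u root -p -e "GRANT ALL ON cattle.* TO 'cattle'@'%' IDENTIFIED BY 'cattle';"
mysql -u root -p -e "GRANT ALL ON cattle.* TO 'cattle'@'localhost' IDENTIFIED BY 'cattle';"
mysql -u cattle -p cattle < rancher-backup.sql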

Now we’re ready to bring up our Rancher server talking to our external

database.

  1. Log into the host where Rancher server is running.
  2. Run docker ps -a. Again, we see our rancher/server container is
    running.
  3. Let's stop our rancher/server. Again, our workloads will continue
    to run. Run docker stop <container name>.
  4. Now let's bring it up using our external database. Run docker run
    -d --restart=unless-stopped -p 8080:8080 rancher/server --db-host
    <mysql host> --db-port 3306 --db-user cattle --db-pass cattle
    --db-name cattle. Give it about 60+ seconds for
    the rancher/server container to run.
  5. Now open the Rancher UI at http://<server_ip>:8080.

Congrats! You’re now running Rancher server with an external database
and your workloads are preserved.

Rancher Server – Full Active/Active HA

Now it’s time to configure our Rancher server for High Availability.
Running Rancher server in High Availability (HA) is as easy as running
Rancher server using an external database, exposing an additional port,
and adding in an additional argument to the command so that the servers
can find each other.
1. Be sure that port 9345 is open between the
Rancher server host and any other hosts we want to add to the cluster.
Also, be sure port 3306 is open between any Rancher server and the MySQL
server host.
2. Run docker stop <container name>.
3. Run docker run -d
--restart=unless-stopped -p 8080:8080 -p 9345:9345 rancher/server
--db-host <mysql host> --db-port 3306 --db-user cattle --db-pass
cattle --db-name cattle --advertise-address <IP_of_the_Node>
(note: cloud provider users should use the internal/private IP
address). Give it 60+ seconds for the container to run. (Note: if after
75 seconds you can't view the Rancher UI, see the troubleshooting
section below. A consolidated sketch of this command follows at the end
of this section.)
4. Open the Rancher UI at http://<server_ip>:8080.
You’ll see all your workloads and settings as you left them.
5. Click
on Admin then High Availability. You should see your single host you’ve
added. Let’s add another node to the cluster.
6. On another host, run
the same command but replace --advertise-address
<IP_of_the_Node> with the IP address of the new host you're adding
to the cluster. Give it 60+ seconds. Refresh your Rancher server UI.
7.
Click on Admin then High Availability. You should see both nodes have
been added to your cluster.
8. Because we
recommend an odd number of Rancher server nodes, add either 1 or 3 more
nodes to the cluster using the same method. Congrats! You have a Rancher
server cluster configured for High Availability.
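
For reference, here is a consolidated sketch of the HA launch command, run once per node with that node's own IP address (cloud users should use the internal/private address):

docker stop <container name>

docker run -d --restart=unless-stopped -p 8080:8080 -p 9345:9345 rancher/server \
  --db-host <mysql host> --db-port 3306 --db-user cattle --db-pass cattle \
  --db-name cattle --advertise-address <IP_of_the_Node>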

Troubleshooting & Tips

During my time walking through these steps myself I ran into a few
issues. Below are some you might run into and how to deal with them.
Issue: Can’t view the Rancher UI after 75 seconds.
1. SSH into the
Rancher server host.
2. Confirm rancher/server is running. Run docker ps
-a and check the output for the rancher/server container.
3. To view logs, run
`docker logs -t tender_bassi` (in this case). If you see repeated database
connection errors, it means Rancher is unable to reach the database server
or authenticate with the credentials we've provided in our startup command.
Take a look at the networking settings, username and password, and access
privileges on the MySQL server.

Tip: While you may be tempted to name your rancher/server
container with '--name=rancher-server' or something similar, this is not
recommended. If you leave the auto-generated names in place and later need
to roll back to your prior container version after an upgrade step, you'll
have a clear distinction between container versions.

Conclusion

So, what have we done? We’ve installed Rancher server as a single
container. We’ve upgraded the Rancher installation to a high
availability platform instance without impacting running workloads.
We’ve also established guidelines for different types of environments.
We hope this was helpful. Further details on upgrading are available at
https://rancher.com/docs/rancher/v1.6/en/upgrading/.


Microservices Made Easier Using Istio

Thursday, 24 August, 2017


Update: This tutorial on Istio was updated for Rancher 2.0 here.

One of the recent open source initiatives that has caught our interest
at Rancher Labs is Istio, the micro-services
development framework. It’s a great technology, combining some of the
latest ideas in distributed services architecture in an easy-to-use
abstraction. Istio does several things for you. Sometimes referred to as
a “service mesh“, it has facilities for API
authentication/authorization, service routing, service discovery,
request monitoring, request rate-limiting, and more. It’s made up of a
few modular components that can be consumed separately or as a whole.
Some of the concepts such as “circuit breakers” are so sensible I
wonder how we ever got by without them.

Circuit breakers
are a solution to the problem where a service fails and incoming
requests cannot be handled. This causes the dependent services making
those calls to exhaust all their connections/resources, either waiting
for connections to timeout or allocating memory/threads to create new
ones. The circuit breaker protects the dependent services by
“tripping” when there are too many failures in a some interval of
time, and then only after some cool-down period, allowing some
connections to retry (effectively testing the waters to see if the
upstream service is ready to handle normal traffic again).

Istio is
built with Kubernetes in mind. Kubernetes is a
great foundation as it’s one of the fastest growing platforms for
running container systems, and has extensive community support as well
as a wide variety of tools. Kubernetes is also built for scale, giving
you a foundation that can grow with your application.

Deploying Istio with Helm

Rancher includes an enterprise Kubernetes distribution that makes it easy
to run Istio. First, fire up a Kubernetes environment on Rancher (watch
this demo or see our quickstart guide for help). Next, use the helm chart
from the Kubernetes Incubator for deploying Istio to start the framework's
components. You'll need to install helm, which you can do by following
this guide. Once you have helm installed, you can add the helm chart repo
from Google to your helm client:

helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator

Then you can simply run:

helm install -n istio incubator/istio


A view in the Kubernetes dashboard of the microservices that make up Istio
This will deploy a few micro-services that provide the functionality of
Istio. Istio gives you a framework for exchanging messages between
services. The advantage of using it over building your own is you don’t
have to implement as much “boiler-plate” code before actually writing
the business logic of your application. For instance, do you need to
implement auth or ACLs between services? It’s quite possible that your
needs are the same as most other developers trying to do the same, and
Istio offers a well-written solution that just works. It also has a
community of developers whose focus is to make this one thing work
really well, and as you build your application around this framework, it
will continue to benefit from this innovation with minimal effort on
your part.
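
Before deploying an application on top of it, it's worth a quick check that the Istio components came up. A minimal sketch, assuming the chart was installed into the default namespace as above:

helm ls                                  # the istio release should show a DEPLOYED status
kubectl get pods --namespace default     # the Istio pods should reach the Running state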

Deploying an Istio Application

OK, so let's try this thing out. So far all we have is plumbing. To
actually see it do something you’ll want to deploy an Istio
application. The Istio team have put together a nice sample application
they call ”BookInfo” to
demonstrate how it works. To work with Istio applications we’ll need
two things: the Istio command line client, istioctl, and the Istio
application templates. The istioctl client works in conjunction with
kubectl to deploy Istio applications. In this basic example,
istioctl serves as a preprocessor for kubectl, so we can dynamically
inject information that is particular to our Istio deployment.
Therefore, in many ways, you are working with normal Kubernetes resource
YAML files, just with some hooks where special Istio stuff can be
injected. To make it easier to get started, you can get both istioctl
and the needed application templates from this repo:
https://github.com/wjimenez5271/rancher-istio. Just clone it on your
local machine. This also assumes you have kubectl installed and
configured. If you need help installing that, see our docs. Now that
you've cloned the above repo, "cd" into the directory and run:

kubectl apply -f <(istioctl kube-inject -f samples/apps/bookinfo/bookinfo.yaml)

This deploys the Kubernetes resources using kubectl while injecting some
Istio-specific values. It will deploy new services to Kubernetes that will
serve the "BookInfo" application, but it will leverage the Istio services
we've already deployed. Once the BookInfo services finish deploying, we
should be able to view the UI of the web app. We'll need to get the
address first; we can do that by running

kubectl get services istio-ingress -o wide

This should show you the IP address of the istio ingress (under the
EXTERNAL-IP column). We’ll use this IP address to construct the URL to
access the application. For example, my output with my local Rancher
install looks like:
Example output of kubectl get services istio-ingress -o wide
The istio ingress is shared amongst your applications, and routes to the
correct service based on a URI pattern. Our application route is at
/productpage so our request URL would be:

http://$EXTERNAL_IP/productpage

Try loading that in your browser. If everything worked you should see
a page like this:
Sample application “BookInfo“, built on Istio
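
If you prefer the command line, here is a small sketch that pulls the external IP with a jsonpath query and requests the same page (assuming the ingress has a load balancer IP assigned):

EXTERNAL_IP=$(kubectl get service istio-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s "http://$EXTERNAL_IP/productpage" | head -n 20    # should return the BookInfo HTML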

Built-in metrics system

Now that we’ve got our application working we can check out the built
in metrics system to see how its behaving. As you can see, Istio has
instrumented our transactions automatically just by using their
framework. Its using the Prometheus metrics collection engine, but they
set it up for you out of the box. We can visualize the metrics using
Grafana. Using the helm chart in this article, accessing the endpoint of
the Grafana pod will require setting up a local kubectl port forward
rule:

export POD_NAME=$(kubectl get pods --namespace default -l "component=istio-istio-grafana" -o jsonpath="{.items[0].metadata.name}")

kubectl port-forward $POD_NAME 3000:3000 --namespace default

You can then access Grafana at:
http://127.0.0.1:3000/dashboard/db/istio-dashboard
The Grafana dashboard includes an Istio template that highlights useful
metrics.

Have you developed something cool with Istio on Rancher? If so, we'd love
to hear about it. Feel free to drop us a line on Twitter @Rancher_Labs,
or on our user slack.


Moving Your Monolith: Best Practices and Focus Areas

Monday, 26 June, 2017

You have a complex monolithic system that is critical to your business.
You’ve read articles and would love to move it to a more modern platform
using microservices and containers, but you have no idea where to start.
If that sounds like your situation, then this is the article for you.
Below, I identify best practices and the areas to focus on as you evolve
your monolithic application into a microservices-oriented application.

Overview

We all know that net new, greenfield development is ideal, starting with
a container-based approach using cloud services. Unfortunately, that is
not the day-to-day reality inside most development teams. Most
development teams support multiple existing applications that have been
around for a few years and need to be refactored to take advantage of
modern toolsets and platforms. This is often referred to as brownfield
development. Not all application technology will fit into containers
easily. It can always be made to fit, but one has to question if it is
worth it. For example, you could lift and shift an entire large-scale
application into containers or onto a cloud platform, but you will
realize none of the benefits around flexibility or cost containment.

Document All Components Currently in Use


Taking an assessment of the current state of the application and its
underpinning stack may not sound like a revolutionary idea, but when
done holistically, including all the network and infrastructure
components, there will often be easy wins that are identified as part of
this stage. Small, incremental steps are the best way to make your
stakeholders and support teams more comfortable with containers without
going straight for the core of the application. Examples of
infrastructure components that are container-friendly are web servers
(ex: Apache HTTPD), reverse proxy and load balancers (ex: haproxy),
caching components (ex: memcached), and even queue managers (ex: IBM
MQ). Say you want to go to the extreme: if the application is written in
Java, could a more lightweight Java EE container be used that supports
running inside Docker without having to break apart the application
right away? WebLogic, JBoss (Wildfly), and WebSphere Liberty are great
examples of Docker-friendly Java EE containers.

Identify Existing Application Components

Now that the “easy” wins at the infrastructure layer are running in
containers, it is time to start looking inside the application to find
the logical breakdown of components. For example, can the user interface
be segmented out as a separate, deployable application? Can part of the
UI be tied to specific backend components and deployed separately, like
the billing screens with billing business logic? There are two important
notes when it comes to grouping application components to be deployed as
separate artifacts:

  1. Inside monolithic applications, there are always shared libraries
    that will end up being deployed multiple times in a newer
    microservices model. The benefit of multiple deployments is that
    each microservice can follow its own update schedule. Just because a
    common library has a new feature doesn’t mean that everyone needs it
    and has to upgrade immediately.
  2. Unless there is a very obvious way to break the database apart (like
    multiple schemas) or it’s currently across multiple databases, just
    leave it be. Monolithic applications tend to cross-reference tables
    and build custom views that typically “belong” to one or more other
    components because the raw tables are readily available, and
    deadlines win far more than anyone would like to admit.

Upcoming Business Enhancements

Once you have gone through and made some progress, and perhaps
identified application components that could be split off into separate
deployable artifacts, it’s time to start making business enhancements
your number one avenue to initiate the redesign of the application into
smaller container-based applications which will eventually become your
microservices. If you’ve identified billing as the first area you want
to split off from the main application, then go through the requested
enhancements and bug fixes related to those application components. Once
you have enough for a release, start working on it, and include the
separation as part of the release. As you progress through the different
silos in the application, your team will become more proficient at
breaking down the components and making them in their own containers.

Conclusion

When a monolithic application is decomposed and deployed as a series of
smaller applications using containers, it is a whole new world of
efficiency. Scaling each component independently based on actual load
(instead of simply building for peak load), and updating a single
component (without retesting and redeploying EVERYTHING) will
drastically reduce the time spent in QA and getting approvals within
change management. Smaller applications that serve distinct functions
running on top of containers are the (much more efficient) way of the
future. Vince Power is a Solution Architect who has a focus on cloud
adoption and technology implementations using open source-based
technologies. He has extensive experience with core computing and
networking (IaaS), identity and access management (IAM), application
platforms (PaaS), and continuous delivery.


Sweating hardware assets at Experian with SUSE Enterprise Storage

Tuesday, 6 June, 2017

When Experian’s Business Information (BI) team overseeing infrastructure and IT functions saw the customers’ demand for better and more comprehensive data insights grow at an unprecedented rate, the company required a better storage solution that would enable them to maintain the same performance level. Implementing the SUSE Enterprise Storage solution gave Experian a starting platform for seamless capacity and performance growth that will enable future infrastructure and data projects without the company having to worry about individual servers hitting capacity.

The Problem

As a company facing increasing customer demands for better and more comprehensive insights, Experian began incorporating new data feeds into their core databases, enabling them to provide more in-depth insights and analytical tools for their clients. Experian went from producing a few gigabytes a month to processing hundreds of gigabytes an hour. This deep dive into big data analytics, however, came with limitations – how and where would Experian store larger data-sets while maintaining the same level of performance?

From the start, Experian had great success running ZFS as a primary storage platform, providing the flexibility to alternate between performance and capacity growth, depending on the storage medium. The platform enabled them to adapt to changing customer and business needs by seamlessly shifting between the two priorities.

But Experian's pace of growth highlighted several weaknesses. First off, standalone NAS platforms were insufficient, becoming unwieldy and extremely time-consuming to manage. Shuffling data stores between devices took days to complete, causing disruptions during switchovers. The second challenge was a lack of high availability – Experian had developed robust business continuity and disaster recovery abilities, but in the process, had given up a certain degree of automation and responsiveness. Their systems could not accommodate the customer demand for 24/7 real-time access to data created by the advent of APIs and the digitalization of the economy. Experian's third and greatest challenge was in replicating data. Data would often fluctuate and wind up asynchronous, creating a precarious balance – if anything started to lag, the potential for disruption and data loss was huge.

Experian had implemented another solution exclusively in their storage environment that had proven to be rock solid and equally flexible. While the team was happy with its performance, the new platform failed to fully address the true performance issue and devices and controller cards would still occasionally stall. As a company in the business of providing quick data access, the lag time raised serious concerns and presented obstacles in meeting client and business needs.

The Solution

Experian only saw one real short-term solution and moved to running ZFS on SUSE Linux Enterprise. This switch saved Experian time to find a more durable resolution, but was also fraught with limitations. Experian spent a number of weeks trying to find a permanent solution that would protect both their existing investment and future budget. To fix the limitation issue, Experian temporarily added another layer above their existing estate that would manage the distribution and replication of data.

As Experian was preparing to purchase the software and hardware needed to provide a more long-term solution, they came across SUSE's new product offering – SUSE Enterprise Storage, version 3. Based on an open source project called Ceph, SUSE Enterprise Storage offered everything Experian needed, with file and block storage and snapshots, and ran well on their existing HPE DL380 platform. SUSE had already been Experian's operating system of choice for a few years, proving to be reliable, fast and flexible. SUSE support teams were also responsive and reliable – this new solution offered the perfect product to meet Experian's needs.

The Outcome

Experian's initial SES build was modest, based around four DL380s for OSDs and four blades as MONs. Added to that were two gateway servers to provide block storage access from VMware and Windows clients. SUSE Enterprise Storage's performance met and exceeded Experian's expectations – even as a cross-site cluster, real-life IOPS easily go into the thousands. The benefit of software-defined storage is that it allows clients to abstract problems away from hardware and to eliminate the issue of individual servers hitting capacity. By adding more disks to make space for more data, and adding another server when access has slowed down, without having to pinpoint exactly where they need to go, capacity planning is much less of a headache for Experian. Software-defined storage also enables Experian to sweat their server hardware for longer, making budgeting and capacity planning easier.

While SES doesn't replace the flash-based storage Experian uses for databases, having a metro-area cluster means that business continuity is taken care of. What Experian ended up with is a modern storage solution on modern hardware that gives the company a starting platform for both seamless capacity and performance growth and enables future infrastructure and data projects.

Refactoring Your App with Microservices

Thursday, 1 June, 2017

So you’ve decided to use microservices. To help implement them, you may
have already started refactoring your app. Or perhaps refactoring is
still on your to-do list. In either case, if this is your first major
experience with refactoring, at some point, you and your team will come
face-to-face with the very large and very obvious question: How do you
refactor an app for microservices? That’s the question we’ll be
considering in this post.

Refactoring Fundamentals

Before discussing the how part of refactoring into microservices, it
is important to step back and take a closer look at the what and
when of microservices. There are two overall points that can have a
major impact on any microservice refactoring strategy.

Refactoring = Redesigning
Refactoring a monolithic application into microservices and designing a
microservice-based application from the ground up are fundamentally
different activities. You might be tempted (particularly when faced with
an old and sprawling application which carries a heavy burden of
technical debt from patched-in revisions and tacked-on additions) to
toss out the old application, draw up a fresh set of requirements, and
create a new application from scratch, working directly at the
microservices level. As Martin Fowler suggests in this
post
, however,
designing a new application at the microservices level may not be a good
idea at all. One of the key takeaway points from Fowler’s analysis is
that starting with an existing monolithic application can actually work
to your advantage when moving to microservice-based architecture. With
an existing monolithic application, you are likely to have a clear
picture of how the various components work together, and how the
application functions as a whole. Perhaps surprisingly, starting with a
working monolithic application can also give you greater insight into
the boundaries between microservices. By examining the way that they
work together, you can more easily see where one microservice can
naturally be separated from another.

Refactoring isn't generic
There is no one-method-fits-all approach to refactoring. The design
choices that you make, all the way from overall architecture down to
code-level, should take into account the application’s function, its
operating conditions, and such factors as the development platform and
the programming language. You may, for example, need to consider code
packaging—If you are working in Java, this might involve moving from
large Enterprise Application Archive (EAR) files, (each of which may
contain several Web Application Archive (WAR) packages) into separate
WAR files.

General Refactoring Strategies

Now that we’ve covered the high-level considerations, let’s take a look
at implementation strategies for refactoring. For the refactoring of an
existing monolithic application, there are three basic approaches.

Incremental

With this strategy, you refactor your application piece-by-piece, over
time, with the pieces typically being large-scale services or related
groups of services. To do this successfully, you first need to identify
the natural large-scale boundaries within your application, then target
the units defined by those boundaries for refactoring, one unit at a
time. You would continue to move each large section into microservices,
until eventually nothing remained of the original application.

Large-to-Small

The large-to-small strategy is in many ways a variation on the basic
theme of incremental refactoring. With large-to-small refactoring,
however, you first refactor the application into separate, large-scale,
“coarse-grained” (to use Fowler’s term) chunks, then gradually break
them down into smaller units, until the entire application has been
refactored into true microservices.

The main advantages of this strategy are that it allows you to stabilize
the interactions between the refactored units before breaking them down
to the next level, and gives you a clearer view into the boundaries
of—and interactions between—lower-level services before you start
the next round of refactoring.

Wholesale Replacement

With wholesale replacement, you refactor the entire application
essentially at once, going directly from a monolith to a set of
microservices. The advantage is that it allows you to do a full
redesign, from top-level architecture on down, in preparation for
refactoring. While this strategy is not the same as
microservices-from-scratch, it does carry with it some of the same
risks, particularly if it involves extensive redesign.

Basic Steps in Refactoring

What, then, are the basic steps in refactoring a monolithic application
into microservices? There are several ways to break the process down,
but the following five steps are (or should be) common to most
refactoring projects.

**(1) Preparation: **Much of what we have covered so far is preparation.
The key point to keep in mind is that before you refactor an existing
monolithic application, the large-scale architecture and the
functionality that you want to carry over to the refactored,
microservice-based version should already be in place. Trying to fix a
dysfunctional application while you are refactoring it will only make
both jobs harder.

**(2) Design: Microservice Domains: **Below the level of large-scale,
application-wide architecture, you do need to make (and apply) some
design decisions before refactoring. In particular, you need to look at
the style of microservice organization which is best suited to your
application. The most natural way to organize microservices is into
domains, typically based on common functionality, use, or resource
access:

  • Functional Domains. Microservices within the same functional
    domain perform a related set of functions, or have a related set of
    responsibilities. Shopping cart and checkout services, for example,
    could be included in the same functional domain, while inventory
    management services would occupy another domain.
  • Use-based Domains. If you break your microservices down by use,
    each domain would be centered around a use case, or more often, a
    set of interconnected use cases. Use cases are typically centered
    around a related group of actions taken by a user (either a person
    or another application), such as selecting items for purchase, or
    entering payment information.
  • Resource-based Domains. Microservices which access a related
    group of resources (such as a database, storage, or external
    devices) can also form distinct domains. These microservices would
    typically handle interaction with those resources for all other
    domains and services.

Note that all three styles of organization may be present in a given
application. If there is an overall rule at all for applying them, it is
simply that you should apply them when and where they best fit.

(3) Design: Infrastructure and Deployment

This is an important step, but one that is easy to treat as an
afterthought. You are turning an application into what will be a very
dynamic swarm of microservices, typically in containers or virtual
machines, and deployed, orchestrated, and monitored by an infrastructure
which may consist of several applications working together. This
infrastructure is part of your application’s architecture; it may (and
probably will) take over some responsibilities which were previously
handled by high-level architecture in the monolithic application.

(4) Refactor

This is the point where you actually refactor the application code into
microservices. Identify microservice boundaries, identify each
microservice candidate’s dependencies, make any necessary changes at
the level of code and unit architecture so that they can stand as
separate microservices, and encapsulate each one in a container or VM.
It won’t be a trouble-free process, because reworking code at the scale
of a major application never is, but with sufficient preparation, the
problems that you do encounter are more likely to be confined to
existing code issues.

(5) Test

When you test, you need to look for problems at the level of
microservices and microservice interaction, at the level of
infrastructure (including container/VM deployment and resource use), and
at the overall application level. With a microservice-based application,
all of these are important, and each is likely to require its own set of
testing/monitoring tools and resources. When you detect a problem, it is
important to understand at what level that problem should be handled.

Conclusion

Refactoring for microservices may require some work, but it doesn’t
need to be difficult. As long as you approach the challenge with good
preparation and a clear understanding of the issues involved, you can
refactor effectively by making your app microservices-friendly without
redesigning it from the ground up.


New Machine Driver from cloud.ca!

Wednesday, 24 May, 2017

One of the great benefits of the Rancher container
management platform is that it runs on any infrastructure. While it's
possible to add any Linux machine as a host using our custom setup
option, using one of the machine drivers in Rancher makes it especially
easy to add and manage your infrastructure.

Today, we’re pleased to
have a new machine driver available in Rancher, from our friends at
cloud.ca. cloud.ca is a regional cloud IaaS for
Canadian or foreign businesses requiring that all or some of their data
remain in Canada, for reasons of compliance, performance, privacy or
cost. The platform works as a standalone IaaS and can be combined with
hybrid or multi-cloud services, allowing a mix of private cloud and
other public cloud infrastructures such as Amazon Web Services. Having
the cloud.ca driver available within Rancher makes it that much easier
for our collective users to focus on building and running their
applications, while minding data compliance requirements. To access the
cloud.ca machine driver, navigate to the “Add Hosts” screen within
Rancher, select “Manage available machine drivers“. Click the arrow to
activate the driver; it’ll be easily available for subsequent
deployments. Click the > arrow to activate the
cloud.ca machine driver. You can learn more about using the
driver and Rancher together on the cloud.ca blog.
If you're headed to DevOps Days Toronto (May
25-26) as well, we encourage you to visit the cloud.ca booth, where you
can see a demo in person! And as always, we’re happy to hear from
members of our community on how they’re using Rancher. Reach out to us
any time on our forums, or on Twitter
@Rancher_Labs!


Do Microservices Make SOA Irrelevant?

Tuesday, 9 May, 2017

Is service-oriented architecture, or SOA, dead? You may be tempted to
think so. But that’s not really true. Yes, SOA itself may have receded
into the shadows as newer ideas have come forth, yet the remnants of SOA
are still providing the fuel that is propelling the microservices market
forward. That’s because incorporating SOA principles into the design and
build-out of microservices is the best way to ensure that your product
or service offering is well positioned for the long term. In this sense,
understanding SOA is crucial for succeeding in the microservices world.
In this article, I’ll explain which SOA principles you should adopt when
designing a microservices app.

Introduction

In today’s mobile-first development environment, where code is king, it
is easier than ever to build a service that has a RESTful interface,
connect it to a datastore and call it a day. If you want to go the extra
mile, piece together a few public software services (free or paid), and
you can have yourself a proper continuous delivery pipeline. Welcome to
the modern Web and your fully buzzworthy-compliant application
development process. In many ways, microservices are a direct descendant
of SOA, and a bit like the punk rock of the services world. No strict
rules, just some basic principles that loosely keep everyone on the same
page. And just like punk rock, microservices initially embraced a
do-it-yourself ethic, but they have been evolving and picking up
structure, which has moved microservices into the mainstream. It’s not
just the dot-com and Web companies that use microservices anymore; all
kinds of companies are interested.

Definitions

For the purposes of this discussion, the following are the definitions I
will be using.

Microservices: The implementation of a specific business function,
delivered as a separate deployable artifact, using queuing or a RESTful
(JSON) interface, which can be written in any language, and that
leverages a continuous delivery pipeline.

SOA: Component-based architecture which has the goal of driving
reuse across the technology portfolio within an organization. These
components need to be loosely coupled, and can be services or libraries
which are centrally governed and require an organization to use a single
technology stack to maximize reusability.
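
To make the microservices definition above a little more concrete, here
is a minimal sketch of such a service: one hypothetical business
function (looking up a client) exposed over a RESTful JSON interface
and built as a single, independently deployable artifact. The resource
name, port, and sample data are illustrative assumptions, not a
prescription.

```go
// Minimal sketch of a microservice per the definition above: one business
// function behind a RESTful JSON interface, shipped as one artifact.
// The "client" resource, port, and sample data are hypothetical.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Client is the single resource this service owns.
type Client struct {
	ID   string `json:"id"`
	Name string `json:"name"`
}

func main() {
	// An in-memory map stands in for whatever datastore the service owns.
	clients := map[string]Client{
		"42": {ID: "42", Name: "Acme Corp"},
	}

	// GET /clients/{id} returns one client as JSON.
	http.HandleFunc("/clients/", func(w http.ResponseWriter, r *http.Request) {
		id := r.URL.Path[len("/clients/"):]
		c, ok := clients[id]
		if !ok {
			http.NotFound(w, r)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(c)
	})

	// One artifact, one port, one business function.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

In practice, the map would be replaced by the datastore the team owns,
and the binary would be built and released through the team’s
continuous delivery pipeline.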

Positive things about microservices-based development

As you can tell, microservices possess a few distinct features that
SOA lacked, and they are good:

Allowing smaller, self-sufficient teams to own a product/service
that supports a specific business function has drastically improved
business agility and IT responsiveness to whatever direction the
business units they support want to take.

Automated builds and testing, while possible under SOA, are now
serious table stakes.

Allowing teams to use the tools they want, primarily around which
language and IDE to use.

Using agile-based development with direct access to the business.
Microservices and mobile development teams have successfully shown
businesses how technologists can adapt to and accept constant feedback.
Waterfall software delivery methods suffered from unnecessary overhead
and extended delivery dates as the business changed while the
development team was off creating products that often didn’t meet the
business’ needs by the time they were delivered. Even iterative
development methodologies like the Rational Unified Process (RUP) had
layers of abstraction between the business, product development, and the
developers doing the actual work.

A universal understanding of the minimum granularity of a service.
There are still arguments around questions like “Is adding a client a
business function, or is client management the business function?” So
it isn’t perfect, but at least either answer can be understood by the
business side that actually runs the business. You may not want to
believe it, but technology is not the entire business (for most of the
world’s enterprises, anyway). Back when SOA was king of the hill, some
services performed nothing more than a single database operation while
others added an entire client to the system, which led to nothing but
confusion from the business side when IT could not give a consistent
answer about what a service actually was.

How can SOA help?

After reading those definitions, you are probably
thinking, “Microservices sound so much better.” You’re right. They are
the next evolution for a reason, except that they threw away a lot of
the lessons that were hard-learned in the SOA world. They gave up all
the good things SOA tried to accomplish, largely because the IT vendors
in the space had morphed everything to push more product. Enterprise
integration patterns (which describe proven ways to connect
applications, typically through messaging) are a key place where
microservices are leveraging the work done by the SOA world. Everyone
involved in the integration space can benefit from these patterns, as
they are concepts, and microservices are a great technological way to
implement them. Below, I’ve listed two other areas where SOA principles
are being applied inside the microservices ecosystem to great success.

API Gateways (née ESB)

Microservices encourage point-to-point connections, with each client
taking care of its own translations for dates and other nuanced
details. This is just not sustainable as the number of microservices
available from most companies skyrockets. So in comes the concept of an
Enterprise Service Bus (ESB), which provides a means of communication
between different applications in an SOA environment. SOA originally
intended the ESB to carry messages between service components, not to
be the hub and spoke of the entire enterprise; that hub-and-spoke model
is what vendors pushed, what large companies bought into, and what left
such a bad taste in people’s mouths. The successful products in the ESB
space have evolved into today’s API gateways, which give a single
organization a centralized way to manage the endpoints it presents to
the world and to provide translation to older services (often SOA/SOAP)
that haven’t been touched in years but are vital to the business.
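
To illustrate the translation role described above, here is a minimal
sketch of a gateway endpoint, assuming a hypothetical legacy service
that returns XML with its own date convention: the gateway exposes a
modern JSON endpoint and normalizes the response, so outside callers
never touch the legacy system directly. The internal URL, paths, field
names, and formats are all invented for illustration.

```go
// Minimal sketch of an API gateway fronting an older service: one public
// JSON endpoint that fetches from a legacy XML backend and translates the
// response. The legacy URL, element names, and date format are hypothetical.
package main

import (
	"encoding/json"
	"encoding/xml"
	"log"
	"net/http"
	"time"
)

// legacyOrder mirrors the XML an older (SOA-era) service might return.
type legacyOrder struct {
	ID      string `xml:"OrderId"`
	Created string `xml:"CreatedOn"` // e.g. "02/01/2006 15:04" in a local convention
}

// apiOrder is the shape the gateway presents to the outside world.
type apiOrder struct {
	ID      string `json:"id"`
	Created string `json:"created"` // normalized to RFC 3339
}

func main() {
	http.HandleFunc("/api/v1/orders/", func(w http.ResponseWriter, r *http.Request) {
		id := r.URL.Path[len("/api/v1/orders/"):]

		// Hypothetical internal endpoint: the untouched legacy service the
		// gateway is shielding callers from.
		resp, err := http.Get("http://legacy.internal/orders?id=" + id)
		if err != nil {
			http.Error(w, "legacy service unavailable", http.StatusBadGateway)
			return
		}
		defer resp.Body.Close()

		var lo legacyOrder
		if err := xml.NewDecoder(resp.Body).Decode(&lo); err != nil {
			http.Error(w, "bad legacy response", http.StatusBadGateway)
			return
		}

		// Translate the legacy date convention into a uniform format.
		created, err := time.Parse("02/01/2006 15:04", lo.Created)
		if err != nil {
			http.Error(w, "bad legacy date", http.StatusBadGateway)
			return
		}

		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(apiOrder{ID: lo.ID, Created: created.Format(time.RFC3339)})
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```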

Overarching standards

SOA had the WS-* standards. They were heavy-handed, but they (mostly)
guaranteed interoperability. Having these standards in place, especially
the more common ones like WS-Security and WS-Federation, allowed
enterprises to call services in their partner systems on terms that
anyone could understand, even if those terms amounted to little more
than a checklist. Microservices have begun to formalize a set of
standards of their own, along with the vendors that provide them. The
OAuth and OpenID authentication frameworks are two great examples. As
microservices mature, building everything in-house is fun, fulfilling,
and great for the ego, but it is ultimately frustrating, as it creates a
lot of technical debt with code that constantly needs to be massaged as
new features are introduced. The other area where standards are rapidly
consolidating is API design and description. In the SOA world, there was
one way. It was ugly and barely readable by humans, but the Web Services
Description Language (WSDL), a standardized format for describing
network services, was universal. As of April 2017, all the major parties
(including Google, IBM, Microsoft, MuleSoft, and Salesforce.com)
involved in providing tools to build RESTful APIs are members of the
OpenAPI Initiative. What was once a fractured market with multiple
standards (JSON API, WADL, RAML, and Swagger) is now converging on a
single way to describe everything.
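
As a small example of leaning on those standards rather than building
everything in-house, the sketch below shows HTTP middleware that
validates bearer tokens against an OAuth 2.0 token-introspection
endpoint (RFC 7662). The introspection URL and the protected route are
hypothetical placeholders, and a real deployment would also
authenticate the introspection call itself.

```go
// Minimal sketch of standards-based auth: middleware that defers token
// validation to an OAuth 2.0 token-introspection endpoint instead of a
// home-grown scheme. The URL and protected route are hypothetical.
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"net/url"
	"strings"
)

// Hypothetical authorization-server endpoint (RFC 7662 token introspection).
const introspectURL = "https://auth.example.com/oauth2/introspect"

// requireToken rejects requests without a valid bearer token, delegating
// the actual decision to the authorization server.
func requireToken(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		auth := r.Header.Get("Authorization")
		token := strings.TrimPrefix(auth, "Bearer ")
		if auth == "" || token == auth {
			http.Error(w, "missing bearer token", http.StatusUnauthorized)
			return
		}

		// Ask the authorization server whether the token is still active.
		resp, err := http.PostForm(introspectURL, url.Values{"token": {token}})
		if err != nil {
			http.Error(w, "auth server unavailable", http.StatusBadGateway)
			return
		}
		defer resp.Body.Close()

		var result struct {
			Active bool `json:"active"`
		}
		if err := json.NewDecoder(resp.Body).Decode(&result); err != nil || !result.Active {
			http.Error(w, "invalid token", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	// A hypothetical protected business endpoint.
	mux.HandleFunc("/reports", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		w.Write([]byte(`{"status":"ok"}`))
	})
	log.Fatal(http.ListenAndServe(":8080", requireToken(mux)))
}
```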

Conclusion

SOA originated as a set of concepts, which are the same core concepts as
microservices architecture. Where SOA fell down was driving too much
governance and not enough “Just get it done.” For microservices to
continue to survive, the teams leveraging them need to embrace their
ancestry, continue to steal the best of the ideas, and reintroduce them
using agile development methodologies, with a healthy dose of
anti-governance to stop SOA Governance from reappearing. And then,
there’s the side job of keeping ITIL and friends safely inside the
operational teams where they thrive.

Vince Power is a Solution Architect who focuses on cloud adoption and
technology implementations using open source-based technologies. He has
extensive experience with core computing and networking (IaaS), identity
and access management (IAM), application platforms (PaaS), and
continuous delivery.

Tags: Category: Rancher Blog Comments closed