Cloud, Managed Services & Infrastructure Ace, Aptira, is SUSE’s First Solution Partner in APJ

Monday, 4 December, 2017

Becoming a Solution Partner in the SUSE Partner Program is no mean feat – it is a status awarded to select partners. As the highest partner tier in the Program, it recognises partners with the deep technical expertise and commitment to build only the best solutions – solutions that provide maximum efficiency and high availability to demanding enterprise clients.

“Aptira embodies what we are looking for in a Solution Partner”, says Mark Salter, VP Channels for SUSE. “They have superior technical know-how and we are impressed with their commitment to providing their customers with the best solution possible with no vendor lock-in – a philosophy that SUSE, as the open, open source company, also subscribes to.”

“We like to partner with like-minded companies. In SUSE, we have a mutually complementary portfolio, and a joint desire to concentrate on offerings that are commercially better for customers”, says Tristan Goode, Founder, CEO and Board Director of Aptira. “Becoming SUSE’s first Solution Partner in APJ for Storage shows that we do much more than just OpenStack. We also offer a full range of technical services from consulting, solution delivery, systems integration through to managed services and support”.

Of Solutionauts and Kittens

Aptira are headquartered in New South Wales, Australia, but now have Solutionauts (expert advisors who use the best of cutting-edge tech to solve real-world problems) spread across APJ.

Tristan describes Aptira’s approach: “We don’t believe in cookie-cutter packages, so each solution is comprehensive and tailor-made to suit the customer’s requirements.”

The Aptira team!

Like SUSE, Aptira are also on the Board of the OpenStack Foundation. A SUSE partner since 2016, Aptira has proven it has some of the best open source skills in Australia, with expertise across different open source projects. They have recently engaged with the tertiary education sector using SUSE Enterprise Storage.

And to top it off, what other company would have a dedicated page (and Twitter feed) for kittens? According to Aptira, it all started when CTO Roland’s cat invited herself into meetings and gained the title of Chief Feline Officer. Although their beloved Chief Feline Officer has passed on, the company has carried on her legacy by showcasing all the cool cats belonging to various staff members.


OpenStack Adoption at a Torrid Pace

Tuesday, 28 November, 2017

What’s Driving Hybrid Cloud Adoption?

Over the next two years, respondents predicted hybrid cloud growth would be the fastest (66%), followed by private cloud (55%) and public cloud (36%). The use case driving hybrid cloud adoption is the desire, cited by 89% of those surveyed, to do development in the public cloud while hosting production workloads in their private cloud. Key reasons for this approach include security (63%), data sovereignty (52%), performance-related issues (52%) and cost (30%). Many also wanted the freedom to place workloads where they make the most sense in terms of cost and performance. For example, I/O-intensive workloads can quickly drive up the networking costs of public clouds, while development projects are often short in duration and require less bandwidth, which may make the public cloud more cost-efficient.

Is OpenStack the Answer?

OpenStack is following an adoption curve very similar to Linux. Initial implementations by industry heavyweights have given way to production environments in companies of all sizes. In this survey, 23% of organizations were in production (up from 15% in 2015), with 37% in testing and 22% expecting to deploy within 12 months. This means that 82% of respondents are either now using or planning to use OpenStack. The reasons for this rapid adoption are fairly well known: flexibility (61%), reduced cost (52%), agility (47%), adaptability/integration (46%) and freedom of choice (44%). OpenStack is one of the few platforms that enables organizations to combine legacy investments in hardware and software with leading-edge technologies, including Platform as a Service (PaaS), containers, workload automation (including Kubernetes) and software-defined infrastructure. Most of the latest innovations in IT are either introduced or made available on OpenStack.

Why is There a Perception that OpenStack is Difficult?

The majority of the respondents to this survey (82%) thought that deploying OpenStack was difficult. I believe there are two reasons for this: 1) they chose to download the upstream code and build it from scratch (55%), or 2) installation of OpenStack was very difficult in the past and that experience has lingered. Building your own distribution of OpenStack from the upstream code is not for the faint of heart. It requires a rather large team of experts who can assemble all of the different pieces into a production-worthy environment. Drivers from all of the hardware and software partners have to be tested and integrated. Further, maintenance is a nightmare – OpenStack has a six-month release cycle, so you’ll need to repeat this exercise frequently. SUSE has made great strides in addressing these issues with award-winning ease of installation and non-disruptive upgrades.

Conclusion: Time to Include SUSE OpenStack Cloud on Your Short List of Technologies

OpenStack may no longer be the “shiny object” it once was, which really means that hype has given way to a cloud platform that has stabilized into a production-worthy environment. Take control of your IT destiny by adopting the one platform that seamlessly integrates legacy and cloud-native architectures with no vendor lock-in. SUSE OpenStack Cloud is the evolutionary path to your next-generation Software Defined Data Center. To learn more about SUSE OpenStack Cloud and the benefits our customers have gained, check out our new Total Economic Impact Study for SUSE OpenStack Cloud.


What is SUSE up to in Madrid? Check us out at HPE Discover.

Tuesday, 14 November, 2017

I’m sure you are anxiously waiting to see the camaleón verde (green chameleon) at the SUSE booth in Madrid, right? I know I am. 🙂

Once again SUSE is coming to HPE Discover, and we welcome the opportunity to meet HPE Employees, Partners and Customers at HPE’s premier conference.

SUSE’s strategic alliance with HPE has come a long way these past few years, and now that the merger of Micro Focus (SUSE’s parent company) with HPE’s software business is complete, we are even closer partners.

Remember that the merger announcement resulted in SUSE being named HPE’s preferred partner for Linux, OpenStack and Cloud Foundry solutions last year?

Let me take this opportunity to recap some other recent items:

  • 2016/2017 – SUSE Enterprise Storage is HPE’s first certified, supported and sold Ceph-based storage solution, and the first object storage solution certified on HPE Synergy
  • Sep 2016 – SUSE was named HPE’s preferred partner for Linux, OpenStack and Cloud Foundry solutions
  • Mar 2017 – SUSE completed the acquisition of OpenStack IaaS and Cloud Foundry PaaS assets from HPE
  • Aug 2017 – HPE ProLiant Gen10 servers and SUSE achieved yet another performance benchmark
  • Sep 2017 – Micro Focus completed its merger with HPE Software

At HPE Discover Madrid, SUSE will be at booth #122 (right next to Micro Focus). Come by, say hi and enter to win prizes.

Video game: each day, 1st place on the Tux Racer game takes a Hover Camera Passport Drone home.

In-booth rack simulation: hear about our joint solutions, play with the touch-screen unit and enter for a chance to win Bluetooth earbuds.

More: since we will be approaching the holiday season, you’ll take home a chocolate advent calendar for a sweet countdown, and also a plush chameleon if you are fast enough (those run out pretty quickly).


Now, we should also mention some great sessions there. If you are interested in Software-Defined Storage, make sure you check out:

T4630 Software-defined storage powered by Ceph to future-proof your data center with SUSE and HPE

Tuesday, November 28 – 16:30–17:00


And if you are curious about open source and Composable infrastructure, we will talk about all the SUSE integrations with HPE OneView as part of the HPE Synergy ecosystem:

T4632 HPE OneView and SUSE — learn where open source and Composable infrastructure intersect

Wednesday, November 29 – 13:30–14:00


Learn more about our presence at HPE Discover HERE and follow #HPEDiscover on Twitter.

If you want to schedule a meeting with us in Madrid, please e-mail hpeteam@suse.com and we’ll be happy to arrange details.

Keep up with our Strategic Partnership at suse.com/hpe and we will see you in Madrid!


The Future of Storage: four reasons why open source storage should be part of your strategy.

Monday, 13 November, 2017

The comfortable world of storage appliance vendors of ten years ago is under assault from a wave of technical and market changes that are rewriting how storage and data protection processes are managed and governed. Enterprises need to plan now to deal with those changes – or face a future where data is locked into vendors’ platforms and products, hard or expensive to release, and business agility is threatened by a lack of data mobility.

The competition between storage vendors – in the cloud, on premises and across both – is fierce, and technology is changing radically: in this blog, we argue that enterprises need to maintain internal skill sets to take advantage of market opportunity, develop cost savings, and avoid the threat of vendor lock down – with the accompanying loss of agility and cost disadvantage that this can entail.

Exponential data growth: the challenge that just won’t go away

Data growth is not like economic growth or growth on savings in a bank – a sluggish few percentage points grudgingly added over long periods. It’s incredibly rapid, long into double figures, and the interest rate is compound. In their attempts to describe this growth, media outlets and analysts alike pile on the superlatives and adjectives, with many favouring the term ‘exponential’, a word previously confined to advanced maths students and lecturers, and even ‘explosive’ – bringing to mind the rate at which a fireball expands when a bomb is detonated, with the added connotations of damage.

Alongside the charts, the forecasts and the long technical words are the anecdotes – the stories used to illustrate growth. Here are a few choice examples:

– Decoding the human genome was one of the biggest science stories of a generation. It took ten years: on today’s tech it could be done in two weeks.

– In 2010, The Economist published a story about Walmart’s systems handling a staggering 75 million transactions an hour, feeding into databases estimated to hold a total of 2.5 petabytes. This year, in 2017, Forbes cites them as processing 2.5 petabytes an hour.

– When the Sloan Digital Sky Survey began in 2000, more data was collected in the first few weeks than had previously been gathered in the entire history of astronomy. When the Large Synoptic Survey Telescope goes online in Chile in 2022, it will start with 16 petabytes of storage – scaling immediately – and the volume of data collected will be so large as to be impossible for humans to analyse: this is AI or bust, on a system with 100 teraflops of computing power.

– Benjamin Franklin is widely credited with saying ‘nothing in this world can be said to be certain, with the exception of death and taxes’. The typical storage pro added data growth to that list a long time ago. And we haven’t even mentioned the IoT.

Storage costs might have come down a lot over the last few years, but those savings are more than negated by the volume growth rate, creating a major issue for many, as reduced capital spending has become the norm – as Fortune reports, CapEx spending is on the way down, and not just for IT. ‘Do more with less,’ says the CFO to the CIO, and the CIO to everyone else. We’ve reached the limits of ‘if it ain’t broke, don’t fix it’. Everyone knows things have to change. But how?

How the hyperscalers broke the mould

Given a free hand, it’s tempting to think that most appliance manufacturers would have continued exactly as they always had done: selling expensive proprietary hardware operated by expensive proprietary software, with a built-in limitation on interoperability – frustrating the movement of data from one platform to another by design. Coupled, of course, with a pricing model that encouraged the customer to ‘bet the farm’: the vendor offers customers who give them the majority of the estate an aggressive discount – locking them down, and the competition out, with the same strategy. Hence the expression of a Dell or HP or IBM ‘shop’. Good business for the big boys, not so good for the customer, who can end up with a limited choice of product features and may find, long term, that they have backed the wrong horse.

It was all good until the hyperscalers broke the mould: the economics of massive cloud provision doesn’t allow for the margins of appliance manufacturers. Wired reported that Google’s software consists of something approaching 2bn lines of code: if they were paying Microsoft for the server operating system licenses and HP for the storage space, we’d be seeing a very large difference in those three firms’ comparative profitability, and we’d all sell our Google shares.

Faced with this unsustainable bill for proprietary hardware and software, Google (grudgingly!) teamed up with the likes of Facebook in the Open Compute Project and outsourced all server provision, on its own designs, to previously unheard-of companies in Asia: thus the hyperscalers drove unprecedented use of the ‘white box’ and made it the norm. How else could cloud providers compete with highly virtualised on-premises infrastructure? The cost savings have to come from somewhere. The scale of data centres got so large it became possible to rent them out – giving us the Cloud.

Today we have the world of hybrid cloud, where pretty much all major enterprises have a mixed environment: some hardware on premises, some in the cloud, some software run completely by others as SaaS, and increasingly complex APIs connecting it all up; speed and scale at the touch of a button, but difficult to control and architect. And here we go again, as the competing vendors try to get us all to ‘bet the farm’ and our tech Goliaths throw their weight around – with the occasional David thrown into the mix.

Software defined – the place where the storage vendors have staked their futures

So, if massive data volume growth and the example of the hyperscalers’ reaction to it have caused a rethink in enterprises’ approach to storage, where does the future lie, and how are vendors reacting? SUSE would argue, as would practically every analyst voice in the world, that the answer lies in software defined.

The simple truth is that storage at scale on proprietary hardware is unaffordable. The reaction of storage vendors is to confine hardware sales to the high-performance end of the market – massive-throughput, low-latency storage for workloads tightly coupled to the servers running the compute in the same stack, frequently hyperconverged; all else is about the software. ‘We are now software companies,’ say the storage hardware vendors. ‘Work with us, and you will need only one software layer for your entire infrastructure – you know that change that took over compute with the dawn of virtualisation? Well, we are bringing that power to your storage.’ Pooled resources, increased utilisation, cost savings. This is inevitably going to cause consolidation: there will be winners and losers in this environment.

And it doesn’t take a genius to work out there’s a problem with this approach from the enterprise’s perspective: it only solves half the cost problem, because you are still paying for expensive proprietary software. Moving to software defined storage with any vendor will save you some money, but nothing like as much as you could save with open source.

How enterprises hedge their bets in an uncertain world.

Lock down is a perennial fear for enterprises: nobody wants to find, having bet the farm, that they are at the mercy of a vendor who knows the customer cannot operate without them. So how do you get your storage strategy right? Storage strategy has to follow the main enterprise strategy – and for the following reasons, open source should be part of your strategy.

#1. ‘Betting the farm’ is dangerous. Maintain multiple storage vendors.
It can be tempting to lock in a vendor’s short- and medium-term pricing by placing all of your business with them, generating operational simplicity in the process – one set of storage tools and processes does make things easier. It may look particularly attractive in the Cloud. But if you go ‘all in’ you are playing poker with your storage budget and gambling that your vendor partners will not punish you with price increases later.

#2. Pay attention to the Cloud war – it has not been won
Amazon unquestionably has the lead in adoption over Microsoft Azure and Google Compute. Nevertheless, everyone knows that Amazon is playing a ‘long game’ of profit tomorrow, not today. Hence, many have a foot in Azure or Google Compute even when they have a leg in AWS, because there must be an exit plan. But this comes with a price – operational complexity – a price that can be particularly high in the world of storage, where the new pricing models can be about how much data you move down the wire rather than how much you own.

#3. Maintain and expand your skill sets to avoid lock down
It’s tempting to reduce complexity by standardising on a small set of suppliers. The upside is that you get simplicity – one approach to storage means it’s easy to train staff, some roles are no longer necessary in a cloud scenario and, arguably, you can get on with that ‘core business’ of serving customers that proprietary vendors love to tell you about. However, if you don’t know how to exit AWS and move to Azure without crippling operations, if you don’t know how much it costs to repatriate data, and if you’ve got nowhere to put it when you do, you are locked down. See point #1.

#4. Use open source software defined storage – or pay more.
If you use only cloud or only proprietary software, your software and hardware costs will always be greater than they need to be. This is a simple fact – open source means cost savings from moving to commodity hardware, and the total elimination of proprietary software costs. Proprietary storage vendors will tell you – rightly – that cost can reappear as skilled headcount, consultancy and support. But then, if you don’t have skilled headcount, how are you going to maintain your capability to switch cloud providers, and how are you going to assess which vendors to use? Given that the obvious answer is to hire expensive consultants, we’d argue that the proprietary sales argument is somewhat duplicitous.

Visit SUSE @ SAP TechEd Barcelona to talk about the transition to SAP S/4HANA

Thursday, 9 November, 2017

Moving to SAP S/4HANA offers exciting possibilities, but it also means transitioning your SAP infrastructure. Because your SAP systems are the heart of your business, we know what a big deal this move can be. Whether you’re a loyal SUSE customer or looking for information about transitioning to SAP S/4HANA, SAP TechEd is a great place to learn more about how SUSE can help optimize your digital transformation.

Come to our Booth 8.1 P07 to get your questions answered:

  • What does it mean for me to move to Linux?
  • What should my operating environment look like?
  • What are the best high availability and disaster recovery scenarios for my SAP HANA infrastructure?
  • Should I move to the public or private cloud?

SUSE powers SAP® Cloud Platform for Enterprise Customers

Did you know that SUSE OpenStack Cloud and SUSE Enterprise Storage are key elements of SAP Cloud Platform? Learn more about the architecture and talk to our technical experts at the booth.

Read the full press release.

Win a Raspberry Pi or Amazon Echo

By asking us questions you have the chance to win a Raspberry Pi or Amazon Echo. Or join us for some fun and exciting activities.

Attend our session

Lecture Session

Networking Sessions

  • SUSE & HPE: SUSE Live Kernel Patching with HPE Converged Systems for SAP HANA (HPE Booth P20) – Tue, Nov 14, 15:00–15:30; Wed, Nov 15, 14:00–14:30; Thu, Nov 16, 14:00–14:30

Don’t miss our special events with partners

SAP on Azure Cloud Workshop

You are cordially invited to attend this exclusive, invitation-only one-day technical workshop on SAP® on Microsoft Azure. Learn about the latest technology from SAP, SUSE and Microsoft.

  • Date and time: Monday, November 13, 10:00 a.m.–6:00 p.m.
  • Location: Fairmont Rey Juan Carlos I Hotel
  • Cost: Participation is free of charge
  • Registration: Please register by sending an email to v-naelof@microsoft.com and provide your company name, job title, country and phone number

SAP Executive Technology Summit

Similar to last time, an elite group of customers, executive leaders from SAP, Hewlett Packard Enterprise, and SUSE, and special guest speakers will gather for an insightful and interactive, invitation-only program integrated into the first day of SAP TechEd. This event will be held at the exclusive Porta Fira Hotel, which is within walking distance of the SAP TechEd event.

  • Location: Porta Fira Hotel (next to Fira de Barcelona)
  • Address: Plaza Europa, 45, 08908 Hospitalet de Llobregat, Barcelona
  • Date and time: November 13, 2017, 12:00 p.m. to 5:00 p.m.


What’s All This about SAP S/4HANA?

Tuesday, 31 October, 2017

If you’re an SAP customer, you’ve surely heard of SAP S/4HANA by now, and the buzz has probably reached those who are just considering SAP enterprise applications. Whether you’re an SAP customer or not, knowing what’s happening could be vital for your business.

First, you have to know a little bit about SAP HANA, SAP’s in-memory database. SAP HANA transforms transactions, analytics, and predictive and spatial processing so businesses can operate in real time. It allows you to simultaneously handle real-time transactions and analytic workloads with extreme speed. And because it’s such an improvement over traditional databases, SAP announced a few years ago that all its products will now be based on SAP HANA. This SAP Business Suite 4 SAP HANA (S/4HANA) is what people are talking about.

SAP has announced that it will extend support for its traditional enterprise resource planning solutions running on Oracle, Microsoft and other third-party databases through 2025. After that date, businesses will have to rely on SAP S/4HANA. That deadline is what’s driving the urgency behind migrations at organizations around the world.

I know that migrating any SAP system can be quite a big deal, so I feel for those IT teams in the trenches, trying to figure out how to make it all happen smoothly. And to make it go smoothly, there’s one other thing you have to understand about SAP HANA: It runs only on Linux.

For organizations that still rely on UNIX and for those who are traditionally a Microsoft-only shop, this can be frightening news. But it shouldn’t be. There are good reasons why SAP HANA runs on Linux and why it can be a simple and long-term sustainable move.

New business models that depend on mobility, cloud computing and Internet of Things devices are increasing complexity in the modern data center. That’s one reason for SAP’s move to standardize, simplify and innovate with open source technologies that run on Linux. Open source technologies are built to be interoperable by their very nature. Linux already leads enterprises’ shift to the cloud, so it’s a natural choice for many as they consider new databases and big-data systems.

And here at SUSE, we make the transition as painless as possible. SUSE® Linux Enterprise Server for SAP Applications features an SAP system Installation Wizard, automated recovery for SAP HANA systems and other features that make it easier for you to take your first steps toward SAP S/4HANA. I’ll be explaining why SUSE is the smart choice for organizations facing a move to SAP S/4HANA in an upcoming blog.

For now, just know that the move to SAP S/4HANA has big implications. With SAP’s solution for transforming to a digital enterprise, you’ll be able to bring real-time speed to your entire organization and give it a nimble, superfast, data-crunching platform. And when you put it that way, there’s really only one question left: Why wait until 2025?

Two Dot Awesome

Wednesday, 25 October, 2017

Rancher 2.0 is coming, and it’s amazing.

In the Beginning…

When Rancher released 1.0 in early 2016, the container landscape looked
completely different. Kubernetes wasn’t the powerhouse that it is today.
Swarm and Mesos satisfied specific use cases, and the bulk of the
community still used Docker and Docker Compose with tools like Ansible,
Puppet, or Chef. It was still BYOLB (bring your own load balancer), and
volume management was another manual nightmare. Rancher stepped in with
Cattle, and with it we augmented Docker with overlay networking,
multi-cloud environments, health checking, load balancing, storage
volume drivers, scheduling, and other features, while keeping the format
of Docker Compose for configuration. We delivered an API, command-line
tools, and a user interface that made launching services simple and
intuitive. That’s key: simple and intuitive. With these two things, we
abstracted the complexities of disparate systems and offered a way for
businesses to run container workloads without having to manage the
technology required to do so. We also gave the community the ability to
run Swarm, Kubernetes, or Mesos, but we drew the line at managing the
infrastructure components and stepped back, giving operators the ability
to do whatever they wanted within each of those systems. “Here’s
Kubernetes,” we said. “We’ll keep the lights on but, beyond that, using
Kubernetes is up to you. Have fun!” If you compress the next 16 months
into a few thoughts, looking only at our user base, we can say that
Kubernetes adoption has grown dramatically, while Mesos and Swarm
adoption has fallen. The functionality of Kubernetes has caught up with
the functionality of Cattle and, in some areas, has surpassed it as
vendors develop Kubernetes integrations that they aren’t developing
directly for Docker. Many of the features in Cattle have analogs in
Kubernetes, such as label-based selection for scheduling and load
balancing, resource limits for services, collecting containers into
groups that share the same network space, and more. If we take a few
steps back and look at it objectively, one might say that by developing
Cattle-specific services, we’re essentially developing a clone of
Kubernetes at a slower pace than Kubernetes themselves. Rancher 2.0
changes that.

The Engine Does Not Matter

First, let me be totally clear: our beloved Cattle is not going
anywhere, nor is RancherOS or Longhorn. If you get into your car and
drive somewhere, what matters is that you get there. Some of you might
care about the model of your car or its top speed, but most people just
care about getting to the destination. Few people care about the engine
or its specifics. We only look under the hood when something is going
wrong. The engine for Cattle in Rancher 1.x was Docker and Docker
Compose. In Rancher 2.x, the engine is Kubernetes, but it doesn’t
matter. In Rancher 1.x, you can go to the UI or the API and deploy
environments with an overlay network, bring up stacks and services,
import docker-compose.yml files, add load balancers, deploy items from
the Catalog, and more. In Rancher 2.x, guess what you can do? You can do
the exact same things, in the exact same way. Sure, we’ve improved the
UI and changed the names of some items, but the core functionality is
the same. We’re moving away from using the term Cattle, because now
Cattle is no different from Kubernetes in practice. It might be
confusing at first, but I assure you that a rose by any other name still
smells as sweet. If you’re someone who doesn’t care about Kubernetes,
then you can continue not caring about it. In Rancher 1.x, we deployed
Kubernetes into an environment as an add-on to Rancher. In 2.x, we
integrated Kubernetes with the Rancher server. It’s transparent, and
unless you go looking for it, you’ll never see it. What you will see
are features that didn’t exist in 1.x and that, frankly, we couldn’t
easily build on top of Docker because it doesn’t support them. Let’s
talk about those things, so you can be excited about what’s coming.

The Goodies

Here is a small list of the things that you can do with Rancher 2.x
without even knowing that Kubernetes exists.

Storage Volume Drivers

In Rancher 1.x, you were limited to named and anonymous Docker volumes,
bind-mounted volumes, EBS, NFS, and some vendor-specific storage
solutions (EMC, NetApp, etc.). In Rancher 2.x, you can leverage any
storage volume driver supported by Kubernetes. Out of the box, this
brings NFS, EBS, GCE, Glusterfs, vSphere, Cinder, Ceph, Azure Disk,
Azure File, Portworx, and more. As other vendors develop storage drivers
for Kubernetes, they will be immediately available within Rancher 2.x.
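
To make this concrete, here is a minimal, hedged sketch of the underlying Kubernetes
mechanism (not Rancher's own API): requesting storage through a StorageClass with the
official Kubernetes Python client. The class name "fast-rbd" is a hypothetical
Ceph-backed example; swapping the name is all it takes to target EBS, GCE PD, vSphere
and so on.

```python
from kubernetes import client, config

# Assumes a kubeconfig is available, e.g. one exported from a Rancher-managed cluster.
config.load_kube_config()
core = client.CoreV1Api()

# Request 10Gi from a hypothetical Ceph-backed StorageClass named "fast-rbd".
claim = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="fast-rbd",
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="default", body=claim)
```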

Host Multitenancy

In Rancher 1.x, an environment was a collection of hosts. No host could
exist in more than one environment, and this delineation wasn’t always
appropriate. In Rancher 2.x, we have a cluster, which is a collection of
hosts and, within that cluster, you can have an infinite number of
environments that span those hosts. Each environment comes with its own
role-based access control (RBAC), for granular control over who can
execute actions in each environment. Now you can reduce your footprint
of hosts and consolidate resources within environments.

Single-Serving Containers

In Rancher 1.x, you had to deploy everything within a stack, even if it
was a single service with one container. In Rancher 2.x, the smallest
unit of deployment is a container, and you can deploy containers
individually if you wish. You can promote them into services within a
common stack or within their own stacks, or you can promote them to
global services, deployed on every host.
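
For readers who do want to peek under the hood, a rough Kubernetes analogy (an
assumption about the mapping, not a statement of Rancher internals) is that an
individual container corresponds to a bare Pod, a service to a Deployment, and a
global service to a DaemonSet, which schedules one copy on every host. A minimal
sketch with the Kubernetes Python client, using a hypothetical agent image:

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# A "global service" behaves much like a Kubernetes DaemonSet:
# one copy of the container runs on every host in the cluster.
labels = {"app": "node-agent"}
daemon_set = client.V1DaemonSet(
    metadata=client.V1ObjectMeta(name="node-agent"),
    spec=client.V1DaemonSetSpec(
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(
                # Hypothetical image name, purely for illustration.
                containers=[client.V1Container(name="node-agent", image="example/node-agent:1.0")]
            ),
        ),
    ),
)
apps.create_namespaced_daemon_set(namespace="default", body=daemon_set)
```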

Afterthought Sidekicks

In Rancher 1.x, you had to define sidekicks at the time that you
launched the service. In Rancher 2.x, you can add sidekicks later and
attach them to any service.

Rapid Rollout of New Technology

When new technology like Istio or linkerd hits the community, we want to
support it as quickly as possible. In Rancher 1.x, there were times
where it was technologically impossible to support items because we were
built on top of Docker. By rebasing onto Kubernetes, we can quickly
deploy support for new technology and deliver on our promise of allowing
users to get right to work using technology without needing to do the
heavy lifting of installing and maintaining the solutions themselves.

Out-of-the-Box Metrics

In Rancher 1.x, you had to figure out how to monitor your services. We
have some monitoring extracted from Docker statistics, but it’s a
challenge to get those metrics out of Rancher and into something else.
Rancher 2.x ships with Heapster, InfluxDB, and Grafana, and these
provide per-node and per-pod metrics that are valuable for understanding
what’s going on in your environment. There are enhancements that you can
plug into these tools, like Prometheus and Elasticsearch, and those
enhancements have templates that make installation fast and easy.

Broader Catalog Support

The Catalog is one of the most popular items in Rancher, and it grows
with new offerings on a weekly basis. Kubernetes has its own
catalog-like service called Helm. In Rancher 1.x, if something wasn’t in
the Catalog, you had to build it yourself. In Rancher 2.x, we will
support our own Catalog, private catalogs, or Helm, giving you a greater
pool of pre-configured applications from which to choose.

We Still Support Compose

The option to import configuration from Docker Compose still exists.
This makes migrating into Rancher 2.x as easy as ever, either from a
Rancher 1.x environment or from a standalone Docker/Compose setup.

Phased Migration into Kubernetes

If you’re a community member who is interested in Kubernetes but has
shied away from it because of the learning curve, Rancher 2.x gives you
the ability to continue doing what you’re doing with Cattle and, at your
own pace, look at and understand how that translates to Kubernetes. You
can begin deploying Kubernetes resources directly when you’re ready.

What’s New for the Kubernetes Crowd?

If you’re part of our Kubernetes user base, or if you’re a Kubernetes
user who hasn’t yet taken Rancher for a spin, we have some surprises for
you as well.

Import Existing Kubernetes Clusters

This is one of the biggest new features in Rancher 2.x. If you like the
Rancher UI but already have Kubernetes clusters deployed elsewhere, you
can now import those clusters, as-is, into Rancher’s control and begin
to manage them and interact with them via our UI and API. This feature
is great for seamlessly migrating into Rancher, or for consolidating
management of disparate clusters across your business under a single
pane of glass.

Instant HA

If you deploy the Rancher server in High Availability (HA) mode, you
instantly get HA for Kubernetes.

Full Kubernetes Access

In Rancher 1.x, you could only interact with Kubernetes via the means
that Kubernetes allows — kubectl or the Dashboard. We were
hands-off. In Rancher 2.x, you can interact with your Kubernetes
clusters via the UI or API, or you can click the Advanced button,
grab the configuration for kubectl, and interact with them via that
means. The Kubernetes Dashboard is also available, secured behind
Rancher’s RBAC.
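
As an illustration of what "grab the configuration for kubectl" enables, here is a
minimal sketch that assumes you have saved that kubeconfig locally (the file path is
hypothetical) and uses the official Kubernetes Python client to list pods across the
cluster:

```python
import os

from kubernetes import client, config

# Load the kubeconfig exported from Rancher (hypothetical path).
config.load_kube_config(config_file=os.path.expanduser("~/.kube/rancher-cluster.yml"))

core = client.CoreV1Api()
for pod in core.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```

The same kubeconfig works with kubectl, Helm or any other tooling that speaks to the
Kubernetes API.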

Compose Translation

Do you want to set up a deployment from a README that includes a sample
Compose file? In Rancher 2.x, you can take that Compose file and apply
it, and we’ll convert it into Kubernetes resources. This conversion
isn’t just a 1:1 translation of Compose directives; this is us
understanding the intended output of the Compose file and creating that
within Kubernetes.
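
To give a feel for what such a translation involves, here is an illustrative sketch,
not Rancher's actual converter, that maps one tiny, hypothetical Compose service onto
a Kubernetes Deployment using the official Python client objects:

```python
from kubernetes import client

# Hypothetical Compose service, already parsed into a dict.
compose_service = {"name": "web", "image": "nginx:1.13", "ports": ["80"], "scale": 3}

def to_deployment(svc):
    """Sketch: map a single Compose service onto a Kubernetes Deployment."""
    labels = {"app": svc["name"]}
    container = client.V1Container(
        name=svc["name"],
        image=svc["image"],
        ports=[client.V1ContainerPort(container_port=int(p)) for p in svc.get("ports", [])],
    )
    return client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name=svc["name"]),
        spec=client.V1DeploymentSpec(
            replicas=svc.get("scale", 1),  # Compose "scale" becomes the replica count
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

deployment = to_deployment(compose_service)
```

A real converter also has to account for Compose networks, links and volumes (becoming
Services and PersistentVolumeClaims, for example), which is exactly the "intended
output" reasoning described above.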

It Really is This Awesome

I’ve been using Docker in production since 2013 and, during that time,
I’ve moved from straight Docker commands to an in-house deployment
utility that I wrote, to Docker Compose configs managed by Ansible, and
then to Rancher. Each of those stages in my progression was defined by
one requirement: the need to do more things faster and in a way that
could be automated. Rancher allows me to do 100x more than I could do
myself or with Compose, and removes the need for me to manage those
components. Over the year that I’ve been using Rancher, I’ve seen it
grow with one goal in mind: making things easy. Rancher 2.x steps up the
delivery of that goal with accomplishments that are amazing. Cattle
users still have the Cattle experience. Kubernetes users have greater
access to Kubernetes. Everyone has access to all the amazing work being
done by the community. Rancher still makes things easy and still
manages the infrastructure so that you can get right to deploying containers and getting work done.
I cannot wait to see where we go next.

About the Author

Adrian Goins is a
field engineer for Rancher Labs who resides in Chile and likes to put
out fires in his free time. He loves Rancher so much that he can’t
contain himself.


DockerCon EU Impressions

Friday, 20 October, 2017

I just came back from DockerCon EU. I have not met a more friendly and
helpful group of people than the users, vendors, and Docker employees at
DockerCon. It was a well-organized event and a fun
experience.

I went into the event with some
questions
about where Docker was headed. Solomon Hykes addressed these questions
in his keynote, which was the highlight of the entire show. Docker
embracing Kubernetes is clearly the single biggest piece of news coming
out of DockerCon.

If there’s one thing
Docker wanted the attendees to take away, it was the Modernize
Traditional Applications (MTA) program. The idea of MTA is simple:
package a traditional Windows or Linux app as a Docker container then
deploy the app on modern cloud infrastructure and achieve some savings.
By dedicating half of the day one keynote and the entire day two keynote
to this topic, Docker seems to have bet its entire business on this
single value proposition.

I am surprised, however, that MTA became the sole business case focus at DockerCon. The
DockerCon attendees I talked to expected Docker to outline a more
complete vision of business opportunities for Docker. MTA did not appeal
to the majority of DockerCon attendees. Even enterprise customers I met had
much bigger plans than MTA. I wish Docker had spent some time
reinforcing the value containers can deliver in transforming application
development, which is a much bigger business
opportunity.

MTA builds on the most basic capabilities of Docker as an application packaging format, a practice
that has existed since the very beginning of Docker. But what specific
features of Docker EE make MTA work better than before? Why is Docker
as a company uniquely positioned to offer a solution for MTA? What other
tools will customers need to complete the MTA journey? The MTA keynotes
left these and many other questions unanswered.

Beyond supporting Kubernetes, Docker made
no announcements that made Swarm more likely to stay relevant. As an
ecosystem partner, I find it increasingly difficult to innovate based on
Docker’s suite of technologies. I miss the days when Docker announced
great innovations like Docker Machine, Docker Swarm, Docker Compose,
Docker network and volume plugins, and all kinds of security-related
innovations. We all used to get busy working on these technologies the
very next day. There are still plenty of innovations in container
technologies today, but the innovations are happening in the Kubernetes
and CNCF ecosystem.

After integrating Kubernetes, I hope Docker can get back to producing more innovative
technologies. I have not seen many companies who possess as much
capacity to innovate and attention to usability as Docker. I look
forward to what Docker will announce at the next DockerCon.


New Research Shows Hybrid Cloud Takes Center Stage

Tuesday, 17 October, 2017


It’s probably not news to anyone that cloud computing has upended traditional IT and has continued to grow unabated for years. Many commentators have suggested that the term “cloud” will disappear from our lexicon as it has become completely ubiquitous. IT professionals must now defend a choice not to put a new workload into a cloud architecture (whether private or public). The reasons for this are well understood: cost savings, improved developer productivity, improved agility/innovation, and a trend towards data center consolidation. The latest global research report, SDI, Containers and DevOps – Cloud Adoption Trends Driving IT Transformation, sponsored by SUSE and released today, reveals this trend to be a worldwide phenomenon, but there are some changes afoot.

Customer Cloud Strategies Are Evolving

These latest insights suggest that as companies are becoming more comfortable with cloud technologies, they are re-shaping their corporate strategies and completely transforming their IT infrastructure to take advantage of these new capabilities. While many first experimented with public cloud, they have discovered that a portfolio of cloud capabilities is a complete necessity. Hybrid cloud has taken center stage, with 66% of respondents expecting to move more of their workloads into multi-cloud environments over the next two years. Over the same timeframe, private cloud is expected to grow by 55%. Interestingly, only 33% expected to be using more public cloud capabilities.

Why Do I Need a Hybrid Cloud Strategy?

Hybrid cloud offers the ability to place workloads on public and private infrastructure depending on where they make the most sense (financially/technically). Constantly improving orchestration and management suites are making this non-trivial process more palatable. Some businesses prefer to keep production workloads in their own private cloud for security, performance, reliability or data sovereignty reasons, while relying on public cloud for highly variable workloads (especially development). The latest research reveals that 43% of companies prefer private cloud while 42% prefer a hybrid approach – a fairly even split.

Where Does OpenStack Fit in Your Hybrid Strategy?

When you look at hybrid cloud strategies, the choices have narrowed significantly in the last few years. There are proprietary solutions that tend to be expensive and less flexible. In contrast, OpenStack has become the de facto standard for private cloud deployments, with 1 in 4 companies in this study having already deployed OpenStack in production and a total of 82% of companies stating they are using it or have plans to use it within the next 12 months. This is not surprising, because OpenStack can be used in all aspects of the hybrid cloud, including public and private implementations, to enable easy migration. Most new technologies on the leading edge (including containers, DevOps, Kubernetes automation, network function virtualization and analytics) have solutions available to those adopting OpenStack as a standard.

Conclusion: Hybrid Cloud has Moved Beyond the Hype

With a high number of companies implementing or planning to develop a hybrid cloud strategy, we’ve moved beyond the infamous ‘hype cycle’ to real production on a massive scale. Thousands of open source developers are building next-generation technology on the best open source cloud platform available. OpenStack has become the integration engine combining existing workloads and investments with the future of IT transformation.

No matter where you are looking to expand your cloud adoption – public or private – SUSE Cloud Solutions can help you get there faster.