Moving to cloud computing, we are told, is a “journey,” not an event. Enterprises are on that journey, and “hybrid cloud” is an intermediate stage – a kind of stepping stone between old-school on-premises tech and the exciting new world of public cloud where the digital unicorns live. Hybrid cloud is a “technology layover” breaking your long-distance flight to an exotic IT destination.
Beyond being an intermediate step, hybrid cloud isn’t particularly well defined. Ask a random selection of three CIOs and each would likely explain it differently. It’s a bit like asking three people to imagine a farmyard animal: one thinks “pig,” one thinks “hen” and the other thinks “cow.” All three are right, but all three are imagining something very different. The National Institute of Standards and Technology (NIST) has given us an official hybrid cloud definition, but not everyone finds it helpful. Lauren Nelson, principal analyst at Forrester, described the definition as “far from reality.” We’re at the top of the hype cycle and Nelson was making a fair point: NIST’s definition calls for active bursting from one environment into another, and while most enterprises would see themselves as hybrid, cross-environment bursting is in practice nearly as rare as real unicorns.
Hybrid Cloud = Multi-Cloud
For most enterprises, in fact, hybrid really means multi-cloud. They’ve got some applications and servers in the cloud, most often using both AWS and Azure, combined with applications running in their own on-premises data centers. Enterprises end up using multiple vendors for different projects, often because different business units met their goals with different technologies, and because different IT disciplines have driven in opposing directions: developers who like AWS, and operations teams who like a standardized infrastructure that looks exactly like their existing on-premises environment. There’s no sign of this mélange going away – if anything, the speed at which we’re trying to do things, along with competing internal imperatives and stakeholders, will likely mean the situation persists or even worsens.
Anyone who has tried to move applications and their associated data between environments knows it’s far from easy. And until you can do this easily, hybrid is going to fall well short of the NIST definition – not least because public cloud providers are far from enthusiastic about enabling interfaces between different clouds. This lack of enthusiasm for interoperability stems from the tech providers’ natural drive toward proprietary lock-in: making it hard for customers to change suppliers so they can continue to ratchet up costs. Enterprises, for their part, try to avoid being locked in so they can maintain agility and control costs.
Hybrid Cloud – The Never-Ending Journey
Meanwhile, the volume of data we’re all storing is booming, creating budgetary pressures. Cloud storage comes with its own complicated pricing: the initial cost to store the data is low, but moving that data back out across the wire can get very expensive. As a result, we’re forced to do a whole new kind of math to work out our true storage costs. Not only will the hybrid situation persist, it may never go away. The nature of enterprises, both public and private, is that they are subject to change: merger, acquisition, closure, launch – and all of these things now have a cloud dimension. Hybrid isn’t a stepping stone at all; it is the journey, and the journey never ends.
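To see why that new math matters, here’s a minimal sketch of a monthly cost model. The per-GB prices are illustrative assumptions, not any provider’s actual rates – the point is the shape of the bill, not the numbers:

```python
def monthly_storage_cost(stored_gb, egress_gb,
                         store_per_gb=0.023, egress_per_gb=0.09):
    """Rough monthly cloud storage cost: cheap at-rest storage plus
    per-GB charges for data moved out across the wire.
    Both rates are illustrative assumptions, not real vendor pricing."""
    return stored_gb * store_per_gb + egress_gb * egress_per_gb

# Keeping 10 TB at rest looks inexpensive...
at_rest = monthly_storage_cost(stored_gb=10_000, egress_gb=0)

# ...but pulling that same 10 TB back out (say, for a restore or a
# migration to another cloud) multiplies the bill several times over.
with_restore = monthly_storage_cost(stored_gb=10_000, egress_gb=10_000)
```

Under these assumed rates, the restore month costs several times the at-rest month – which is exactly why egress fees dominate the “true cost” calculation and reinforce lock-in.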
So, if you’re going to stay on top of your infrastructure and avoid vendor lock-in, you need to make applications portable between clouds and your data center. That means an approach to storage that lets you support and replicate your applications’ data no matter where they run – one that bridges your core data center and the cloud. You’ll want an approach with minimal disruption in terms of deployment and end-user training, and you’ll want it at an affordable price.
Why Software-Defined Storage?
Open source software-defined storage is a great fit for hybrid and multi-cloud because it provides a unified and centralized storage management solution that extends from your data center to the cloud. It provides a way to ensure compatibility between your core data center storage and cloud storage while letting you use a single tool for deployment and management, helping to minimize disruptions.
Software-defined storage is particularly useful in hybrid cloud environments when used for cloud backups. It makes it easy to back up application data to low-cost, off-site public cloud resources, improving data protection. You can also scale your storage environment without increasing CAPEX by quickly and seamlessly adding cloud resources to your storage infrastructure when needed.
SUSE helps our customers meet their hybrid and multi-cloud storage needs with intelligent open source software-defined storage solutions, powered by Ceph. These enable you to transform your storage infrastructure to manage growth and complexity, ensuring your applications have access to the storage they need, when and where they need it. To learn more about how SUSE can help you manage the data explosion, visit suse.com/programs/data-explosion/