Updates without downtime: How Salzgitter AG benefits from SUSE Linux Enterprise Live Patching

Monday, June 3, 2019

In round-the-clock steel production there is never a good time for an IT outage. The Salzgitter Group therefore runs its business-critical SAP applications with SUSE Linux Enterprise Live Patching. System administrators at the IT service provider GESIS can use it to install security patches for the Linux kernel without having to take systems offline. In this way, the Salzgitter Group not only maximizes the security of its infrastructure, it also avoids up to 125 hours of cumulative downtime with every single patch.

The global market for steel products continues to grow, driven above all by developments in many emerging economies. Rising industrial production, increasing urbanization, and ambitious infrastructure projects in countries such as Brazil and Russia have been pushing demand for steel products up for years.

For Salzgitter AG, one of Europe's largest steel producers, this is good news. In recent years the group has grown its revenue to nine billion euros and today employs more than 25,000 people. By 2021, Salzgitter expects additional revenue growth of more than 250 million euros per year.

Credits: Salzgitter AG

The challenge: A booming global business leaves little room for maintenance windows

With its global production and service network, the Salzgitter Group has clear advantages over its competitors. The group operates production sites all over the world, including in Germany, the USA, Mexico, Brazil, India, and China. To keep production lines running around the clock and deliver finished products to customers without delay, the company relies on powerful IT systems. Today, Salzgitter uses business-critical SAP applications to control its highly automated and tightly interconnected production and logistics processes.

Thomas Lowin, Server Operations Lead at GESIS, the internal IT service provider of Salzgitter AG, says: "To support 24/7 production, the Salzgitter Group depends on the constant availability and robust service of its IT systems. Every minute of unplanned downtime would cost thousands of euros and jeopardize complex production schedules for different steel products."

Salzgitter has therefore reduced its maintenance windows to a minimum in recent years. Some particularly critical applications were most recently taken offline for maintenance only once a year. As a result, known issues often could not be fixed for extended periods. It was also difficult to keep pace with growing security requirements: "Given the rise in cybercrime in recent years, it is more important than ever to patch software regularly to ensure maximum protection against external threats," says Thomas Lowin.

The solution: SUSE Linux Enterprise Live Patching

In this situation, the Salzgitter Group's IT service provider turned to SUSE, one of its most important technology partners. GESIS has been running its SAP ERP applications and SAP HANA databases on SUSE Linux Enterprise Server for SAP Applications for several years. "When we found out that SUSE had developed a complementary live patching solution, we were very interested in testing it," reports Thomas Lowin.

SUSE Linux Enterprise Live Patching allows system administrators to install Linux kernel fixes without interrupting services and without rebooting systems. Business applications can thus keep running even during critical kernel security updates. SUSE Linux Enterprise Live Patching is based on the kGraft Linux kernel technology, which SUSE developed together with the Linux community.

GESIS worked with a SUSE team to explore the capabilities of SUSE Linux Enterprise Live Patching. Seamless integration with the company's existing SUSE Manager solution then simplified the rollout across all production systems.

Nicolas Otten, systems analyst and engineer at GESIS, recalls: "We were pleasantly surprised by how quickly and easily we could perform live kernel patching for SUSE Linux Enterprise Server. We had everything up and running within a few hours. As usual, the support we received from SUSE was exceptional."

The result: Maximum availability with improved system security

With SUSE Linux Enterprise Live Patching, GESIS has all but eliminated the need for system outages. Today the company can deploy software updates and security patches as needed, without interrupting operations. By eliminating maintenance windows, it avoids roughly 125 hours of cumulative downtime across all SAP applications for every major patch.

The IT specialists can now carry out security patching during business hours and no longer have to keep weekends free for it. The roughly 20,000 IT users at Salzgitter no longer notice the updates at all and can keep working without interruption. GESIS estimates that this has cut the communication and administration effort for patching by around 50 percent.

"The key point is that we can patch systems much more frequently than before, which helps keep them secure," summarizes Nicolas Otten. "We now typically patch all systems every six to eight weeks. It is also reassuring to know that we can patch a security vulnerability immediately when it appears." This also gives GESIS an important foundation for IT operations certifications: the ability to patch faster makes it easier to comply with the international information security standard ISO 27001, among others.

Download the full case study now to learn more about how Salzgitter AG uses SUSE solutions.

IDC study Data Center Trends 2019: How companies are making their data centers fit for the future

Friday, May 3, 2019

What are currently the most important challenges and success factors in data center modernization? The market analysts at IDC set out to answer this question and surveyed IT and business decision-makers from 210 German companies.

Higher productivity, consolidation of resources, improved security and compliance: these are the most important reasons for companies to invest in new data center infrastructure today. But how can these goals actually be achieved in practice? And what must a modern data center deliver to optimally support changing business requirements?

The IDC experts put these questions to a total of 210 decision-makers from companies in Germany with more than 500 employees. The results of the study are now available and provide interesting insights into current data center strategies and implementation plans.

Five recommendations for better data centers

From the results of the study, the experts derived concrete recommendations for action. As a general rule, they advise companies not to chase short-lived trends but to introduce new technologies into the data center intelligently. This also means optimizing operational processes and always keeping business goals in sight.

Above all, the analysts give decision-makers five tips:

  • Put your data center strategy and IT architecture to the test: Decision-makers should honestly ask themselves whether the current strategy still fits the company. Cloud platforms, colocation models, and the trend toward edge computing open up many new options for the future.
  • Evaluate SDI: The degree of virtualization in many data centers is already very high. However, it is often worth going one step further toward software-defined infrastructure and decoupling workloads completely from the hardware.
  • Consider a cloud-first approach: Traditional data center infrastructures often can no longer keep pace with data-intensive applications such as analytics, machine learning, and business intelligence. These areas are therefore well suited as a starting point for a cloud-first strategy.
  • Automate, integrate, and orchestrate your IT landscape comprehensively: In a modern data center, management processes run largely automatically. Decision-makers should go beyond infrastructure administration and bring agile DevOps methods into operations.
  • Use open source deliberately in the data center: The study shows that more than three quarters of respondents already use one or more open-source solutions in their data center. In IDC's view, this is the right path: open-source software now forms the basis for innovative applications in many cases.

 

Container technologies on the rise

One interesting finding of the 2019 data center study: 85 percent of respondents are already using container technologies or are currently evaluating them. For the IDC experts, this comes as no surprise. The analysts attest that container technology offers "clear added value in the context of automation and in combination with DevOps." However, using containers is not a sure-fire success: not all companies already have an optimal infrastructure for running containers.

SUSE addresses exactly this challenge with the SUSE CaaS Platform, a container management platform for business applications. It enables IT and DevOps experts to deploy, manage, and scale container-based applications and services more easily. SUSE CaaS Platform includes Kubernetes for the automated management of containerized applications across their entire lifecycle. In addition, the platform offers extra capabilities such as a complete container runtime environment and tools for data center integration. This makes it very easy to connect Kubernetes to new or existing infrastructures, systems, and processes.

If you would like to learn more about the Data Center Trends 2019 and IDC's recommendations for a future-proof strategy, download the study now.

Secure digitalization for the midmarket: Experts answer ten of the most important questions in our whitepaper

Wednesday, April 17, 2019

Digitalization is also picking up speed in midsize companies. Decision-makers have now recognized the great potential of digital innovations such as AI and IoT. When it comes to concrete implementation, however, they still have many questions. We therefore asked experts and thought leaders for their assessments. You can read their answers to ten key questions in our new whitepaper.

One question currently being passionately debated in many organizations is, for example, the right strategic approach to digitalization. Should companies start with manageable projects or aim straight for the big picture? Christoph Maier, CEO of Thomas Krenn AG, recommends that decision-makers proceed step by step in any case: "That not only makes the transformation easier, it also goes more positively overall. Smaller projects generate quick wins and with them understanding among employees."

Michael C. Reiserer, consultant at the start-up ApiOmat, also advises starting small and measuring success consistently: "If something is not successful, you have to be willing to take a step back." It is important, however, not to lose sight of the big vision. "Entrepreneurs have to ask themselves honestly: 'Am I willing to do without hierarchies? Am I willing to listen and to involve customers and partners?' These indicators show whether a company really has the will to digitalize," says Michael C. Reiserer.

According to the experts, a key factor for the success of any digitalization strategy is the mindset of the people involved: "The basic rule is: no digitalization without employees," says Dr. Kay Müller-Jones, Head of Consulting & Services Integration at Tata Consulting Services. "So it is a matter of motivating and integrating employees. Change management is therefore an important competence for companies."

From innovative idea to digital business model

Digital transformation thus begins in people's heads and at the organizational level. But how can it be used to create real added value? Jürgen Bähr, managing director at G+H, describes this with a simple example from practice. A facility management provider had previously recorded building damage the classic way, with pen and paper. "Our team for custom application development and digitalization built the customer an app that caretakers can now use anywhere," reports Bähr. "They photograph damage with their smartphone and write a short report directly in the app. The issue is immediately recorded centrally."

The next step is to turn innovative ideas into new business models. The first question here is how digital value creation can be quantified at all. Peter Weisbach, Executive NEXT at the IT service provider Bechtle, names the degree of automation, the ability to analyze the data gained, and not least scalability as the most important metrics: "If I can provide a service with one employee today and the same service can be scaled many times over via a portal tomorrow, then I am creating measurable value."

You will find more ideas and inspiration for a successful digitalization strategy in our whitepaper. Among other things, you will learn:

  • which competencies really matter for digitalization in the midmarket,
  • where the biggest differences between digital and analog business lie,
  • why digital transformation often exposes gaps between legacy systems and innovative technologies,
  • how enterprise open source can bridge the divides between different IT worlds,
  • which seven recommendations the digitalization experts give companies to take along the way.


Download the whitepaper "Sicher digitalisieren im Mittelstand"

An Introduction to Big Data Concepts

Wednesday, March 27, 2019

Gigantic amounts of data are being generated at high speed by a variety of sources such as mobile devices, social media, machine logs, and the many sensors surrounding us. All around the world, we produce vast amounts of data, and the volume of generated data is growing exponentially at an unprecedented rate. The pace of data generation is being further accelerated by the growth of new technologies and paradigms such as the Internet of Things (IoT).

What is Big Data and How Is It Changing?

The definition of big data is hidden in the dimensions of the data. Data sets are considered "big data" if they have a high degree of the following three distinct dimensions: volume, velocity, and variety. Value and veracity are two other "V" dimensions that have been added to the big data literature in recent years. Additional Vs are frequently proposed, but these five Vs are widely accepted by the community and can be described as follows:

  • Velocity: the speed at which the data is being generated
  • Volume: the amount of data that is being generated
  • Variety: the diversity or different types of the data
  • Value: the worth or usefulness of the data
  • Veracity: the quality, accuracy, or trustworthiness of the data

Large volumes of data are generally available in either structured or unstructured formats. Structured data can be generated by machines or humans, has a specific schema or model, and is usually stored in databases. Structured data is organized around schemas with clearly defined data types. Numbers, dates and times, and strings are a few examples of structured data that may be stored in database columns. Unstructured data, in contrast, does not have a predefined schema or model. Text files, log files, social media posts, mobile data, and media are all examples of unstructured data.
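
As a minimal illustration of the difference, here is the same hypothetical event in both forms; the record fields and the log line are invented for this sketch.

```python
# Hypothetical example: the same "event" as structured and unstructured data.

# Structured: a record with a fixed schema and clearly defined data types,
# as it might be stored in a database row.
structured_event = {
    "event_id": 1042,                      # integer
    "timestamp": "2019-03-27T10:15:00Z",   # date/time as an ISO 8601 string
    "user": "alice",                       # string
    "amount_eur": 19.99,                   # number
}

# Unstructured: the same information buried in free-form log text with no
# predefined schema; a parser would have to extract the fields itself.
unstructured_event = (
    "2019-03-27 10:15:00 UTC - user alice completed a purchase of 19.99 EUR "
    "(order confirmed, see receipt #1042)"
)
```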

Based on a report provided by Gartner, an international research and consulting organization, the application of advanced big data analytics is part of the Gartner Top 10 Strategic Technology Trends for 2019, and is expected to drive new business opportunities. The same report also predicts that more than 40% of data science tasks will be automated by 2020, which will likely require new big data tools and paradigms.

According to an infographic provided by DOMO, global internet usage had reached 47% of the world's population by 2017. This indicates that an increasing number of people are starting to use mobile phones and that more and more devices are being connected to each other via smart cities, wearable devices, the Internet of Things (IoT), fog computing, and edge computing paradigms. As internet usage spikes and other technologies such as social media, IoT devices, mobile phones, and autonomous devices (e.g., robotics, drones, vehicles, and appliances) continue to grow, our lives will become more connected than ever and generate unprecedented amounts of data, all of which will require new technologies for processing.

The Scale of Data Generated by Everyday Interactions

At a large scale, the data generated by everyday interactions is staggering. Based on research conducted by DOMO, for every minute in 2018, Google conducted 3,877,140 searches, YouTube users watched 4,333,560 videos, Twitter users sent 473,400 tweets, Instagram users posted 49,380 photos, Netflix users streamed 97,222 hours of video, and Amazon shipped 1,111 packages. This is just a small glimpse of a much larger picture involving other sources of big data. It seems like the internet is pretty busy, doesn't it? Moreover, mobile traffic is expected to grow tremendously past its present numbers, and the world's internet population is growing significantly year over year. By 2020, the report anticipates that 1.7MB of data will be created per person per second. Big data is getting even bigger.

At a small scale, the data generated on a daily basis by a small business, a start-up company, or a single sensor such as a surveillance camera is also huge. For example, a typical IP camera in a surveillance system at a shopping mall or a university campus generates 15 frames per second and requires roughly 100 GB of storage per day. Consider the storage and computing requirements if those camera numbers are scaled to tens or hundreds.
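
To put that scaling into perspective, here is a small back-of-the-envelope sketch. The per-camera figure comes from the paragraph above; the fleet sizes and the 30-day retention period are hypothetical.

```python
# Rough storage estimate for a surveillance deployment.
# Assumption from the text: one IP camera produces ~100 GB of video per day.
GB_PER_CAMERA_PER_DAY = 100

def storage_needed_tb(cameras: int, days: int) -> float:
    """Return the total storage in terabytes for `cameras` over `days`."""
    total_gb = cameras * days * GB_PER_CAMERA_PER_DAY
    return total_gb / 1000  # convert GB to TB (decimal units)

# Hypothetical fleet sizes: a single camera, a small mall, a campus.
for cameras in (1, 50, 200):
    print(f"{cameras:>4} cameras, 30 days: {storage_needed_tb(cameras, 30):,.1f} TB")
# 1 camera   ->   3.0 TB
# 50 cameras -> 150.0 TB
# 200 cameras-> 600.0 TB
```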

Big Data in the Scientific Community

Scientific projects such as CERN, which conducts research on what the universe is made of, also generate massive amounts of data. The Large Hadron Collider (LHC) at CERN is the world’s largest and most powerful particle accelerator. It consists of a 27-kilometer ring of superconducting magnets along with some additional structures to accelerate and boost the energy of particles along the way.

While the accelerator is running, particles collide with the LHC detectors roughly 1 billion times per second, which generates around 1 petabyte of raw digital "collision event" data per second. This unprecedented volume of data is a great challenge that cannot be resolved with CERN's current infrastructure. To work around this, the generated raw data is filtered and only the "important" events are processed to reduce the volume of data. Consider the challenging processing requirements for this task.

The four big LHC experiments, named ALICE, ATLAS, CMS, and LHCb, are among the biggest generators of data at CERN, and the rate of the data processed and stored on servers by these experiments is expected to reach about 25 GB/s (gigabytes per second). As of June 29, 2017, the CERN Data Center announced that it had passed the milestone of 200 petabytes of data archived permanently in its storage units.
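
Just to make the scale of that filtering concrete, here is a quick sketch using the figures quoted above (a raw rate of about 1 PB/s and a stored rate of about 25 GB/s); the numbers are rounded and the calculation is purely illustrative.

```python
# Illustrative ratio between raw collision data and what is actually kept.
RAW_RATE_GB_PER_S = 1_000_000   # ~1 petabyte per second, expressed in GB
STORED_RATE_GB_PER_S = 25       # ~25 GB/s reaching permanent storage

reduction_factor = RAW_RATE_GB_PER_S / STORED_RATE_GB_PER_S
kept_fraction = STORED_RATE_GB_PER_S / RAW_RATE_GB_PER_S

print(f"Only about 1 in {reduction_factor:,.0f} bytes is kept "
      f"({kept_fraction:.6%} of the raw stream).")
# Only about 1 in 40,000 bytes is kept (0.002500% of the raw stream).
```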

Why Big Data Tools are Required

The scale of the data generated by well-known corporations, smaller organizations, and scientific projects is growing at an unprecedented rate. This can be clearly seen in the scenarios above, and the scale of this data is only getting bigger.

On the one hand, the mountain of the data generated presents tremendous processing, storage, and analytics challenges that need to be carefully considered and handled. On the other hand, traditional Relational Database Management Systems (RDBMS) and data processing tools are not sufficient to manage this massive amount of data efficiently when the scale of data reaches terabytes or petabytes. These tools lack the ability to handle large volumes of data efficiently at scale. Fortunately, big data tools and paradigms such as Hadoop and MapReduce are available to resolve these big data challenges.

Analyzing big data and gaining insights from it can help organizations make smart business decisions and improve their operations. This can be done by uncovering hidden patterns in the data and using them to reduce operational costs and increase profits. Because of this, big data analytics plays a crucial role for many domains such as healthcare, manufacturing, and banking by resolving data challenges and enabling them to move faster.

Big Data Analytics Tools

Since the compute, storage, and network requirements for working with large data sets are beyond the limits of a single computer, there is a need for paradigms and tools to crunch and process data through clusters of computers in a distributed fashion. More and more computing power and massive storage infrastructure are required for processing this massive data either on-premise or, more typically, at the data centers of cloud service providers.

In addition to the required infrastructure, various tools and components must be brought together to solve big data problems. The Hadoop ecosystem is just one of the platforms helping us work with massive amounts of data and discover useful patterns for businesses.

Below is a list of some of the tools available and a description of their roles in processing big data:

  • MapReduce: MapReduce is a distributed computing paradigm developed to process vast amounts of data in parallel by splitting a big task into smaller map- and reduce-oriented tasks (see the word-count sketch after this list).
  • HDFS: The Hadoop Distributed File System is a distributed storage and file system used by Hadoop applications.
  • YARN: The resource management and job scheduling component in the Hadoop ecosystem.
  • Spark: A real-time in-memory data processing framework.
  • PIG/HIVE: SQL-like scripting and querying tools for data processing and simplifying the complexity of MapReduce programs.
  • HBase, MongoDB, Elasticsearch: Examples of a few NoSQL databases.
  • Mahout, Spark ML: Tools for running scalable machine learning algorithms in a distributed fashion.
  • Flume, Sqoop, Logstash: Data integration and ingestion of structured and unstructured data.
  • Kibana: A tool to visualize Elasticsearch data.
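
To make the map/reduce split concrete, here is a minimal, single-machine sketch of the classic word-count pattern that MapReduce frameworks run in a distributed fashion. It only illustrates the programming model, not Hadoop's actual API, and the sample documents are invented.

```python
from collections import defaultdict
from itertools import chain

documents = [
    "big data needs big tools",
    "tools process big data",
]

# Map phase: each document is turned into (word, 1) pairs independently,
# which is why this step can run in parallel on many machines.
def map_words(doc: str):
    return [(word, 1) for word in doc.split()]

mapped = list(chain.from_iterable(map_words(doc) for doc in documents))

# Shuffle phase: group intermediate pairs by key (the word).
grouped = defaultdict(list)
for word, count in mapped:
    grouped[word].append(count)

# Reduce phase: combine the values for each key into a final result.
word_counts = {word: sum(counts) for word, counts in grouped.items()}
print(word_counts)  # {'big': 3, 'data': 2, 'needs': 1, 'tools': 2, 'process': 1}
```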

Conclusion

To summarize, we are generating massive amounts of data in our everyday lives, and that volume is continuing to rise. Having the data alone does not improve an organization unless the data is analyzed and its value for business intelligence is discovered. It is not possible to mine and process this mountain of data with traditional tools, so we use big data pipelines to help us ingest, process, analyze, and visualize these tremendous amounts of data.

Learn to deploy databases in production on Kubernetes

For more training in big data and database management, watch our free online training on successfully running a database in production on Kubernetes.


Considerations When Designing Distributed Systems

Monday, March 11, 2019

Introduction

Today’s applications are marvels of distributed systems development. Each function or service that makes up
an application may be executing on a different system, built on a different system architecture, housed in a
different geographical location, and written in a different programming language. Components of today’s
applications might be hosted on a powerful system carried in the owner’s pocket and communicating
with application components or services that are replicated in data centers all over the world.

What’s amazing about this is that individuals using these applications typically are not aware of the
complex environment that responds to their request for the local time, the local weather, or directions to
their hotel.

Let’s pull back the curtain and look at the industrial sorcery that makes this all possible and contemplate
the thoughts and guidelines developers should keep in mind when working with this complexity.

The Evolution of System Design


Figure 1: Evolution of system design over time

Source: Interaction Design Foundation, The
Social Design of Technical Systems: Building technologies for communities

Application development has come a long way from the time that programmers wrote out applications, hand
compiled them into the language of the machine they were using, and then entered individual machine
instructions and data directly into the computer’s memory using toggle switches.

As processors became more and more powerful, system memory and online storage capacity increased, and
computer networking capability dramatically increased, approaches to development also changed. Data can now
be transmitted from one side of the planet to the other faster than it used to be possible for early
machines to move data from system memory into the processor itself!

Let’s look at a few highlights of this amazing transformation.

Monolithic Design

Early computer programs were based upon a monolithic design, with all of the application components
architected to execute on a single machine. This meant that functions such as the user interface (if users
were actually able to interact with the program), application rules processing, data management, storage
management, and network management (if the computer was connected to a computer network) were all contained
within the program.

While simpler to write, these programs became increasingly complex, difficult to document, and hard to update
or change. At this time, the machines themselves represented the biggest cost to the enterprise and so
applications were designed to make the best possible use of the machines.

Client/Server Architecture

As processors became more powerful, system and online storage capacity increased, and data communications
became faster and more cost-efficient, application design evolved to match pace. Application logic was
refactored or decomposed, allowing each component to execute on a different machine, and the ever-improving
network was inserted between the components. This allowed some functions to migrate to the lowest-cost computing
environment available at the time. The evolution flowed through the following stages:

Terminals and Terminal Emulation

Early distributed computing relied on special-purpose user access devices called terminals. Applications had
to understand the communications protocols they used and issue commands directly to the devices. When
inexpensive personal computing (PC) devices emerged, the terminals were replaced by PCs running a terminal
emulation program.

At this point, all of the components of the application were still hosted on a single mainframe or
minicomputer.

Light Client

As PCs became more powerful, supported larger internal and online storage, and network performance increased,
enterprises segmented or factored their applications so that the user interface was extracted and executed
on a local PC. The rest of the application continued to execute on a system in the data center.

Often these PCs were less costly than the terminals that they replaced, and they offered additional
benefits. These PCs were multi-functional devices that could run office productivity applications which
weren’t available on the terminals they replaced. This combination drove enterprises to move to
client/server application architectures when they updated or refreshed their applications.

Midrange Client

PC evolution continued at a rapid pace. Once more powerful systems with larger storage capacities were
available, enterprises took advantage of them by moving even more processing away from the expensive systems
in the data center out to the inexpensive systems on users’ desks. At this point, the user interface and
some of the computing tasks were migrated to the local PC.

This allowed the mainframes and minicomputers (now called servers) to have a longer useful life, thus
lowering the overall cost of computing for the enterprise.

Heavy client

As PCs became more and more powerful, more application functions were migrated from the backend servers. At
this point, everything but data and storage management functions had been migrated.

Enter the Internet and the World Wide Web

The public internet and the World Wide Web emerged at this time. Client/server computing continued to be
used. In an attempt to lower overall costs, some enterprises began to re-architect their distributed
applications so they could use standard internet protocols to communicate and substituted a web browser for
the custom user interface function. Later, some of the application functions were rewritten in JavaScript so
that they could execute locally on the client’s computer.

Server Improvements

Industry innovation wasn’t focused solely on the user side of the communications link. A great deal of
improvement was made to the servers as well. Enterprises began to harness together the power of many
smaller, less expensive industry standard servers to support some or all of their mainframe-based functions.
This allowed them to reduce the number of expensive mainframe systems they deployed.

Soon, remote PCs were communicating with a number of servers, each supporting their own component of the
application. Special-purpose database and file servers were adopted into the environment. Later, other
application functions were migrated into application servers.

Networking was another area of intense industry focus. Enterprises began using special-purpose networking
servers that provided firewalls and other security functions, file caching to accelerate data access for
their applications, email, web serving, web applications, and distributed name services that kept track of
and controlled user credentials for data and application access. The list of networking services that has
been encapsulated in appliance servers grows all the time.

Object-Oriented Development

The rapid change in PC and server capabilities combined with the dramatic price reduction for processing
power, memory, and networking had a significant impact on application development. No longer were hardware
and software the biggest IT costs. The largest costs were communications, IT services (the staff), power,
and cooling.

Software development, maintenance, and IT operations took on a new importance and the development process was
changed to reflect the new reality that systems were cheap and people, communications, and power were
increasingly expensive.


Figure 2: Worldwide IT spending forecast

Source: Gartner Worldwide IT
Spending Forecast, Q1 2018

Enterprises looked to improved data and application architectures as a way to make the best use of their
staff. Object-oriented applications and development approaches were the result. Many programming languages
such as the following supported this approach:

  • C++
  • C#
  • COBOL
  • Java
  • PHP
  • Python
  • Ruby

Application developers were forced to adapt by becoming more systematic when defining and documenting data
structures. This approach also made maintaining and enhancing applications easier.

Open-Source Software

Opensource.com offers the following definition for open-source
software: “Open source software is software with source code that anyone can inspect, modify, and enhance.”
It goes on to say that, “some software has source code that only the person, team, or organization who
created it — and maintains exclusive control over it — can modify. People call this kind of software
‘proprietary’ or ‘closed source’ software.”

Only the original authors of proprietary software can legally copy, inspect, and alter that software. And in
order to use proprietary software, computer users must agree (often by accepting a license displayed the
first time they run this software) that they will not do anything with the software that the software’s
authors have not expressly permitted. Microsoft Office and Adobe Photoshop are examples of proprietary
software.

Although open-source software has been around since the very early days of computing, it came to the
forefront in the 1990s when complete open-source operating systems, virtualization technology, development
tools, database engines, and other important functions became available. Open-source technology is often a
critical component of web-based and distributed computing. Among others, the open-source offerings in the
following categories are popular today:

  • Development tools
  • Application support
  • Databases (flat file, SQL, No-SQL, and in-memory)
  • Distributed file systems
  • Message passing/queueing
  • Operating systems
  • Clustering

Distributed Computing

The combination of powerful systems, fast networks, and the availability of sophisticated software has driven
major application development away from monolithic towards more highly distributed approaches. Enterprises
have learned, however, that sometimes it is better to start over than to try to refactor or decompose an
older application.

When enterprises undertake the effort to create distributed applications, they often discover a few pleasant
side effects. A properly designed application that has been decomposed into separate functions or services
can be developed by separate teams in parallel.

Rapid application development and deployment, also known as DevOps, emerged as a way to take advantage of the
new environment.

Service-Oriented Architectures

As the industry evolved beyond client/server computing models to an even more distributed approach, the
phrase “service-oriented architecture” emerged. This approach was built on distributed systems concepts,
standards in message queuing and delivery, and XML messaging as a standard approach to sharing data and data
definitions.

Individual application functions are repackaged as network-oriented services: each receives a message
requesting that it perform a specific service, performs that service, and sends the response back to the
function that requested the service.

This approach offers another benefit, the ability for a given service to be hosted in multiple places around
the network. This offers both improved overall performance and improved reliability.

Workload management tools were developed that receive requests for a service, review the available capacity,
forward the request to the service instance with the most available capacity, and then send the response
back to the requester. If a specific instance doesn’t respond in a timely fashion, the workload manager
simply forwards the request to another instance of the service. It also marks the instance that didn’t
respond as failed and sends it no further requests until it receives a message indicating that the instance
is alive and healthy again.
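
A minimal sketch of that routing-plus-failover idea, with hypothetical service instances and a toy capacity metric (nothing here reflects a specific workload management product):

```python
class ServiceInstance:
    """One hypothetical instance of a service, with a toy capacity metric."""
    def __init__(self, name: str, free_capacity: int, healthy: bool = True):
        self.name = name
        self.free_capacity = free_capacity  # higher means more headroom
        self.healthy = healthy

    def handle(self, request: str) -> str:
        if not self.healthy:
            raise TimeoutError(f"{self.name} did not respond")
        return f"{self.name} handled {request!r}"

class WorkloadManager:
    """Routes each request to the healthy instance with the most free capacity."""
    def __init__(self, instances):
        self.instances = instances

    def dispatch(self, request: str) -> str:
        candidates = sorted((i for i in self.instances if i.healthy),
                            key=lambda i: i.free_capacity, reverse=True)
        for instance in candidates:
            try:
                return instance.handle(request)
            except TimeoutError:
                instance.healthy = False  # stop routing here until it reports healthy again
        raise RuntimeError("no healthy instance available")

manager = WorkloadManager([
    ServiceInstance("pricing-1", free_capacity=2, healthy=False),  # simulated failure
    ServiceInstance("pricing-2", free_capacity=8),
])
print(manager.dispatch("GET /price/42"))   # routed to pricing-2
```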

What Are the Considerations for Distributed Systems

Now that we’ve walked through over 50 years of computing history, let’s consider some rules of thumb for
developers of distributed systems. There’s a lot to think about because a distributed solution is likely to
have components or services executing in many places, on different types of systems, and messages must be
passed back and forth to perform work. Care and consideration are absolute requirements to be successful
creating these solutions. Expertise must also be available for each type of host system, development tool,
and messaging system in use.

Nailing Down What Needs to Be Done

One of the first things to consider is what needs to be accomplished! While this sounds simple, it’s
incredibly important.

It’s amazing how many developers start building things before they know, in detail, what is needed. Often,
this means that they build unnecessary functions and waste their time. To quote Yogi Berra, “if you don’t
know where you are going, you’ll end up someplace else.”

A good place to start is knowing what needs to be done, what tools and services are already available, and
what people using the final solution should see.

Interactive Versus Batch

Since fast responses and low latency are often requirements, it would be wise to consider what should be done
while the user is waiting and what can be put into a batch process that executes on an event-driven or
time-driven schedule.

After the initial segmentation of functions has been considered, it is wise to plan when background batch
processes need to execute, what data these functions manipulate, how to make sure these functions are
reliable and available when needed, and how to prevent the loss of data.

Where Should Functions Be Hosted?

Only after the “what” has been planned in fine detail should the “where” and “how” be considered. Developers
have their favorite tools and approaches and often will invoke them even if they might not be the best
choice. As Bernard Baruch was reported to say, “if all you have is a hammer, everything looks like a nail.”

It is also important to be aware of corporate standards for enterprise development. It isn’t wise to select a
tool simply because it is popular at the moment. That tool just might do the job, but remember that
everything that is built must be maintained. If you build something that only you can understand or
maintain, you may just have tied yourself to that function for the rest of your career. I have personally
created functions that worked properly and were small and reliable. I received telephone calls regarding
these for ten years after I left that company because later developers could not understand how the
functions were implemented. The documentation I wrote had been lost long earlier.

Each function or service should be considered separately in a distributed solution. Should the function be
executed in an enterprise data center, in the data center of a cloud services provider, or, perhaps, in both?
Consider that there are regulatory requirements in some industries that direct the selection of where and
how data must be maintained and stored.

Other considerations include:

  • What type of system should host that function? Is one system architecture better for that
    function? Should the system be based upon ARM, X86, SPARC, Precision, Power, or even be a mainframe?
  • Does a specific operating system provide a better computing environment for this function? Would Linux,
    Windows, UNIX, System I, or even System Z be a better platform?
  • Is a specific development language better for that function? Is a specific type of data management tool
    better? Is a flat file, SQL database, No-SQL database, or a non-structured storage mechanism the best fit?
  • Should the function be hosted in a virtual machine or a container to facilitate function mobility,
    automation and orchestration?

Virtual machines executing Windows or Linux were frequently the choice in the early 2000s. While they offered
significant isolation for functions and made it easy to restart or move them when necessary, their
processing, memory, and storage requirements were rather high. Containers, another approach to processing
virtualization, are the emerging choice today because they offer similar levels of isolation and the ability
to restart and migrate functions while consuming far less processing power, memory, or storage.

Performance

Performance is another critical consideration. While defining the functions or services that make up a
solution, developers should be aware of any significant processing, memory, or storage requirements. It
might be wise to look at these functions closely to learn whether they can be further subdivided or
decomposed.

Further segmentation would allow an increase in parallelization, which could potentially offer performance
improvements (see the sketch below). The trade-off, of course, is that this approach also increases
complexity and potentially makes the functions harder to manage and secure.
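
As a toy illustration of why further decomposition can pay off, the sketch below runs a hypothetical per-item task serially and then in parallel with a thread pool. The workload and timings are invented, and real gains depend on whether the work can actually proceed independently.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def process_item(item: int) -> int:
    """Stand-in for one decomposed unit of work (e.g. an I/O-bound service call)."""
    time.sleep(0.1)          # simulate waiting on a remote service
    return item * item

items = list(range(20))

start = time.perf_counter()
serial = [process_item(i) for i in items]               # one after another
serial_time = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=10) as pool:
    parallel = list(pool.map(process_item, items))      # up to 10 at once
parallel_time = time.perf_counter() - start

assert serial == parallel
print(f"serial: {serial_time:.2f}s, parallel: {parallel_time:.2f}s")
```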

Reliability

In high-stakes enterprise environments, solution reliability is essential. The developer must consider when
it is acceptable to force people to re-enter data or re-run a function, and when a function can be unavailable.

Database developers ran into this issue in the 1960s and developed the concept of an atomic function. That
is, the function must complete or the partial updates must be rolled back, leaving the data in the state it
was in before the function began. This same mindset must be applied to distributed systems to ensure that
data integrity is maintained even in the event of service failures and transaction disruptions.

Functions must be designed to either complete entirely or roll back intermediate updates. In critical
message-passing systems, messages must be stored until an acknowledgement of receipt comes in. If no such
acknowledgement is received, the original message must be resent and a failure must be reported to the
management system.
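
A minimal sketch of that store-until-acknowledged pattern follows. The transport interface, the time-outs, and the flaky demo transport are hypothetical placeholders rather than any specific messaging product's API.

```python
import time

def reliable_send(message: dict, transport, report_failure,
                  max_attempts: int = 3, ack_timeout: float = 2.0) -> bool:
    """Keep the message until an acknowledgement arrives; otherwise resend.

    `transport.send(msg, timeout=...)` is assumed to return True when the
    receiver acknowledges within the timeout; `report_failure(msg)` is the
    hook that notifies the management system.
    """
    pending = dict(message)               # "stored" copy kept until acknowledged
    for attempt in range(1, max_attempts + 1):
        if transport.send(pending, timeout=ack_timeout):
            return True                   # acknowledged: safe to discard the copy
        time.sleep(attempt * 0.5)         # brief back-off before resending
    report_failure(pending)               # never acknowledged: escalate
    return False

class FlakyTransport:
    """Toy transport that fails to acknowledge the first attempt (demo assumption)."""
    def __init__(self):
        self.attempts = 0
    def send(self, msg, timeout):
        self.attempts += 1
        return self.attempts > 1          # acknowledged from the second attempt on

ok = reliable_send({"id": 7, "body": "order update"}, FlakyTransport(),
                   report_failure=lambda m: print("escalate:", m))
print("delivered:", ok)                   # delivered: True (after one resend)
```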

Manageability

Although not as much fun to consider as the core application functionality, manageability is a key factor in
the ongoing success of the application. All distributed functions must be fully instrumented so that
administrators can both understand the current state of each function and change function parameters if
needed. Distributed systems, after all, are constructed of many more moving parts than the monolithic
systems they replace, so developers must be constantly aware of making this distributed computing
environment easy to use and maintain.
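
A tiny sketch of what such instrumentation can look like in practice: a function that counts its own requests and failures and exposes them alongside a few adjustable parameters. The counter names and the settings dictionary are invented for illustration.

```python
from collections import Counter

metrics = Counter()                                  # counters an operator can scrape
settings = {"timeout_s": 5.0, "max_retries": 3}      # parameters an operator can change

def instrumented_lookup(key: str, table: dict) -> str:
    """Look up a value while recording enough state for administrators to see."""
    metrics["lookup.requests"] += 1
    try:
        value = table[key]
        metrics["lookup.success"] += 1
        return value
    except KeyError:
        metrics["lookup.errors"] += 1
        raise

def health_report() -> dict:
    """What a management tool or administrator would query."""
    return {"metrics": dict(metrics), "settings": dict(settings)}

inventory = {"coil-42": "in stock"}
print(instrumented_lookup("coil-42", inventory))
print(health_report())
```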

Security

Distributed system security is an order of magnitude more difficult than security in a monolithic
environment. Each function must be made secure separately and the communication links between and among the
functions must also be made secure. As the network grows in size and complexity, developers must consider
how to control access to functions, how to make sure that only authorized users can access these functions,
and how to isolate services from one another.

Security is a critical element that must be built into every function, not added on later. Unauthorized
access to functions and data must be prevented and reported.

Privacy

Privacy is the subject of an increasing number of regulations around the world. Examples like the European
Union’s GDPR and the U.S. HIPAA regulations are important considerations for any developer of
customer-facing systems.

Mastering Complexity

Developers must take the time to consider how all of the pieces of a complex computing environment fit
together. It is hard to maintain the discipline that a service should encapsulate a single function or,
perhaps, a small number of tightly interrelated functions. If a given function is implemented in multiple
places, maintaining and updating that function can be hard. What would happen when one instance of a
function doesn’t get updated? Finding that error can be very challenging.

This means it is wise for developers of complex applications to maintain a visual model that shows where each
function lives so it can be updated if regulations or business requirements change.

Often this means that developers must take the time to document what they did, when changes were made, as
well as what the changes were meant to accomplish so that other developers aren’t forced to decipher mounds
of text to learn where a function is or how it works.

To be successful as an architect of distributed systems, a developer must be able to master complexity.

Approaches Developers Must Master

Developers must master decomposing and refactoring application architectures, thinking in terms of teams, and
growing their skill in approaches to rapid application development and deployment (DevOps). After all, they
must be able to think systematically about what functions are independent of one another and what functions
rely on the output of other functions to work. Functions that rely upon one another may be best implemented as
a single service. Implementing them as independent functions might create unnecessary complexity and result
in poor application performance and impose an unnecessary burden on the network.

Virtualization Technology Covers Many Bases

Virtualization is a far bigger category than just virtual machine software or containers. Both of these
functions are considered processing virtualization technology. There are at least seven different types of
virtualization technology in use in modern applications today. Virtualization technology is available to
enhance how users access applications, where and how applications execute, where and how processing happens,
how networking functions, where and how data is stored, how security is implemented, and how management
functions are accomplished. The following model of virtualization technology might be helpful to developers
when they are trying to get their arms around the concept of virtualization:


Figure 3: Architecture of virtualized systems

Source: 7 Layer Virtualization Model, VirtualizationReview.com

Think of Software-Defined Solutions

It is also important for developers to think in terms of “software defined” solutions. That is, to segment
the control from the actual processing so that functions can be automated and orchestrated.

Tools and Strategies That Can Help

Developers shouldn’t feel like they are on their own when wading into this complex world. Suppliers and
open-source communities offer a number of powerful tools. Various forms of virtualization technology can be
a developer’s best friend.

Virtualization Technology Can Be Your Best Friend

  • Containers make it possible to easily develop functions that can execute without
    interfering with one another and can be migrated from system to system based upon workload demands.
  • Orchestration technology makes it possible to control many functions to ensure they are
    performing well and are reliable. It can also restart or move them in a failure scenario.
  • Incremental development is supported: functions can be developed in parallel and deployed
    as they are ready. They can also be updated with new features without requiring changes elsewhere.
  • Highly distributed systems are supported: functions can be deployed locally in the
    enterprise data center or remotely in the data center of a cloud services provider.

Think In Terms of Services

This means that developers must think in terms of services and how services can communicate with one another.

Well-Defined APIs

Well-defined APIs mean that multiple teams can work simultaneously and still know that everything will fit
together as planned. This typically means a bit more work up front, but it is well worth it in the end. Why?
Because overall development can be faster. It also makes documentation easier.
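
One lightweight way to pin down such a contract is to describe the request and response shapes in code that both teams share. The service name and fields below are hypothetical; the point is only that the contract is explicit and checkable before either side is fully built.

```python
from dataclasses import dataclass

# Shared contract for a hypothetical "price lookup" service. The consuming
# team codes against these types while the providing team implements them.
@dataclass(frozen=True)
class PriceRequest:
    product_id: str
    currency: str = "EUR"

@dataclass(frozen=True)
class PriceResponse:
    product_id: str
    unit_price: float
    currency: str

def get_price(request: PriceRequest) -> PriceResponse:
    """Provider-side stub honoring the agreed contract."""
    return PriceResponse(request.product_id, unit_price=19.99, currency=request.currency)

# Consumer-side usage: both sides agree on field names and types up front.
print(get_price(PriceRequest(product_id="steel-coil-42")))
```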

Support Rapid Application Development

This approach is also perfect for rapid application development and rapid prototyping, also known as DevOps.
Properly executed, DevOps also produces rapid time to deployment.

Think In Terms of Standards

Rather than relying on a single vendor, the developer of distributed systems would be wise to think in terms
of multi-vendor, international standards. This approach avoids vendor lock-in and makes finding expertise
much easier.

Summary

It’s interesting to note how guidelines for rapid application development and deployment of distributed
systems start with “take your time.” It is wise to plan out where you are going and what you are going to do;
otherwise you are likely to end up somewhere else, having burned through your development budget with little
to show for it.

Sign up for Online Training

To continue to learn about the tools, technologies, and practices in the modern development landscape, sign up for free online training sessions. Our engineers host
weekly classes on Kubernetes, containers, CI/CD, security, and more.

Microservices vs. Monolithic Architectures

Friday, March 1, 2019

Enterprises are increasingly pressured by competitors and their own customers to get applications working and online more quickly while also minimizing development costs. These divergent goals have forced enterprise IT organizations to evolve rapidly. After undergoing one forced evolution after another since the 1960s, many are prepared to take the step away from monolithic application architectures to embrace the microservices approach.

Figure 1: Architecture differences between traditional monolithic applications and microservices

Image courtesy of BMC

Higher Expectations and More Empowered Customers

Customers that are used to having worldwide access to products and services now expect enterprises to quickly respond to whatever other suppliers are doing.

CIO magazine, in reporting upon Ovum’s research, pointed out:

“Customers now have the upper hand in the customer journey. With more ways to shop and less time to do it, they don’t just gather information and complete transactions quickly. They often want to get it done on the go, preferably on a mobile device, without having to engage in drawn-out conversations.”

IT Under Pressure

This intense worldwide competition also forces enterprises to find new ways to cut costs and to be more efficient. Developers have seen this all before: it is just the newest iteration of the perennial call to “do more with less” that enterprise IT has faced for more than a decade. They have learned that even when IT budgets grow, the investments often go to new IT services or better communications.

Figure 2: Forecasted 2018 worldwide IT spending growth

Source: Gartner Market Databook, 4Q17

As enterprise IT organizations face pressure to respond, they have had to revisit their development processes. The traditional two-year development cycle, previously acceptable, is no longer satisfactory. There is simply no time for that now.

Enterprise IT has also been forced to respond to a confluence of trends that are divergent and contradictory.

  • The introduction of inexpensive but high-performance network connectivity that allows distributed functions to communicate with one another across the network as fast as processes previously could communicate with one another inside of a single system.
  • The introduction of powerful microprocessors that offer mainframe-class performance in inexpensive and small packages. After standardizing on the X86 microprocessor architecture, enterprises are now being forced to consider other architectures to address their need for higher performance, lower cost, lower power consumption, and less heat production.
  • Internal system memory capacity continues to increase, making it possible to deploy large-scale applications or application components in small systems.
  • External storage use is evolving away from rotating media toward solid-state devices to increase capability, reduce latency, decrease overall cost, and deliver enormous capacity.
  • The evolution of open-source software and distributed computing functions make it possible for the enterprise to inexpensively add a herd of systems when new capabilities are needed rather than facing an expensive and time-consuming forklift upgrade to expand a central host system.
  • Customers demand instant and easy access to applications and data.

As enterprises address these trends, they soon discover that the approach that they had been relying on — focusing on making the best use of expensive systems and networks — needs to change. The most significant costs are now staffing, power, and cooling. This is in addition to the evolution they made nearly two decades ago when their focus shifted from monolithic mainframe computing to distributed, X86-based midrange systems.

The Next Steps in a Continuing Saga

Here’s what enterprise IT has done to respond to all of these trends.

They are choosing to move from using the traditional waterfall development approach to various forms of rapid application development. They also are moving away from compiled languages to interpreted or incrementally compiled languages such as Java, Python, or Ruby to improve developer productivity.

IDC, for example, predicts that:

“By 2021 65% of CIOs will expand agile/DevOps practices into the wider business to achieve the velocity necessary for innovation, execution, and change.”

Complex applications are increasingly designed as independent functions or “services” that can be hosted in several places on the network to improve both performance and application reliability. This approach means that it is possible to address changing business requirements as well as to add new features in one function without having to change anything else in parallel. NetworkWorld’s Andy Patrizio pointed out in his predictions for 2019 that he expects “Microservices and serverless computing take off.”

Another important change is that these services are being hosted in geographically distributed enterprise data centers, in the cloud, or both. Furthermore, functions can now reside in a customer’s pocket or in some combination of cloud-based or corporate systems.

What Does This Mean for You?

Addressing these trends means that enterprise developers and operations staff have to make some serious changes to their traditional approach including the following:

  • Developers must be willing to learn technologies that better fit today’s rapid application development methodology. An experienced “student” can learn quickly through online schools. For example, Learnpython.org offers free courses in Python, while codecademy offers free courses in Ruby, Java, and other languages.
  • They must also be willing to learn how to decompose application logic from a monolithic, static design to a collection of independent, but cooperating, microservices. Online courses are available for this too. One example of a course designed to help developers learn to “think in microservices” comes from IBM. Other courses are available from Lynda.com.
  • Developers must adopt new tools for creating and maintaining microservices that support quick and reliable communication between them. The use of various commercial and open-source messaging and management tools can help in this process. Rancher Labs, for example, offers open-source software for delivering Kubernetes-as-a-service.
  • Operations professionals need to learn orchestration tools for containers and Kubernetes to understand how they allow teams to quickly develop and improve applications and services without losing control over data and security. Operations staff have long been the gatekeepers of enterprise data centers; after all, they may find their positions on the line if applications slow down or fail.
  • Operations staff must allow these functions to be hosted outside of the data centers they directly control. To make that point, analysts at Market Research Future recently published a report saying that, “the global cloud microservices market was valued at USD 584.4 million in 2017 and is expected to reach USD 2,146.7 million by the end of the forecast period with a CAGR of 25.0%”.
  • Application management and security issues must now be part of developers’ thinking. Once again, online courses are available to help individuals to develop expertise in this area. LinkedIn, for example, offers a course in how to become an IT Security Specialist.

It is important for both development and operations staff to understand that the world of IT is moving rapidly, and everyone must focus on upgrading their skills and expanding their expertise.

How Do Microservices Benefit the Enterprise?

This latest move to distributed computing offers a number of real and measurable benefits to the enterprise. Once the IT organization adopts this form of distributed computing, development time and cost can be sharply reduced: each service can be developed in parallel and refined as needed without requiring an entire application to be stopped or redesigned.

The development organization can focus on developer productivity and still bring new application functions or applications online quickly. The operations organization can focus on defining acceptable rules for application execution and allowing the orchestration and management tools to enforce them.

What New Challenges Do Enterprises Face?

Like any approach to IT, the adoption of a microservices architecture will include challenges as well as benefits.

Monitoring and managing many “moving parts” can be more challenging than dealing with a few monolithic applications. The adoption of an enterprise management framework can help address these challenges. Security in this type of distributed computing needs to be top of mind as well. As the number of independent functions grows on the network, each must be analyzed and protected.

Should All Monolithic Applications Migrate to Microservices?

Some monolithic applications can be difficult to change, whether because of technological challenges or regulatory constraints. Some components in use today may have come from defunct suppliers, making changes difficult or impossible.

It can be both time-consuming and costly for the organization to go through a complete audit process. Often, organizations continue investing in older applications much longer than is appropriate, in the belief that they are saving money.

It is possible to evaluate what a monolithic application does and determine whether some individual functions can be separated out and run as smaller, independent services. These can be implemented either as cloud-based services or as container-based microservices.

Rather than waiting and attempting to address older technology as a whole, it may be wise to undertake a series of incremental changes to make enhancing or replacing an established system more acceptable. This is very much like the old proverb, “the best time to plant a tree was 20 years ago. The second best time is now.”

Is the Change Worth It?

Enterprises that have adopted microservices-based application architectures report that their IT costs are often reduced. They also often point out that once their teams mastered this approach, it became far easier and quicker to add new features and functions when market demands changed.

If your enterprise hasn’t adopted this approach yet, it would be wise to learn more about it. Suppliers like Rancher Labs have helped their clients make this journey safely, and they may be able to help your organization.

Go Deeper with Online Training

Get free online training on our container management software, Rancher, or continue your education with more advanced topics in the Kubernetes master classes.


What Do People Love About Rancher?

Thursday, 28 February, 2019
Read the Guide to Kubernetes with Rancher
This guide shows the challenges in running Kubernetes in production and how Rancher helps.

More than 20,000 environments have chosen Rancher as the solution to make the Kubernetes adventure painless in as many ways as possible. More than 200 businesses across finance, health care, military, government, retail, manufacturing, and entertainment verticals engage with Rancher commercially because they recognize that Rancher simply works better than other solutions.

Why is this? Is it really about one feature set versus another feature set, or is it about the freedom and breathing room that come from having a better way?

A Tale of Two Houses

Imagine that you’re walking down a street, and each side of the street is lined with houses. The houses on one side were constructed over time by different builders, and you can see that although every house contains walls, a floor, a roof, doors, and windows, they’re all completely different. Some were built from custom plans, while others were modified over time by the owner to fit a personal need.

You see a person working on his house, and you stop to ask him about the construction. You learn that the company that built his house did so with special red bricks that only come from one place. He paid a great deal of money to import the bricks and have the house built, and he beams with pride as he tells you about it.

“It’s artisanal,” he tells you. “The company who built my house is one of the biggest companies in the world. They’ve been building houses for years, so they know what they’re doing. My house only took a month to build!”

“What if you want to expand?” You point to other houses on his side of the street. “Does the builder come out and do the work?”

“Nope! I decide what I want to build, and then I build it. I like doing it this way. Being hands-on makes me feel like I’m in control.”

Your gaze moves to the other side of the street, where the houses were built following a different strategy. Each house has an identical core, and wherever an owner has added a customization, that customization is built the same way on every house that has it.

You see a man outside of one of the houses, relaxing on his porch and drinking tea. He waves at you, so you walk over and strike up a conversation with him.

“Can you tell me about your house?”

“My house?” He smiles at you. “Sure thing! All of the houses on this side were built by one company. They use pre-fabricated components that are built off-site, brought in and assembled. It only takes a day to build one!”

“What about adding rooms and other features?”

“It’s easy,” he replies. “The company has a standard interface for rooms, terraces, and any other add-on. When I want to expand, I just call them, and they come out and connect the room. Everything is pre-wired, so it goes in and comes online almost as fast as I can think of it.”

You ask if he had to do any extra work to connect to public utilities.

“Not at all!” he exclaims. “There’s a panel inside where I can choose which provider I want to connect to. I just had to pick one. If I want to change it in the future, I make a different selection. The house lets me choose everything – lawn care service provider, window cleaner, painter, everything I need to make the house liveable and keep it running. I just go to the panel, make my choice, and then go back to living.

“And best of all, my house was free.”

Rancher Always Works For You

Rancher Labs has designed Rancher to do the heavy lifting of building and maintaining Kubernetes clusters.

Easily Launch Secure Clusters

Let’s start with the installation. Are you installing on bare metal? Cloud instances? Hosted provider? A mix? Do you want to give others the ability to deploy their own clusters, or do you want the flexibility to use multiple providers?

Maybe you just want to use AWS or GCP, so multiple providers aren’t a big deal right now. Flexibility is still important, though: your requirements today might be different in a month or a year.

With Rancher you can simply fire up a new cluster in another provider and begin migrating workloads, all from within the same interface.

Global Identity and RBAC

Whether you’re using multiple providers or not, the normal way of configuring access to a single cluster in one provider requires work. Access control policies take time to configure and maintain, and generally, once provisioned, are forgotten. If using multiple providers, it’s like learning multiple languages. Russian for AWS, Swahili for Google, Flemish for Azure, Uzbek for DigitalOcean or Rackspace…and if someone leaves the organization, who knows what they had access to? Who remembers how to speak Latin?

Rancher connects to backend identity providers, and from a global configuration it applies roles and policies to all of the clusters that it manages.

When you can deploy and manage multiple clusters as easily as you can a single one, and when you can do so securely, then it’s no big deal to spin up a cluster for UAT as part of the CI/CD test suite. It’s trivial to let developers have their own cluster to work on. You could even let multiple teams share one cluster.

Solutions for Cluster Multi-Tenancy

How do you keep people from stepping on each other?

You can use Kubernetes Namespaces, but provisioning Roles across multiple Namespaces is tedious. Rancher collects Namespaces into Projects and lets you map Roles to the Project. This creates single-cluster multi-tenancy, so now you can have multiple teams, each only able to interact with their own Namespaces, all on the same cluster. You can have a dev/staging environment built exactly like production, and then you can easily get into the CD part of CI/CD.
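
To make the tedium concrete, here is a small, hedged sketch of what per-team provisioning looks like when scripted by hand against plain Kubernetes, assuming kubectl is installed and pointed at the cluster; the team names and the use of the built-in edit ClusterRole are illustrative choices, not a prescribed setup. Rancher’s Projects replace this per-Namespace loop with a single Role-to-Project mapping.

```python
# Illustrative only: hand-rolled single-cluster multi-tenancy without Rancher
# Projects. Assumes kubectl is installed and configured for the target cluster;
# the team names and the built-in "edit" ClusterRole are example choices.
import subprocess

TEAMS = ["team-alpha", "team-beta", "team-gamma"]

def run(*args: str) -> None:
    """Echo and execute a kubectl command, failing loudly on errors."""
    print("+", " ".join(args))
    subprocess.run(args, check=True)

for team in TEAMS:
    # One Namespace per team...
    run("kubectl", "create", "namespace", team)
    # ...and one RoleBinding per Namespace, granting that team's identity
    # group edit rights inside its own Namespace only.
    run("kubectl", "create", "rolebinding", f"{team}-edit",
        "--clusterrole=edit", f"--group={team}", "-n", team)
```

Every new team, Role change, or additional cluster means rerunning and maintaining scripts like this; mapping Roles to a Rancher Project once covers all of that Project’s Namespaces.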

Tools for Day Two Operations

What about all of the add-on tools? Monitoring. Alerts. Log shipping. Pipelines. You could provision and configure all of this yourself for every cluster, but it takes time. It’s easy to do wrong. It requires skills that internal staff may not have – do you want your staff learning all of the tools above, or do you want them focusing on business initiatives that generate revenue? To put it another way, do you want to spend your day spinning copper wire to connect to the phone system, or would you rather press a button and be done with it?

Rancher ships with tools for monitoring your clusters, dashboards for visualizing metrics, an engine for generating alerts and sending notifications, and a pipeline system to enable CI/CD for those not already using an external system. With a click it ships logs off to Elasticsearch, Kafka, Fluentd, Splunk, or syslog.

Designed to Grow With You

The more a Kubernetes solution scales (the bigger or more complicated it gets), the more important it is to have fast, repeatable ways to do things. What about using tools like Ansible, Terraform, kops, or kubespray to launch clusters? They stop once the cluster is launched. If you want more, you have to script it yourself, and this adds a dependency on an internal asset to maintain and support those scripts. We’ve all been at companies where the person with the special powers left, and everyone who stayed had to scramble to figure out how to keep everything running. Why go down that path when there’s a better way?

Rancher makes everything related to launching and managing clusters easy, repeatable, fast, and within the skill set of everyone on the team. It spins up clusters reliably in any provider in minutes, and then it gives you a standard, unified interface for interacting with those clusters via UI or CLI. You don’t need to learn each provider’s nuances. You don’t need to manage credentials in each provider. You don’t need to create configuration files to add the clusters to monitoring systems. You don’t need to do a bunch of work on the hosts before installing Kubernetes. You don’t need to go to multiple places to do different things – everything is in one place, easy to find, and easy to use.
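
As a rough illustration of what a single, unified interface means in practice, the sketch below lists every managed cluster through the Rancher 2.x REST API from one place, regardless of which provider each cluster runs on. The server URL and API token are placeholders, and the /v3/clusters endpoint and field names reflect my understanding of the v3 API and may differ between Rancher versions, so treat this as an assumption-laden example rather than a reference client.

```python
# Hedged sketch: list clusters managed by a Rancher 2.x server via its REST API.
# RANCHER_URL and TOKEN are placeholders; the /v3/clusters endpoint and the
# "data", "name", and "state" fields are assumptions that may vary by version.
import json
import urllib.request

RANCHER_URL = "https://rancher.example.com"   # placeholder server URL
TOKEN = "token-xxxxx:secret"                  # placeholder API key

req = urllib.request.Request(
    f"{RANCHER_URL}/v3/clusters",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
with urllib.request.urlopen(req) as resp:
    clusters = json.load(resp).get("data", [])

for cluster in clusters:
    # The same call covers clusters in any provider Rancher manages.
    print(cluster.get("name"), "->", cluster.get("state"))
```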

No Vendor Lock-In

This is significant. Companies that sell you a Kubernetes solution have a vested interest in keeping you locked to their platform. You have to run their operating system or use their facilities. You can only run certain software versions or use certain components. You can only buy complementary services from vendors they partner with.

Rancher Labs believes in something different. They believe that your success comes from the freedom to choose what’s best for you. Choose the operating system that you want to use. Build your systems in the provider you like best. If you want to build in multiple providers, Rancher gives you the tools to manage them as easily as you manage one. Use any provisioner.

What Rancher accelerates is the time between your decision to do something and when that thing is up and running. Rancher gets you out the gate and onto the track faster than any other solution.

The Wolf in a DIY Costume

Those who say that they want to “go vanilla” or “DIY” are usually looking at the cost of an alternative solution. Rancher is open source and free to use, so there’s no risk in trying it out and seeing what it does. It will even uninstall cleanly if you decide not to continue with it.

If you’re new to Kubernetes or if you’re not in a hands-on, in-the-trenches role, you might not know just how much work goes into correctly building and maintaining a single Kubernetes cluster, let alone multiple clusters. If you go the “vanilla Kubernetes” route with the hope that you’ll get a better ROI, it won’t work out. You’ll pay for it somewhere else, either in staff time, additional headcount, lost opportunity, downtime, or other places where time constraints interfere with progress.

Rancher takes all of the maintenance tasks for clusters and turns them into a workflow that saves time and money while keeping everything truly enterprise-grade. It will do this for single and multi-cluster Kubernetes environments, on-premise or in the cloud, for direct use or for business units offering Kubernetes-as-a-service, all from the same installation. It will even import the Kubernetes clusters you’ve already deployed and start managing them.

Having more than 20,000 deployments in production is something that we’re proud of. Being the container management platform for mission-critical applications in over 200 companies across so many verticals also makes us proud.

What we would really like is to have you be part of our community.

Join us in showing the world that there’s a better way. Download Rancher and start living in the house you deserve.


Kubernetes in the Region: Observations and an Offer

Tuesday, 19 February, 2019

Find a Rodeo workshop near you
Rancher Rodeos are free, in-depth workshops where you can learn to deploy containers and Kubernetes in production.

Since joining Rancher Labs to head up the Australia, New Zealand, and Singapore region, I have spent my days discussing containers/Kubernetes use cases and adoption with many of the top enterprises, DevOps groups, and executives in the area. Not only is this a great learning experience and a fantastic way to meet people, it is also a huge eye-opener into the many reasons why Kubernetes adoption is growing so rapidly and what the current challenges are. I want to quickly share some of my observations and make an offer for you to join us for some free hands-on training.

Some Observations

Everyone is Doing Something with Kubernetes

It doesn’t matter which event, meetup, or customer discussion I’m in — every enterprise is doing something with Kubernetes. It’s like the adoption of virtualization, only the discussion is slightly different. It’s not so much about which vendor or standard — Kubernetes is the focus. Instead, it’s about how to do Kubernetes and what are the associated best practices, scalable architectures, and security considerations.

Kubernetes Native, but How to Do It at Operational Scale?

The community and ecosystem around Kubernetes are growing every day and delivering robust capabilities, so there is a strong desire to stay on “native” Kubernetes and not get pulled down a branch, fork, or vendor-specific offshoot of Kubernetes. It seems that most enterprises and groups begin this way and get into production with Kubernetes. However, there is a clear point at which scale becomes an operational challenge, and basic tools need to be supplemented or extended to manage multiple Kubernetes namespaces, multiple clusters, authentication, RBAC, policy, monitoring, and logging across many development teams.

It’s About Consuming Kubernetes, Not “Making” Kubernetes

Nobody wants to be in the business of creating Kubernetes snowflakes, or of allocating resources to work that adds no value. There is a learning curve for operationalizing Kubernetes, using Kubernetes, and deploying workloads into Kubernetes environments. Many enterprises are looking for ways to eliminate that learning curve, or the need for specialized skills, and instead simply consume Kubernetes through a Kubernetes-as-a-Service model. Much larger and faster gains can be made when consuming Kubernetes, rather than making Kubernetes, becomes the focus.

Both On-Premise and Public Cloud Kubernetes

As enterprises grow, iterate, and merge, an ever-increasing mixture of infrastructure environments and needs emerges. The same enterprise may create Kubernetes clusters on premises on bare metal, on OpenStack and VMware-type infrastructures, and out on public clouds such as Amazon, Google, Azure, Alibaba, and others. The portability and rapid pace of containers lend themselves to these hybrid or multi-cloud scenarios (more so than VMs), and adoption is quickly sprawling across them. There is also an urgent need for air-gapped Kubernetes environments.

Public Cloud Kubernetes Providers

Most enterprises are now seriously looking at the Kubernetes services offered by public cloud providers, like EKS (now available in Australia & Singapore), GKE, and AKS. These are viable options and really do support some of the notions mentioned in my other observations, like consumability. Technical discussions here become much less about the Kubernetes cluster control planes and architecture, and more about integration of these clusters into enterprise management capabilities like authentication domains, security models, deployment pipelines, and multi-cloud strategies (e.g. on-premise or multiple public clouds).

Our Offer

We run free, half-day training sessions called Rancher Rodeos throughout the world. Among others, this month we have Rodeos in Sydney, Melbourne, and Singapore (registration for Singapore is not open yet). During these sessions, DevOps and IT professionals can get hands-on experience with how to quickly deploy an enterprise-ready Kubernetes environment on any infrastructure or cloud provider (or multiples of these) using Rancher. We will show how Rancher helps make enterprise Kubernetes consumable and native, with rapid results for development and infrastructure teams.

Please take us up on the offer, register here, and join us!


New Strategy Paper from Crisp Research: Why Open Source Accelerates Digital Transformation

Wednesday, 6 February, 2019

Are German companies sleeping through the digital transformation? In the EU’s current digitalization index, Germany only ranks mid-table. A new strategy paper from Crisp Research shows, however, that companies there are catching up: by as early as 2020, one in five companies intends to generate more than 20 percent of its revenue with digital products and business models. According to the Crisp experts, open source technology can be a key to reaching this goal.

Increasing the Pace of Innovation – Without Fear of Failure

“Fail fast, fail often” is a mantra of startup culture. Every failure, in this view, carries the chance to do better next time and to succeed in the end. Especially in digital innovation processes, companies must shed their fear of mistakes and dare to take new paths. Open source software helps them do so: it provides all the necessary technologies while offering investment security and avoiding proprietary dead ends. It is advisable to start with a manageable individual project that is already integrated into an overall concept and delivers quick results.

“Open source software makes it easier to correct build-or-buy decisions, because these are no longer a binary choice,” write the experts at Crisp Research. “In the past, software was more of a take-it-or-leave-it offering. With open source, companies can adapt the software, or even the hardware, to their own specific needs without having to develop everything from scratch.”

From Hierarchies and Silos to Open Collaboration

Economic reasons are not the only argument for using open source technology in the digital transformation. It is also about the mindset and the agile working methods that have always shaped the development of free software. Enterprise open source vendors, for example, have been familiar with DevOps principles for many years and can therefore provide valuable impetus for evolving organizational and communication structures.

In many development projects today, vendors, customers, and partners work closely together – across team and company boundaries. “This turns an innovation into a needs-based, open standard more quickly,” say the Crisp Research experts. “Open interfaces and an open core enable future-proof use of components.”

A Reliable and Scalable Foundation for Digital Business Platforms

Finally, the speed with which an idea is turned into a working business platform is decisive for the success of a digital innovation. Here, too, the Crisp experts see the open source approach at an advantage. Many key technologies for the digital transformation – from big data and mobile computing to the cloud – already rely on open software standards today. And open source solutions have long proven their reliability and performance in demanding application scenarios. This is one more reason why SAP, for example, has decided to run its HANA and S/4 HANA products exclusively on Linux in the future.

Enterprise Linux vendors such as SUSE bridge the gap between the open source community and enterprises’ requirements for stable system operation. Among other things, they offer systematic quality assurance and tailored support services. They also drive developments that address their customers’ current challenges. SUSE’s software-defined storage solution, for example, helps companies keep pace with exponential data growth and provide affordable storage for resource-hungry applications.

Download the Crisp Research strategy paper now, free of charge, and learn how open source solutions become an engine of digital transformation. In addition to current figures and project examples, the document also contains concrete, practical recommendations.