How to Save $11.6M Running SAP HANA?

Thursday, 18 May, 2017

So, you’re an enterprise decision maker getting squeezed from every corner of your business to reduce costs while improving your customer intimacy, product quality, operational efficiency, blah blah blah.  You’re hearing it in your sleep and in every waking moment of your day.

You’re looking for answers, and guess what?  SLES for SAP on Google Cloud Platform could be your $11.6 million answer, according to the Forrester “Total Economic Impact” study: https://www.sap.com/documents/2014/06/362d7a23-0a7c-0010-82c7-eda71af511fa.html

What are you waiting for?

Try SLES for SAP on Google Cloud Platform: https://cloud.google.com/sap/

Do you really want to explain why you didn’t attempt to save $11.6M?

Grow in any direction with SUSE & Lenovo: Our tour at the SAP SMB Innovation Summits 2017

Tuesday, 9 May, 2017

For the first time, SAP organized an Innovation Summit focused on the specific needs of small and medium businesses, helping its customers and partners worldwide grow in any direction.

And we at SUSE are very proud to be a part of Lenovo’s solution for SAP B1, an integrated solution with System x and ThinkServer running on SUSE Linux Enterprise Server for SAP Applications. 🙂

We joined forces in Macao, Berlin, and Fort Lauderdale, Florida, this past March and April to promote our partnership on SAP solutions.

The summits showcased innovations in the next generation of simplified, integrated business solutions, a market where both Lenovo and SUSE lead.

SUSE Linux Enterprise Server for SAP Applications is the leading platform for SAP applications on Linux, and our chameleon was everywhere across the globe. Check us out:

Show Floor in Macao

Lenovo Booth in Berlin

The Chameleon helping out our team in Fort Lauderdale

Keep up with our Strategic Partnership at suse.com/lenovo and we will see you soon!

Do Microservices Make SOA Irrelevant?

Tuesday, 9 May, 2017

Is service-oriented architecture, or SOA, dead? You may be tempted to
think so. But that’s not really true. Yes, SOA itself may have receded
into the shadows as newer ideas have come forth, yet the remnants of SOA
are still providing the fuel that is propelling the microservices market
forward. That’s because incorporating SOA principles into the design and
build-out of microservices is the best way to ensure that your product
or service offering is well positioned for the long term. In this sense,
understanding SOA is crucial for succeeding in the microservices world.
In this article, I’ll explain which SOA principles you should adopt when
designing a microservices app.

Introduction

In today’s mobile-first development environment, where code is king, it
is easier than ever to build a service that has a RESTful interface,
connect it to a datastore and call it a day. If you want to go the extra
mile, piece together a few public software services (free or paid), and
you can have yourself a proper continuous delivery pipeline. Welcome to
the modern Web and your fully buzzworthy-compliant application
development process. In many ways, microservices are a direct descendant
of SOA, and a bit like the punk rock of the services world. No strict
rules, just some basic principles that loosely keep everyone on the same
page. And just like punk rock, microservices initially embraced a do-it-yourself ethic, but they have been evolving and picking up structure, which has moved them into the mainstream. It’s not just
the dot com or Web companies that use microservices anymore—all
companies are interested.

Definitions

For the purposes of this discussion, the following are the definitions I
will be using.

Microservices: The implementation of a specific business function,
delivered as a separate deployable artifact, using queuing or a RESTful
(JSON) interface, which can be written in any language, and that
leverages a continuous delivery pipeline.

SOA: Component-based architecture which has the goal of driving
reuse across the technology portfolio within an organization. These
components need to be loosely coupled, and can be services or libraries
which are centrally governed and require an organization to use a single
technology stack to maximize reusability.
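
To make the microservices definition concrete, here is a minimal sketch of what calling such a RESTful (JSON) microservice might look like. The clients.example.com endpoint, the payload, and the response are purely illustrative, not taken from any product mentioned in this article:

curl -s -X POST https://clients.example.com/v1/clients \
     -H "Content-Type: application/json" \
     -d '{"name": "Acme Corp", "country": "US"}'

{
    "id": "c-1001",
    "name": "Acme Corp",
    "country": "US",
    "status": "active"
}

The service owns the “add a client” business function end to end, exposes it over JSON, and can be deployed independently of every other service.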

Positive things about microservices-based development

As you can tell, microservices possess several distinct features that SOA lacked, and they are good:

Allowing smaller, self-sufficient teams to own a product/service that supports a specific business function has drastically improved business agility and IT responsiveness to whatever direction the business units they support want to take.

Automated builds and testing, while possible under SOA, are now
serious table stakes.

Allowing teams to use the tools they want, primarily around which
language and IDE to use.

Using agile-based development with direct access to the business.
Microservices and mobile development teams have successfully shown
businesses how technologists can adapt to and accept constant feedback.
Waterfall software delivery methods suffered from unnecessary overhead
and extended delivery dates as the business changed while the
development team was off creating products that often didn’t meet the
business’ needs by the time they were delivered. Even iterative
development methodologies like the Rational Unified Process (RUP) had
layers of abstraction between the business, product development, and the
developers doing the actual work.

A universal understanding of the minimum granularity of a service.
There are arguments around “Is adding a client a business function, or
is client management a business function?” So it isn’t perfect, but at
least both can be understood by the business side that actually runs the
business. You may not want to believe it, but technology is not the
entire business (for most of the world’s enterprises, anyway). Back in the days when SOA was king of the hill, some services performed nothing but a single database operation while others added an entire client to the system, which led to nothing but confusion on the business side when IT could not give a consistent answer about what a service was.

How can SOA help?

After reading those definitions, you are probably
thinking, “Microservices sounds so much better.” You’re right. It is the
next evolution for a reason, except that it threw away a lot of the
lessons that were hard-learned in the SOA world. It gave up all the good
things SOA tried to accomplish because the IT vendors in the space
morphed everything to push more product. Enterprise integration patterns
(which define how new technologies or concepts are adopted by
enterprises) are a key place where microservices are leveraging the work
done by the SOA world. Everyone involved in the integration space can
benefit from these patterns, as they are concepts, and microservices are
a great technological way to implement them. Below, I’ve listed two
other areas where SOA principles are being applied inside the
microservices ecosystem to great success.

API Gateways (née ESB)

Microservices encourage point-to-point connections, with each client taking care of its own translations for dates and other nuanced things.
This is just not sustainable as the number of microservices available
from most companies skyrockets. So in comes the concept of an Enterprise
Service Bus (ESB), which provides a means of communication between
different applications in an SOA environment. SOA originally intended the
ESB to be used to carry things between service components—not to be
the hub and spoke of the entire enterprise, which is what vendors pushed, large companies bought into, and what left such a bad taste in people’s mouths. The successful products in the ESB space have evolved into today’s API gateways, which give a single organization a centralized way to manage the endpoints it presents to the world and to provide translation to older services (often SOA/SOAP) that haven’t been touched in years but are vital to the business.

Overarching standards

SOA had WS-* standards. They were heavy-handed, but guaranteed
interoperability (mostly). Having these standards in place, especially
the more common ones like WS-Security and WS-Federation, allowed
enterprises to call services used in their partner systems—in terms
that anyone could understand, though they were just a checklist.
Microservices have begun to formalize a set of standards and the vendors
that provide the services. The OAuth and OpenID authentication
frameworks are two great examples. As microservices mature, building
everything in-house is fun, fulfilling, and great for the ego, but
ultimately frustrating as it creates a lot of technical debt with code
that constantly needs to be massaged as new features are introduced. The
other side where standards are rapidly consolidating is API design and
descriptions. In the SOA world, there was one way. It was ugly and
barely readable by humans, but the Web Services Description Language (WSDL), a standardized format for describing network services, was universal. As of April 2017, all major parties (including Google, IBM, Microsoft, MuleSoft, and Salesforce.com) involved in providing tools to build RESTful APIs are members of the OpenAPI Initiative. What was once a fractured market with multiple standards (JSON API, WADL, RAML, and Swagger) is now converging on a single way for everything to be described.
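
For illustration, here is a minimal sketch of an OpenAPI (Swagger 2.0) description for the same hypothetical client-management service used earlier; the names are made up, but the overall structure is what these description formats standardize:

{
    "swagger": "2.0",
    "info": {
        "title": "Client Management Service",
        "version": "1.0.0"
    },
    "paths": {
        "/v1/clients": {
            "get": {
                "summary": "List clients",
                "produces": ["application/json"],
                "responses": {
                    "200": {
                        "description": "A list of clients"
                    }
                }
            }
        }
    }
}

Because the description itself is machine-readable, tooling can generate documentation, client stubs, and mock servers from it, much like the role WSDL played in the SOA world.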

Conclusion

SOA originated as a set of concepts, which are the same core concepts as
microservices architecture. Where SOA fell down was driving too much
governance and not enough “Just get it done.” For microservices to
continue to survive, the teams leveraging them need to embrace their
ancestry, continue to steal the best of the ideas, and reintroduce them
using agile development methodologies—with a healthy dose of
anti-governance to stop SOA governance from reappearing. And then, there’s the side job of keeping ITIL and friends safely inside the operational teams, where they thrive.

Vince Power is a Solution Architect with a focus on cloud adoption and
technology implementations using open source-based technologies. He has
extensive experience with core computing and networking (IaaS), identity
and access management (IAM), application platforms (PaaS), and
continuous delivery.


SUSE Sessions you can’t afford to miss at OpenStack Summit Boston

Friday, 5 May, 2017

The following blog post has been contributed by Armando Migliaccio, Distinguished Engineer at SUSE.

OpenStack Summit Boston starts next Monday, and here at SUSE we are in full swing ensuring our sessions provide all the insights you need to assist with your digital transformation.  The Summit is a four-day event full of presentations, panels, workshops, and educational opportunities to explore through the OpenStack Academy. While you can expect to hear conversations around cloud strategy, business case development, operational best practices, and technical deep dives, I would like to personally invite you to join the conversations I will be leading next week.

Tuesday, May 9 | 12:10-12:45 pm | Get Me a Network: From Boot to Woot! | Level 3 – Ballroom C

In this session, I will be joined by Matt Riedemann from Huawei, and we will present get-me-a-network, an OpenStack feature that has been fully complete since the Newton release. With get-me-a-network, rather than letting cloud users handle networking setup steps themselves, nova and neutron coordinate the provisioning of networking resources during the VM boot process, making the networking setup transparent. If you want to learn how to use this feature in your deployment or want to provide feedback on how to improve it, please join us by signing up for the session.
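
For readers who want to see what this looks like in practice, the feature is exposed through the compute API’s automatic network allocation; a minimal sketch with a recent OpenStack client, where the image, flavor, and server names are placeholders:

openstack server create --image sles12sp2 --flavor m1.small --nic auto my-server

With --nic auto, nova asks neutron to provision the required network resources during boot instead of requiring the user to create networks, subnets, and ports up front.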

Wednesday, May 10 | 9:50-10:30 am | Project Update – Neutron | Level 2 – MR 203

Join Kevin Benton and me (the current and former Neutron PTLs, respectively) as we update the audience on the latest and forthcoming features being worked on in the Neutron project. Developers who are interested in contributing to this project are strongly encouraged to attend, as are users and product managers who want to know more about the project’s latest features, their value to users, and the development team’s roadmap. Please join us by signing up for this session.

Wednesday, May 10 | 2:40-3:20 pm | Being a Project Team Lead (PTL): The Good, the Bad and the Ugly | Level 2 – MR 207

Considering becoming a Project Team Lead? Attend this session first so that Steve Martinelli (IBM), Matt Riedemann (Huawei), and I can give you the good, the bad, and the ugly!  We will take you on a light-hearted journey through what it means to be the Project Team Lead for an OpenStack project. Together we have a combined total of nine OpenStack releases under our belts, and we will teach you the secrets of leading in an open source community and what sacrifices are necessary in order to be successful. If you are interested in being more effective in open source communities, please join us by signing up for the session.

Be sure to add these sessions to your schedule and check out the rest of what the SUSE team has in store for this event! Stop by booth B6 to say hi!

Armando Migliaccio has been the PTL for the Mitaka, Newton, and Ocata releases of the OpenStack Neutron project. He has been involved in the OpenStack community since its early days and has worked on a number of OpenStack projects and solutions in various capacities. Most recently, he has been working on open source projects such as OpenDaylight and Open vSwitch to help the industry usher in a new era of networking. When he is away from his desk, Armando enjoys sunny California between one trip and another.

SUSE Enterprise Storage 5 Beta 1 is available!

Tuesday, 2 May, 2017

Today, we are happy to announce our first Public Beta for SUSE Enterprise Storage 5!

 

We are inviting all developers, IT administrators, power users, and geekos worldwide to beta test the SUSE Enterprise Storage product in our SUSE Public Beta Program.

SUSE Enterprise Storage 5 Beta 1 is available now! If you are interested, we strongly recommend that you subscribe to our storage-beta mailing list and visit our dedicated SUSE Enterprise Storage Beta web page.

 

SUSE Enterprise Storage is an intelligent software-defined storage solution, powered by Ceph technology, that enables IT to transform its enterprise storage infrastructure to deliver highly scalable, resilient storage that is cost-efficient and able to adapt seamlessly to changing business and data demands.

More information about the SUSE Enterprise Storage 5 beta product:

SUSE Enterprise Storage 5, based on the Ceph Luminous release, broadens the scope and use cases for the SUSE software-defined storage solution. The new “BlueStore” enablement increases performance by a factor of two or more. More pervasive support for erasure coding increases the efficiency of the fault-tolerant solution, and efficiency is further enhanced with data compression.

SUSE Enterprise Storage 5 is the ideal solution for compliance, archive, backup, and large-data workloads. Large-data applications include video surveillance, CCTV, online presence and training, streaming media, X-rays, seismic processing, genomic mapping, and CAD. Backup and archive applications include Veritas NetBackup, Commvault, HPE Data Protector, and compliance solutions such as iTernity.

Technical details:

  • The ability to service environments that require high levels of performance through the enablement of “BlueStore”, a new storage backend for Ceph. SUSE Enterprise Storage 5 doubles the write performance of previous releases, coupled with significant reductions in I/O latency.
  • The ability to free up capacity and reduce data footprint via the BlueStore-enabled data compression feature.
  • Increased disk space efficiency of a fault-tolerant solution through the enablement of erasure coding for replicated block devices and CephFS data.
  • Lowered operational cost with an expanded, advanced graphical user interface for simplified management and improved cost efficiency, using the next-generation openATTIC open source storage management system.
  • Simplified cluster management and orchestration through enhanced Salt integration.
  • Production support of the NFS Gateway, enabling legacy applications that need a filesystem interface to access data to coexist with cloud-native applications.
  • Technology preview of Ceph’s ability to export a file system to CIFS/Samba for heterogeneous connectivity.

Year 1

Tuesday, 2 May, 2017

A little more than a year ago, when I told my (Swiss) wife that I was resigning from one of the most stable and profitable corporations in the country to join an open source company, she thought it was another one of my weird jokes (especially as the date was April 1st, April Fools’ Day). The reasons were obvious to me, and I think she gets it by now too 🙂

It feels both like it was yesterday and like it was a long time ago, considering all the things that happened in that period.

It’s really been a fantastic year in which I had the chance to meet many, many amazing colleagues, as well as the communities we’re involved with and the partners and customers we serve. We’ve also released a lot of new versions of our products, expanded and launched many new partnerships, kept growing our business, completed our first acquisitions, welcomed new colleagues, and more. And among all the conferences and events I’ve attended, my first SUSECON will remain a great memory and an illustration of what SUSE is all about: open source, technology, partners, customers, and fun!

Looking at what’s coming next year, it is no less exciting, with more new colleagues to work with and learn from, new solutions we’re going to launch, and more!


Looking ahead to SUSECON 2017

Friday, 28 April, 2017

Is it that time again? You bet. SUSECON is right around the corner and being held in Prague, Czechia, September 25-29, 2017.

If you’ve attended SUSECON in the past, you know that anything can happen (live music, giveaways, Minecraft on a giant screen, to name a few…), but more importantly, the event offers 100+ hours of hands-on training, 140+ educational sessions including 60 tutorials, certification opportunities, and more.

Why should you attend?

Attendees will learn how they can use open software-defined infrastructure and application platforms to reduce costs and complexity, anticipate and quickly leverage the latest advancements, and move the business forward while reducing unnecessary risk. SUSECON is also a great networking opportunity where you can make tons of connections, old and new.

Interested in presenting?

The Call for Papers is now open! SUSE is officially accepting presentation, demonstration, workshop, and lab submissions, and we encourage our customers and partners to submit proposals for the following topics:

  • Big Data
  • Business Applications & Middleware on Linux
  • Cloud Technology / Cloud Infrastructure
  • Distributed Storage
  • Enterprise Linux
  • High Availability
  • High Performance Computing & Real Time
  • Interoperability in Heterogeneous Environments
  • Linux on Mainframes
  • Linux Systems Management
  • Open Source Community
  • Retail and POS Infrastructure
  • SAP Applications on Linux
  • Security & Compliance
  • Software-defined Solutions
  • Support & Maintenance
  • UNIX to Linux Transitions
  • Virtualization Technologies

Join us at SUSECON 2017 and see how “There’s More to ‘Open’ than Just the Code.” Register today!

Press Release: Rancher Labs Partners with Docker to Embed Docker Enterprise Edition into Rancher Platform

Tuesday, 18 April, 2017

Docker Enterprise Edition technology and support now available from Rancher Labs

Cupertino, Calif. – April 18, 2017 – Rancher Labs, a provider of container management software, today announced it has partnered with
Docker to integrate Docker Enterprise Edition
(Docker EE) Basic into its Rancher container management platform. Users
will be able to access the usability, security and portability benefits
of Docker EE through the easy to use Rancher interface. Docker provides
a powerful combination of runtime with integrated orchestration,
security and networking capabilities. Rancher provides users with easy
access to these Docker EE capabilities, as well as the Rancher
platform’s rich set of infrastructure services and other container
orchestration tools. Users will now be able to purchase support for both
Docker Enterprise Edition and the Rancher container management platform
directly from Rancher Labs. “Since we started Rancher Labs, we have
strived to provide users with a native Docker experience,” said Sheng
Liang, co-founder and CEO, Rancher Labs. “As a result of this
partnership, the native Docker experience in the Rancher platform
expands to include Docker’s enterprise-grade security, management and
orchestration capabilities, all of which is fully supported by Rancher
Labs.” Rancher is a comprehensive container management platform that, in
conjunction with Docker EE, helps to further reduce the barriers to
adopting containers. Users no longer need to develop the technical
skills required to integrate a complex set of open source technologies.
Infrastructure services and drivers, such as networking, storage and
load balancers, are easily configured for each Docker EE environment.
The robust Rancher application catalog makes it simple to package
configuration files as templates and share them across the organization.
The partnership enables Rancher customers to obtain official support
from Rancher Labs for Docker Enterprise Edition. Docker EE is a fully
integrated container platform that includes built in orchestration
(swarm mode), security, networking, application composition, and many
other aspects of the container lifecycle. Rancher users will now be able
to easily deploy Docker Enterprise Edition clusters and take advantage
of features such as:

  • Certified infrastructure, which provides an integrated environment for enterprise Linux (CentOS, Oracle Linux, RHEL, SLES, Ubuntu), Windows Server 2016, and cloud providers like AWS and Azure.
  • Certified containers that provide trusted ISV products packaged and distributed as Docker containers – built with secure best practices and cooperative support.
  • Certified networking and volume plugins, which are easy to download and install into the Docker EE environment.

“The release of Docker Enterprise Edition last month was a huge
milestone for us due to its integrated, and broad support for both Linux
and Windows operating systems, as well as for cloud providers, including
AWS and Azure,” said Nick Stinemates, VP Business Development &
Technical Alliances, Docker. “We are committed to offering our users
choice, so it was natural to partner with Rancher Labs to embed Docker
Enterprise Edition into the Rancher platform. Users will now have the
ability to run Docker Enterprise Edition on any cloud from the easy to
use Rancher interface, while also benefitting from a Docker solution
that provides a simplified yet rich user experience with its integrated
runtime, multi-tenant orchestration, security, and management
capabilities as well as access to an ecosystem of certified
technologies.”

Product Availability

Rancher with Docker EE Basic is available in the US and Europe
immediately, with more advanced editions and other territories planned
for the future. For additional information on Rancher software and to learn
more about Rancher Labs, please visit
www.rancher.com or contact
sales@rancher.com.

Supporting Resources

  • Company blog
  • Twitter
  • LinkedIn

About Rancher Labs

Rancher Labs builds
innovative, open source software for enterprises leveraging containers
to accelerate software development and improve IT operations. With
infrastructure services management and robust container orchestration,
as well as commercially-supported distributions of Kubernetes, Mesos and
Docker Enterprise Edition, the flagship
Rancher container management platform
allows users to easily manage all aspects of running containers in
production, on any infrastructure.
RancherOS is a simplified Linux
distribution built from containers for running containers. For
additional information, please visit
www.rancher.com. All product and company
names herein may be trademarks of their registered owners.

Media Contact
Eleni Laughlin, MindsharePR, (510) 406-0798
eleni@mindsharepr.com


Integrating SUSE Linux Enterprise Instances With Amazon EC2 Systems Manager

Monday, 17 April, 2017

At AWS re:Invent 2016, Amazon announced the availability of Amazon EC2 Systems Manager. AWS SSM is a collection of capabilities that helps automate management tasks in a hybrid cloud environment, providing the ability to manage your existing on-premises infrastructure seamlessly alongside AWS.

Some of the features available in AWS SSM include:

  • Run Command – Remotely and securely manage the configuration of your managed instances at scale.
  • State Manager – Automate the process of keeping your managed instances in a defined state.
  • Inventory Manager – Automate the process of collecting software inventory from managed instances.
  • Automation – Automate common maintenance and deployment tasks.

Additional capabilities shared across the four services include:

  • Maintenance Window – Set up recurring schedules for managed instances to execute administrative tasks like installing patches and updates without interrupting business-critical operations.
  • Parameter Store – Centralize the management of configuration data.
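
As a quick taste of Parameter Store, you can store a configuration value and read it back with the aws-cli; a sketch, with a placeholder parameter name and value:

aws ssm put-parameter --name db-host --value db.internal.example.com --type String

aws ssm get-parameters --names db-host

Scripts run through Run Command can then read the parameter instead of hard-coding the value on each instance.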

The SSM User Guide provides all the details of the features offered by the service. The following outlines how to set up SSM on your SUSE Linux Enterprise Server instances.

SSM Setup

For this tutorial, we will focus on EC2 instances and the Run Command. For more information on setting up SSM for on-premises systems, see the “Setting Up Systems Manager in Hybrid Environments” section of the Amazon user guide.

The following steps are required to get started with AWS SSM:

  • Launch an instance with the proper role
  • Install the amazon-ssm-agent on the new instance
  • (Optional) Add permissions to your user

To enable systems management on an instance, the instance must be launched with the proper role. See the “Configuring Security Roles for Systems Manager” section of the user guide.

Once the EC2 instance is running, it’s time to install the agent. For SUSE Linux Enterprise Server, the agent is available in the Public Cloud Module. Use the following commands to install, enable, and start the SSM agent (as root):

zypper refresh

zypper in amazon-ssm-agent

systemctl enable amazon-ssm-agent

systemctl start amazon-ssm-agent
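
If you want to confirm locally that the agent came up cleanly, a standard systemd status check will show it (this step is optional):

systemctl status amazon-ssm-agent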

The agent is now running on the instance and ready to accept commands.

Remote Management with aws-cli

With the setup complete, we can now manage the instance remotely and set up automated tasks. Systems with a running SSM agent can be managed with the aws-cli or through the web console. SUSE Linux Enterprise Server 12 and later images have the aws-cli package pre-installed, and you can configure the CLI with:

aws configure

If you want to run the aws-cli on your local system, the package is part of the Public Cloud Module repository and can be installed by running (as root):

zypper in aws-cli

At this point we should have a SUSE Linux Enterprise Server instance running with the proper role and the amazon-ssm-agent active. Additionally, we have set up a user with access to SSM and installed the aws-cli to manage the instance remotely. To confirm the instance is accessible, run the following command:

aws ssm describe-instance-information --instance-information-filter-list key=InstanceIds,valueSet={instanceid}

This command should return information regarding the instance.

{
    "InstanceInformationList": [
        {
            "IsLatestVersion": false,
            "ComputerName": "ip-10.10.10.10.us-west-1.compute.internal",
            "PingStatus": "Online",
            "InstanceId": "{instanceid}",
            "ResourceType": "EC2Instance",
            "AgentVersion": "2.0.558.0",
            "IPAddress": "10.10.10.10",
            "PlatformType": "Linux",
            "LastPingDateTime": 1482355841.974
        }
    ]
}

Now that we have confirmed the agent is running properly on the instance it’s time to send remote commands.

Run Command

Run Command, which offers a way to remotely manage instances in Amazon Elastic Compute Cloud (EC2), is one of the features provided by AWS SSM. To initiate a command on the instance, send it as follows:

command_id=$(aws ssm send-command --instance-ids "{instanceid}" --document-name "AWS-RunShellScript" --comment "Zypper Update" --parameters commands="sudo zypper -n up" --output text --query "Command.CommandId")

This will send the command “sudo zypper -n up” to all instances listed, triggering an update on each instance and returning the output. The --query option returns just the CommandId, which is the ID we will use to retrieve the command status and output.

aws ssm list-command-invocations --command-id $command_id --details

You should see information about the command that was run. As a note, the output of the command is truncated after the first 2500 characters. To view the entire output, you can configure the command to log its output to an S3 bucket (see the example after the output below).

{
    "CommandInvocations": [
        {
            "Comment": "Zypper Update",
            "Status": "Success",
            "CommandPlugins": [
                {
                    "Status": "Success",
                    "ResponseStartDateTime": 1482355637.705,
                    "StandardErrorUrl": "",
                    "OutputS3BucketName": "",
                    "OutputS3Region": "us-west-1",
                    "OutputS3KeyPrefix": "",
                    "ResponseCode": 0,
                    "Output": "---Output truncated---",
                    "ResponseFinishDateTime": 1482355726.472,
                    "StatusDetails": "Success",
                    "StandardOutputUrl": "",
                    "Name": "aws:runShellScript"
                }
            ],
            "ServiceRole": "",
            "InstanceId": "{instanceid}",
            "DocumentName": "AWS-RunShellScript",
            "NotificationConfig": {
                "NotificationArn": "",
                "NotificationEvents": [],
                "NotificationType": ""
            },
            "StatusDetails": "Success",
            "StandardOutputUrl": "",
            "StandardErrorUrl": "",
            "InstanceName": "",
            "CommandId": "{commandid}",
            "RequestedDateTime": 1482355636.877
        }
    ]
}
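
If you need the full, untruncated output, send-command can also write it to S3. A sketch, assuming a bucket named my-ssm-output that the instance role is allowed to write to (both the bucket name and the key prefix are placeholders):

aws ssm send-command --instance-ids "{instanceid}" --document-name "AWS-RunShellScript" --comment "Zypper Update" --parameters commands="sudo zypper -n up" --output-s3-bucket-name my-ssm-output --output-s3-key-prefix ssm-logs

With that in place, the OutputS3BucketName and StandardOutputUrl fields, shown empty in the output above, should point at the uploaded logs.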

As you can see, Run Command is useful for initiating tasks remotely on your instances. The send-command function allows a maximum of 50 instance IDs per invocation; for larger fleets you can target instances by tag, as sketched below. Run Command can also be used in conjunction with other SSM services such as Automation (to automatically create up-to-date images) and State Manager (to periodically update instances).
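
A sketch of tag-based targeting, assuming your instances carry an Environment=Production tag (the tag key and value are placeholders):

aws ssm send-command --targets "Key=tag:Environment,Values=Production" --document-name "AWS-RunShellScript" --comment "Zypper Update" --parameters commands="sudo zypper -n up"

This runs the same update document against every managed instance that matches the tag, without having to enumerate the instance IDs yourself.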