SUSE OpenStack Cloud 6

Release Notes

These release notes are generic for all SUSE OpenStack Cloud 6 components. Some parts may not apply to a particular component.

Documentation can be found in the docu language directories on the media. Documentation (if installed) is available below the /usr/share/doc/ directory of an installed system. The latest documentation can also be found online at http://www.suse.com/documentation/cloud/.

Publication Date: 2016-03-03, Version: 6.20160222

1 SUSE OpenStack Cloud

Powered by OpenStack™, SUSE OpenStack Cloud is an open source enterprise cloud computing platform that enables easy deployment and seamless management of an Infrastructure-as-a-Service (IaaS) private cloud.

2 Support Statement for SUSE OpenStack Cloud

To receive support, customers need an appropriate subscription with SUSE; for more information, see http://www.suse.com/products/server/services-and-support/.

3 Major Changes in SUSE OpenStack Cloud 6

SUSE OpenStack Cloud 6 is a major update to SUSE OpenStack Cloud and comes with many new features, improvements and bug fixes. The following list highlights a selection of the major changes:

  • OpenStack has been updated to the 2015.2 (Liberty) release (https://wiki.openstack.org/wiki/ReleaseNotes/Liberty), and the deployment framework has been updated accordingly to support new features. In addition to the features that come by default with OpenStack 2015.1 (Kilo) (https://wiki.openstack.org/wiki/ReleaseNotes/Kilo) and OpenStack 2015.2 (Liberty) (https://wiki.openstack.org/wiki/ReleaseNotes/Liberty), the following notable features have been added:

    • The File Share Module for OpenStack (Manila) is fully integrated, and the controller side can be deployed with High Availability.

    • OpenStack Bare Metal (Ironic) and DNS-as-a-Service for OpenStack (Designate) are available as technology previews. However, they must be installed and configured manually.

    • The Docker driver for OpenStack Compute (Nova) is available as technology preview. Moreover, Docker resources are available for use in OpenStack Orchestration (Heat).

    • The z/VM driver for OpenStack Compute (Nova) and OpenStack Networking (Neutron) is available.

    • Multiple external networks can be defined for OpenStack Networking (Neutron), instead of a single floating network.

    • Distributed Virtual Routers (DVR) for OpenStack Networking (Neutron) are fully supported, and can be used with VLAN.

    • The OpenStack Dashboard (Horizon) can be configured to allow users from multiple domains to log in.

    • OpenStack Block Storage (Cinder) can be used as default backend for OpenStack Image (Glance).

    • OpenStack Block Storage (Cinder) and OpenStack Networking (Neutron) are now configured to be much quieter in syslog.

    • The default API for OpenStack Identity (Keystone) is now the v3 API. As a side effect, the openstack command line utility should be preferred to the keystone command line utility, as the latter does not work with the v3 API (see the example at the end of this list).

    • The default token provider for OpenStack Identity (Keystone) was changed to UUID in response to the Keystone PKI token revocation bypass (CVE-2015-7546). Using the PKI provider is no longer recommended, but it remains available as an option.

    • The plugin to deploy the Tempest OpenStack test suite is now installed by default. It is not officially supported, but it can be used to check that the deployment of OpenStack is functional.

    • Several expert settings have been added, such as:

      • the ability to define the token expiration for OpenStack Identity (Keystone);

      • the ability to define a custom RBAC policy and domain-specific drivers for OpenStack Identity (Keystone);

      • the ability to enable the v3 API of OpenStack Image (Glance);

      • the ability to convert images on import in OpenStack Image (Glance);

      • the ability to use multiple VMware clusters and datastores in OpenStack Block Storage (Cinder) and OpenStack Image (Glance);

      • the ability to directly initialize volumes from images using the OpenStack Block Storage (Cinder) backend, bypassing the OpenStack Image (Glance) API;

      • the ability to define the default volume type for OpenStack Block Storage (Cinder);

      • the ability to not automatically create the fixed and floating networks in OpenStack Networking (Neutron);

      • the ability to enforce the use of config drive in OpenStack Compute (Nova);

      • the ability to use the new launch instance dialog in OpenStack Dashboard (Horizon);

      • the ability to use the convergence engine in OpenStack Orchestration (Heat);

      • and many more!

  • The Administration Server and all nodes used for OpenStack now use SUSE Linux Enterprise Server 12 SP1 as the operating system.

  • SUSE OpenStack Cloud 6 integrates with SUSE Enterprise Storage 2.1. It can either deploy Ceph as an integrated part of SUSE OpenStack Cloud 6 or connect to an externally deployed SUSE Enterprise Storage cluster. Ceph support requires a subscription for SUSE Enterprise Storage.

  • The Crowbar deployment framework also comes with several highlights:

    • The web application can now be started directly after the package installation, and the installation process of the product can be initiated from the browser. Manual installation via console is still supported.

    • Management of package repositories for nodes has been simplified. The repositories can now easily be checked, enabled and disabled on a single web page, instead of relying on checks at installation time.

    • Backups of the data for the deployment infrastructure can now be created and restored directly from the web interface.

    • Nodes of multiple architectures are now supported. SUSE OpenStack Cloud currently supports only x86_64 for the nodes it fully manages, but it also integrates with IBM z Systems to enable running instances on that architecture.

    • The backend was re-architected to enable non-disruptive upgrades to new versions of OpenStack.

    • The file system type used when installing a node can now be specified at allocation time.

    • Nodes with large disks (> 4 TB) can now correctly be installed.

    • Locking when applying proposals is more efficient, avoiding issues caused by concurrent runs of the application process.

    • The web application listens on port 80 by default, instead of the non-standard port 3000.

    • A new command line utility (crowbarctl) has been integrated, which will eventually replace the current command line utility (crowbar). It can be installed on any machine that has access to the admin network (a short usage sketch follows this list).

  • Various improvements to High Availability support have been included:

    • Compute nodes can be made highly available, enabling automatic evacuation of instances from a failed compute node. This is enabled by the new ability to add remote nodes to a Pacemaker cluster.

    • The deployment orchestration is more reliable when High Availability is used, with better synchronization of work on the various members of the cluster and the use of transactions to create batches of Pacemaker resources.

    • DRBD now uses peer authentication, and is configured with a variable synchronization rate for faster initial synchronization.

    • The Hawk web interface for Pacemaker has been updated to version 2, with a refreshed look and feel.
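
The Keystone v3 change mentioned in the list above affects how clients authenticate. The following is a minimal sketch of a client environment for the openstack command line utility; the endpoint address and credentials are examples only and must be adjusted to your deployment:

    # Keystone v3 needs domain information in addition to user and project
    export OS_AUTH_URL=http://192.168.124.81:5000/v3     # example controller address
    export OS_IDENTITY_API_VERSION=3
    export OS_USERNAME=admin
    export OS_PASSWORD=secret                             # example password
    export OS_PROJECT_NAME=admin
    export OS_USER_DOMAIN_NAME=Default
    export OS_PROJECT_DOMAIN_NAME=Default

    # The openstack client works with the v3 API; the deprecated keystone client does not
    openstack project list
    openstack user list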
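
Similarly, here is a short usage sketch for the new crowbarctl utility. The subcommand names shown are assumptions based on typical usage and may vary between versions; run the built-in help for the authoritative list:

    # Show the available subcommands
    crowbarctl help

    # List the nodes known to Crowbar (example subcommand)
    crowbarctl node list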

4 Technology Previews

Technology previews are packages, stacks, or features delivered by SUSE. These features are not supported. They may be functionally incomplete, unstable or in other ways not suitable for production use. They are mainly included for customer convenience and give customers a chance to test new technologies within an enterprise environment.

Whether a technology preview will be moved to a fully supported package later depends on customer and market feedback. A technology preview does not automatically result in support at a later point in time. Technology previews can be dropped at any time, and SUSE is not committed to providing a technology preview later in the product cycle.

Please give your SUSE representative feedback, including your experience and use case.

SUSE OpenStack Cloud 6 ships with the following technology previews:

  • Database-as-a-Service for OpenStack (Trove), and the respective Crowbar barclamp for deploying it.

  • OpenStack Bare Metal (Ironic).

  • Data Processing for OpenStack (Sahara).

  • DNS-as-a-Service for OpenStack (Designate).

  • Docker driver in Nova.

  • EqualLogic driver for Cinder.

  • MongoDB, as database for Ceilometer.

5 Deprecated Features

The following features are deprecated as of SUSE OpenStack Cloud 6:

  • Following the upstream deprecation that started in OpenStack 2014.1 (Icehouse), the XML format for OpenStack APIs is deprecated and unsupported. Migrating to the JSON format for the APIs is highly recommended. Most clients should not be impacted, as the most widely used client libraries are already using the JSON format.

  • The crowbar command line utility is deprecated in favor of the crowbarctl command line utility.

6 Upgrading to SUSE OpenStack Cloud 6

Upgrading to SUSE OpenStack Cloud 6 is supported from SUSE OpenStack Cloud 5, with the latest updates applied. If running a previous version, please first upgrade to SUSE OpenStack Cloud 5. If running without the updates, please first apply them.

The upgrade is done via a web interface that guides you through the process. The process requires downloading a data dump from the SUSE OpenStack Cloud 5 Administration Server, re-installing the Administration Server with SUSE OpenStack Cloud 6, and restoring the data dump on it.

As the OpenStack infrastructure will be turned off for the upgrade, it is important to suspend all running instances during the upgrade (see the example below). This does not have to happen at the very beginning of the upgrade procedure: to keep the downtime as short as possible, it can be postponed until after the Administration Server has been upgraded to SUSE OpenStack Cloud 6.
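
For example, a running instance can be suspended with the openstack client and resumed once the upgraded cloud is back up (the instance name is an example):

    # Suspend a running instance before the OpenStack services are stopped
    openstack server suspend example-instance

    # Resume it after the upgrade has completed
    openstack server resume example-instance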

It is also highly recommended to perform a backup of the OpenStack data.

The complete upgrade process is documented in the Deployment Guide, which can be found online at http://www.suse.com/documentation/cloud/.

7 Documentation and Other Information

  • Read the READMEs on the DVDs.

  • Get the detailed changelog information about a particular package from the RPM (with filename <FILENAME>):

    rpm --changelog -qp <FILENAME>.rpm
        
  • Check the ChangeLog file in the top level of DVD1 for a chronological log of all changes made to the updated packages.

  • Find more information in the docu directory of DVD1 of the SUSE OpenStack Cloud 6 DVDs. This directory includes PDF versions of the SUSE OpenStack Cloud documentation.

  • http://www.suse.com/documentation/cloud/ contains additional or updated documentation for SUSE OpenStack Cloud.

  • Visit http://www.suse.com/products/ for the latest product news from SUSE and http://www.suse.com/download-linux/source-code.html for additional information on the source code of SUSE Linux Enterprise products.

8 Limitations

  • The SLES 12 SP1 nodes deployed through SUSE OpenStack Cloud are not compatible with the Public Cloud Module for SLES 12 SP1, because SUSE OpenStack Cloud provides more recent versions of the OpenStack client tools.

  • The x86_64 architecture is the only supported architecture for the Administration Server and the nodes managed by SUSE OpenStack Cloud. Please note that the IBM z Systems integration relies on the OpenStack Compute (Nova) driver that translates commands to z/VM and that runs on an x86_64 node. More details about how to set up the IBM z Systems integration are available in the Deployment Guide.

9 Known Issues

  • The upgrade from SUSE OpenStack Cloud 5 depends on a maintenance update for SUSE OpenStack Cloud 5 that is in the process of being released.

  • Infoblox support is not yet available in SUSE OpenStack Cloud 6, so upgrading from a SUSE OpenStack Cloud 5 with Infoblox integration is not recommended for the time being.

  • Maintenance updates for the SLE HA Extension are required to make use of High Availability for compute nodes. They are in the process of being released.

  • Ceilometer integration for Hyper-V compute nodes is not fully functional.

  • In some cases, using High Availability with multicast transport on Neutron L3 nodes causes issues due to conflicts with the networking configuration required by Neutron. In the worst case, this can break the High Availability cluster. To avoid this, it is advised to use the unicast transport for High Availability (see the configuration sketch after this list).

  • Live migration of instances only works between homogeneous compute nodes: the nodes need to have the same CPU features (see the sketch after this list).

  • Removing barclamps from a node does not necessarily shut down the associated services or remove the associated packages. This means that you may run into problems when moving barclamp roles from one node to another. Manual remediation may be required in these cases.

  • No pre-built image for Heat or Trove is shipped with SUSE OpenStack Cloud; cloud administrators are responsible for creating such images.
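
Regarding the multicast issue above: in SUSE OpenStack Cloud the cluster communication settings are managed by the Pacemaker barclamp rather than edited by hand, but as a rough illustration, a corosync configuration using unicast transport looks similar to the following sketch (addresses are examples):

    totem {
        version: 2
        # udpu selects UDP unicast instead of multicast
        transport: udpu
    }

    nodelist {
        node {
            ring0_addr: 192.168.124.81    # example cluster node
        }
        node {
            ring0_addr: 192.168.124.82    # example cluster node
        }
    }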
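
Regarding the live migration limitation above, one common approach (not specific to SUSE OpenStack Cloud) is to make all compute nodes expose the same virtual CPU by pinning the guest CPU model in nova.conf on every compute node. The model below is only an example and should match the oldest CPU generation in the pool:

    [libvirt]
    # Expose an identical virtual CPU on all compute nodes so that live
    # migration is not rejected because of differing CPU features
    cpu_mode = custom
    cpu_model = SandyBridge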

10 How to Obtain Source Code

This SUSE product includes materials licensed to SUSE under the GNU General Public License (GPL). The GPL requires SUSE to provide the source code that corresponds to the GPL-licensed material. The source code is available for download at http://www.suse.com/download-linux/source-code.html. Also, for up to three years after distribution of the SUSE product, upon request, SUSE will mail a copy of the source code. Requests should be sent by e-mail to sle_source_request@suse.com or as otherwise instructed at http://www.suse.com/download-linux/source-code.html. SUSE may charge a reasonable fee to recover distribution costs.
