SUSE Enterprise Storage 6
Release Notes #
SUSE Enterprise Storage provides a distributed storage architecture for many use cases that runs on commodity hardware platforms. SUSE Enterprise Storage combines Ceph with the enterprise engineering and support of SUSE. This document provides an overview of high-level general features, capabilities, and limitations of SUSE Enterprise Storage 6 and important product updates.
These release notes are updated periodically. The latest version is always available at https://www.suse.com/releasenotes. General documentation can be found at: https://documentation.suse.com/ses/6/.
1 About the Release Notes #
The most recent version of the Release Notes is available online at https://www.suse.com/releasenotes.
Entries can be listed multiple times if they are important and belong to multiple sections.
Release notes only list changes that happened between two subsequent releases. Always review all release notes documents that apply in your upgrade scenario.
2 SUSE Enterprise Storage #
SUSE Enterprise Storage 6 is an intelligent software-defined storage solution, powered by Ceph technology (https://ceph.com/), which enables you to transform your enterprise storage infrastructure. It provides IT organizations with a simple-to-manage, agile infrastructure with increased speed of delivery, durability, and reliability.
Accelerate innovation, reduce costs, and alleviate proprietary hardware lock-in by transforming your enterprise storage infrastructure with an open and unified intelligent software-defined storage solution. SUSE Enterprise Storage allows you to leverage commodity hardware platforms for enterprise-grade storage. SUSE Enterprise Storage 6 is an extension to SUSE Linux Enterprise.
2.1 What Is New? #
SUSE Enterprise Storage 6 introduces many innovative changes compared to SUSE Enterprise Storage 5.5. The most important changes are listed below:
- Ceph release. SUSE Enterprise Storage 6 is based on Ceph Nautilus v14.2.1.
- Ceph Dashboard. The Ceph Dashboard replaces openATTIC for managing and monitoring Ceph through a web interface. Inspired by and derived from openATTIC, it provides the same functionality, plus more.
- iSCSI Target Management. The ceph-iscsi framework replaces lrbd for managing iSCSI targets.
2.2 Additional Release Notes Documents #
SUSE Enterprise Storage is an extension to SUSE Linux Enterprise Server 15 SP1. Make sure to review the SUSE Linux Enterprise Server release notes in addition to this document: https://www.suse.com/releasenotes/x86_64/SUSE-SLES/15-SP1/.
2.3 Support and Life Cycle #
SUSE Enterprise Storage is backed by award-winning support from SUSE, an established technology leader with a proven history of delivering enterprise-quality support services.
SUSE Enterprise Storage 6 will be fully maintained and supported until 3 months after the release of SUSE Enterprise Storage 8.
For more information, see the support policy at https://www.suse.com/support/policy.html.
2.4 Support Statement for SUSE Enterprise Storage #
To receive support, you need an appropriate subscription with SUSE. For more information, see https://www.suse.com/support/?id=SUSE_Enterprise_Storage.
The following definitions apply:
- L1: Problem determination, which means technical support designed to provide compatibility information, usage support, ongoing maintenance, information gathering and basic troubleshooting using available documentation.
- L2: Problem isolation, which means technical support designed to analyze data, reproduce customer problems, isolate problem area and provide a resolution for problems not resolved by Level 1 or prepare for Level 3.
- L3: Problem resolution, which means technical support designed to resolve problems by engaging engineering to resolve product defects which have been identified by Level 2 Support.
For contracted customers and partners, SUSE Enterprise Storage 6 is delivered with L3 support for all packages, except for the following:
- Technology Previews, see Section 3, “Technology Previews”
- Sound, graphics, fonts and artwork
- Packages that require an additional customer contract
- Some packages shipped as part of the module Workstation Extension are L2-supported only
- Packages with names ending in -devel (containing header files and similar developer resources) will only be supported together with their main packages.
SUSE will only support the usage of original packages. That is, packages that are unchanged and not recompiled.
2.5 Documentation and Other Information #
2.5.1 On the Product Medium #
- For general product information, see the file README in the top level of the product medium.
- For a chronological log of all changes made to updated packages, see the file ChangeLog in the top level of the product medium.
- Detailed change log information about a particular package is available using RPM:
  rpm --changelog -qp FILE_NAME.rpm
  (Replace FILE_NAME.rpm with the name of the RPM.)
- For more information, see the directory docu of the product medium of SUSE Enterprise Storage 6.
2.5.2 Externally Provided Documentation #
- https://documentation.suse.com/ses/6/ contains additional or updated documentation for SUSE Enterprise Storage 6.
- Find a collection of White Papers in the SUSE Enterprise Storage Resource Library at https://www.suse.com/products/suse-enterprise-storage/#resources.
3 Technology Previews #
Technology previews are packages, stacks, or features delivered by SUSE which are not supported. They may be functionally incomplete, unstable or in other ways not suitable for production use. They are included for your convenience and give you a chance to test new technologies within an enterprise environment.
Whether a technology preview becomes a fully supported technology later depends on customer and market feedback. Technology previews can be dropped at any time and SUSE does not commit to providing a supported version of such technologies in the future.
Give your SUSE representative feedback about technology previews, including your experience and use case.
The following technologies are released as technology previews in SUSE Enterprise Storage 6:
- Ceph Core
  - Decreasing number of PGs per pool.
  - Automatic tuning of PG count based on cluster utilization or administrator hints.
  - Added the Coupled-Layer (Clay) experimental erasure code plug-in.
- Ceph Manager
  - pg_autoscaler module
  - zabbix module
- CephFS
  - ceph fs volume command-line interface for creating volumes (not to be confused with the ceph-volume command).
  - cephfs-shell tool for manipulating a CephFS file system without mounting it.
- ceph-volume
  - VM cache integration with Ceph (ceph-volume plugin).
- iSCSI
  - tcmu-runner RBD iSCSI back-store
4 Features #
This section includes an overview of new features of SUSE Enterprise Storage 6.
4.1 DeepSea #
- DeepSea disk profiles are replaced by DriveGroups (see the sketch after this list):
  - There is a separate role-storage (as opposed to the implicit role-storage assigned via the storage profiles).
  - The runner is replaced by the disks runner.
- Grafana and Prometheus now have their dedicated role in DeepSea.
- The remove and replace processes for OSDs have been reworked:
  - remove.osd is now osd.remove
  - replace.osd is now osd.replace
- Rebuild/migration of OSD nodes has been reworked.
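A minimal DriveGroups sketch follows. It assumes the DeepSea specification file /srv/salt/ceph/configuration/files/drive_groups.yml; the group name, target and device filters are example placeholders that need to be adapted to your hardware:

drive_group_default:
  target: '*'
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0

In this sketch, all rotational disks on the targeted minions become OSD data devices, and their DB/WAL is placed on the non-rotational (solid-state) devices.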
4.2 iSCSI Gateway #
The ceph-iscsi framework replaces lrbd for managing iSCSI targets. ceph-iscsi is managed via the gwcli command line interface and via the Ceph Dashboard, both of which use the REST API provided by the rbd-target-api service. See https://ceph.com/community/new-in-nautilus-ceph-iscsi-improvements/ for details.
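As a minimal sketch, an iSCSI target and a backing RBD image can be created interactively with gwcli; the IQN, pool and image names below are examples, and gateways, initiators and LUN mappings still need to be added as described in the documentation:

gwcli
/> cd /iscsi-targets
/iscsi-targets> create iqn.2003-01.com.example.gw:iscsi-igw
/> cd /disks
/disks> create pool=rbd image=disk_1 size=50G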
4.3 NFS-Ganesha #
The configuration for NFS-Ganesha exports is now stored as RADOS objects in the Ceph cluster (in SUSE Enterprise Storage 5.5, NFS-Ganesha exports were configured in /etc/ganesha/ganesha.conf). Each export is stored in a single RADOS object, and each NFS-Ganesha daemon has a single RADOS object containing references to the respective export object(s). The Ceph Dashboard can be used to manage exports.
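To inspect the export objects directly, the rados command can be used. The pool and namespace below are placeholders for whatever your deployment configured for NFS-Ganesha:

rados --pool POOL_NAME --namespace NAMESPACE_NAME ls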
4.4 Monitoring #
Ceph metrics are now collected by the prometheus Ceph Manager module instead of the standalone Prometheus exporter.
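If the module has not already been enabled by DeepSea, it can be enabled manually. By default the module exposes metrics on TCP port 9283 of every host running a Ceph Manager; the second command shows the resulting endpoint:

ceph mgr module enable prometheus
ceph mgr services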
4.5 Ceph Dashboard #
The Ceph Dashboard replaces openATTIC and provides the following additional new features:
- Support for multiple users/roles: The dashboard supports multiple user accounts with different permissions (roles). User accounts and roles can be managed both on the command line and via the WebUI (see the example after this list).
- Single Sign-On (SSO): The dashboard supports authentication via an external identity provider using the SAML 2.0 protocol.
- Auditing: The dashboard back-end can be configured to log all PUT, POST and DELETE API requests in the Ceph audit log.
- SSL/TLS support: All HTTP communication between the web browser and the dashboard is secured via SSL.
- New landing page, showing more metrics and health info.
- Extended I18N support (languages include de_DE, es_ES, fr_FR, id_ID, it_IT, ja_JP, pl_PL, pt_BR, zh_CN, zh_TW).
- REST API documentation with the Swagger API. This makes the REST API self-documenting and makes it possible to quickly test REST API calls via the web browser, for example if you want to perform management tasks via a custom script or application. The Dashboard REST API supports version 3 of the OpenAPI spec, which can be obtained from https://HOST:PORT/api.json. See https://github.com/OAI/OpenAPI-Specification for more details. The Swagger-based API documentation can be accessed from the dashboard via the help (question mark) icon in the top right of the dashboard. This opens the REST API documentation in a new browser window/tab. It can also be accessed directly via https://HOST:PORT/docs.
- Cluster logs: Display the latest updates to the cluster’s event and audit log files.
- Configuration Editor: View all available configuration options, their description, type and default values, and edit the current values.
- Monitors: Lists all MONs, their quorum status and open sessions.
- RBD mirroring: Enable and configure RBD mirroring to a remote Ceph server. Lists all active sync daemons and their status, pools and RBD images including their synchronization state.
- CephFS: Lists all active file system clients and associated pools, including their usage statistics.
- Object Gateway: Lists all active object gateways and their performance counters.
- Ceph Manager Modules: Enable and disable all Ceph Manager modules, change the module-specific configuration settings.
- iSCSI improvements:
  - Modifications are now possible with delta changes (a change in one target will not cause downtime in other targets).
  - Added support for managing the tcmu-runner back-store (technology preview).
  - Removed dependency on Salt.
  - Added an iSCSI overview page.
  - It is now possible to see the number of active sessions per target or node.
- NFS-Ganesha Management:
  - Directly manages NFS-Ganesha exports without depending on Salt.
  - Daemons are reloaded via RADOS object notifications.
- Support for configuring global OSD flags.
- OSD management:
  - Mark OSDs as up/down/out.
  - Perform scrub operations.
  - Select between different recovery profiles to adjust the level of backfilling activity.
- Support for embedded Grafana 5.x dashboards, which have been updated to support new metrics collected by the prometheus Ceph Manager module.
- Prometheus Alert-Manager notifications and alert listings.
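As an example of command line user management, the following creates a dashboard user and assigns it the built-in administrator role; the user name and password are placeholders:

ceph dashboard ac-user-create admin2 PASSWORD
ceph dashboard ac-user-set-roles admin2 administrator
ceph dashboard ac-user-show admin2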
4.6 RADOS (Ceph core) #
- Monitors: Daemons now use significantly less disk space when undergoing recovery or rebalancing operations.
- BlueStore:
  - More detailed space utilization statistics for (newly created) OSDs.
  - Now alerts about BlueFS spillover, legacy stats, lack of compressor plugins and main device size mismatch.
  - The default allocator is now set to bitmap.
  - Added repair capability.
  - Now supports TRIM/DISCARD for devices.
- ceph-objectstore-tool:
  - Added BlueStore main device expansion capability.
  - Added BlueFS volumes migration capability.
- OSD:
  - Memory usage is now autotunable and controlled via the osd_memory_... options.
  - The most important PGs and objects are now effectively prioritized when performing recovery and backfill.
  - The NUMA node can easily be monitored via the ceph osd numa-status command, and configured via the osd_numa_node configuration option.
  - A new asynchronous recovery feature reduces the tail latency of requests when the OSDs are recovering from a recent failure.
  - Scrubbing is now preempted by conflicting requests, reducing tail latency.
- Management/Usability:
  - Configuration options can now be centrally stored and managed by the Ceph Monitors (see the examples after this list).
  - Physical storage devices consumed by OSD and monitor daemons are now tracked by the cluster, along with health metrics (S.M.A.R.T.). These are supported via the Ceph Manager devicehealth module, and via command line interaction.
  - Progress for long-running background processes (such as recovery after a device failure) is now reported as part of ceph status.
- Operations:
  - The default value for mon_crush_min_required_version has been changed from firefly to hammer, which means the cluster will issue a health warning if your CRUSH tunables are older than hammer. A small (but non-zero) amount of data will move around when making the switch to hammer tunables.
  - If possible, we recommend that you set the oldest allowed client to hammer or later. You can tell what the current oldest allowed client is by issuing the following command:
    ceph osd dump | grep min_compat_client
    If the current value is older than hammer, you can tell whether it is safe to make this change by verifying that there are no clients older than hammer currently connected to the cluster. To do so, issue the following command:
    ceph features
  - The newer straw2 CRUSH bucket type was introduced in hammer. Ensuring that all clients are hammer or newer allows new features that are only supported for straw2 buckets to be used, including the crush-compat mode for the Balancer.
- SUSE Enterprise Storage 6 includes the new Messenger protocol version 2 (also called the “wire protocol”). The new protocol version brings support for on-the-wire encryption. Ceph daemons use this protocol (often abbreviated to "msgr") internally to communicate with one another. Upon initial release of SUSE Enterprise Storage 6, this protocol version was included as a technology preview. However, it is now considered fully supported.
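As an illustration of the centralized configuration store and of raising the minimum client release, the following commands store an OSD option in the monitors' configuration database and then raise the oldest allowed client to hammer. The memory target value is only an example, and the client requirement should only be raised after ceph features shows no older clients:

ceph config set osd osd_memory_target 4294967296
ceph config get osd osd_memory_target
ceph osd set-require-min-compat-client hammer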
4.7 Ceph Object Gateway (RGW) #
A new RGW front-end, called beast and based on Boost (https://github.com/boostorg/beast), is now available and recommended as a replacement for civetweb. While civetweb will continue to be supported, upstream RGW development will focus on beast for the foreseeable future. To replace civetweb with beast, change the configuration from
rgw frontends = civetweb port=80
to
rgw frontends = beast port=80
RGW now supports S3 life cycle transitions for tiering between storage classes. It can also replicate a zone (or a subset of buckets) to an external cloud storage service, such as S3 with AWSv2 authentication.
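As a sketch, an additional storage class can be added to the default placement target and backed by its own data pool; the storage class name COLD and the pool name are examples, and life cycle rules referencing the new class are then uploaded via a regular S3 client:

radosgw-admin zonegroup placement add --rgw-zonegroup default --placement-id default-placement --storage-class COLD
radosgw-admin zone placement add --rgw-zone default --placement-id default-placement --storage-class COLD --data-pool default.rgw.cold.data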
4.8 CephFS #
- Snapshots are now stable when combined with multiple MDS daemons.
- MDS stability has been greatly improved for large caches and long-running clients with a lot of RAM. Cache trimming and client capability recall is now throttled to prevent overloading the MDS.
- The MDS configuration options mds_standby_for_*, mon_force_standby_active, and mds_standby_replay are now obsolete. Instead, the operator may now set the new allow_standby_replay flag on the CephFS file system (see the example after this list). This setting causes standbys to become standby-replay for any available rank in the file system.
- The MDS now supports dropping its cache, which concurrently asks clients to trim their caches. This is done using the MDS admin socket cache drop command.
- It is now possible to check the progress of an ongoing scrub in the MDS. Additionally, a scrub may be paused or aborted.
- A new interface for creating volumes is provided via the ceph fs volume command-line interface (not to be confused with the ceph-volume command) (technology preview).
- A new cephfs-shell tool is available for manipulating a CephFS file system without mounting it (technology preview).
- CephFS-related output from ceph status has been reformatted for brevity, clarity, and usefulness.
- Lazy IO has been revamped. It can be turned on by the client using the new CEPH_O_LAZY flag to the ceph_open C/C++ API or via the configuration option client_force_lazyio.
- The CephFS file system can now be brought down rapidly via the ceph fs fail command.
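For example, assuming a file system named cephfs and an MDS daemon named mds.a, the new flag and commands can be used as follows (the cache drop command uses the admin socket and therefore must be run on the node hosting that MDS; ceph fs fail takes the file system offline):

ceph fs set cephfs allow_standby_replay true
ceph daemon mds.a cache drop
ceph fs fail cephfs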
4.9 Ceph Manager Modules #
The following list of modules (plugins) for the Ceph Manager is supported. Modules not listed here are not supported.
prometheus
dashboard
balancer
orchestrator_cli
iostat
crash
telemetry
progress
volumes
status
devicehealth
restful
rbd_support
pg_autoscaler (technology preview)
zabbix (technology preview)
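The enabled and available modules can be listed, and individual modules switched on or off, with the following commands (telemetry is used as an example module):

ceph mgr module ls
ceph mgr module enable telemetry
ceph mgr module disable telemetry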
4.10 RADOS Block Device (RBD) #
- Image clones no longer require explicit protect and unprotect steps.
- Images can be deep-copied (including any clone linkage to a parent image and associated snapshots) to new pools or with altered data layouts.
- Images can be live-migrated with minimal downtime to assist with moving images between pools or to new layouts.
- The new rbd perf image iotop and rbd perf image iostat commands provide an iotop- and iostat-like IO monitor for all RBD images (see the examples after this list).
- The Ceph Manager module prometheus now optionally includes an IO monitor for all RBD images.
- Support for separate image namespaces within a pool for tenant isolation.
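The following sketch shows the new monitoring and namespace commands; the pool name rbd, the namespace project1 and the image name are examples:

rbd perf image iotop
rbd perf image iostat
rbd namespace create rbd/project1
rbd create --size 1G rbd/project1/image1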
4.11 Samba Gateway #
- New ceph_snapshots VFS module, to expose CephFS snapshots as Previous Versions in Windows Explorer.
- Samba shares can be backed by a kernel CephFS mount point, as a faster but less flexible alternative to vfs_ceph (see the sketch after this list).
- SMB2+ leases are supported when share paths are only accessed via Samba.
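A minimal smb.conf share sketch follows, assuming a CephX user named samba and the default Ceph configuration path; the share name is an example, and the vfs_ceph and vfs_ceph_snapshots man pages document the full set of options:

[cephfs-share]
    path = /
    vfs objects = ceph_snapshots ceph
    ceph:config_file = /etc/ceph/ceph.conf
    ceph:user_id = samba
    read only = no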
5 Known Issues #
This is a list of known issues for this release.
- Upgrading from openATTIC to the Ceph Dashboard will not migrate existing user accounts/passwords.
- The Ceph Dashboard does not provide a feature similar to the API recorder of openATTIC, which created a customizable snippet of Python code containing all REST API calls performed via the UI while the recorder was running.
- RBD QoS is not supported by the kRBD back-end used by ceph-iscsi by default (it needs tcmu-runner).
- RGW bucket life cycle policies mentioning a non-existent storage class will always transition to the standard (default) storage class in the placement policy.
- CephFS kernel clients have limitations handling a large number of snapshots in a directory tree (more than 400). The SLE15-SP1 CephFS kernel client is able to gracefully handle scenarios where more than 400 snapshots exist, but it is suggested that the number of snapshots is kept below this limit, especially if older CephFS clients (such as SLE12-SP3) are expected to access the cluster.
- If a cluster is upgraded from SES 5 to SES 6, only LVM-based OSDs can be created. This introduces the requirement to migrate shared devices from native to LVM in case a single OSD having DB+WAL on that shared device needs to be replaced (for example, because of a disk failure). Removing such shared devices from Ceph is possible, but re-adding only works if the shared device is entirely LVM-based. Therefore, if such an OSD needs to be replaced, all OSDs having WAL+DB on the shared device need to be replaced in parallel.
- The current NFS-Ganesha configuration does not easily allow for binding to the VIP. As a result, high availability with an active-passive configuration is not supported.
6 Deprecated and Removed Features #
6.1 No Automated Upgrade Procedure from 5.5 to 6 #
Upgrading from SUSE Enterprise Storage 5.5 to version 6 requires manual intervention. As a consequence, the DeepSea command ceph.maintenance.upgrade is no longer available. Refer to the Deployment Guide, Chapter 5: Upgrading from Previous Releases, for detailed upgrade instructions.
6.2 libradosstriper Has Been Removed #
libradosstriper is no longer part of the recommended and supported Ceph interfaces upstream. SUSE Enterprise Storage 4 and earlier already did not utilize or advertise libradosstriper. Aligning with upstream development, it was deprecated in SUSE Enterprise Storage 5 and removed in SUSE Enterprise Storage 6.
6.3 openATTIC Has Been Replaced by the Ceph Dashboard #
openATTIC was removed and replaced by the Ceph Dashboard as the primary management/monitoring user interface. The Ceph Dashboard provides a web interface for managing and monitoring Ceph. Inspired by and derived from openATTIC, it provides the same functionality, plus more.
6.4 lrbd Has Been Replaced with ceph-iscsi #
lrbd was replaced by ceph-iscsi for managing iSCSI targets via the gwcli command line interface and the Ceph Dashboard, using the REST API provided by the rbd-target-api service. lrbd will be removed in SUSE Enterprise Storage 7.
6.5 FileStore Will Be Deprecated with SES 7 #
In SES 7, the OSD back-end FileStore will no longer be available or supported. It will be replaced by BlueStore, which has been the default OSD back-end since SES 5.
If you are still using FileStore and are planning to update to SES 7, you need to migrate to BlueStore prior to the upgrade. Refer to https://documentation.suse.com/ses/6/html/ses-all/cha-ceph-upgrade.html#filestore2bluestore for instructions.
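To check which back-end your OSDs currently use, the OSD metadata can be queried; osd.0 is an example ID, and the first command summarizes the back-ends across all OSDs:

ceph osd count-metadata osd_objectstore
ceph osd metadata 0 | grep osd_objectstore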
6.6 Removed standby-for options #
The MDS configuration options mds_standby_for_*, mon_force_standby_active, and mds_standby_replay have been removed. Instead, the operator may now set the new allow_standby_replay flag on the CephFS file system. This setting causes standbys to become standby-replay for any available rank in the file system.
7 Obtaining Source Code #
This SUSE product includes materials licensed to SUSE under the GNU General Public License (GPL). The GPL requires SUSE to provide the source code that corresponds to the GPL-licensed material. The source code is available for download at http://www.suse.com/download-linux/source-code.html. Also, for up to three years after distribution of the SUSE product, upon request, SUSE will mail a copy of the source code. Requests should be sent by e-mail to mailto:sle_source_request@suse.com or as otherwise instructed at http://www.suse.com/download-linux/source-code.html. SUSE may charge a reasonable fee to recover distribution costs.
8 Legal Notices #
SUSE makes no representations or warranties with regard to the contents or use of this documentation, and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. Further, SUSE reserves the right to revise this publication and to make changes to its content, at any time, without the obligation to notify any person or entity of such revisions or changes.
Further, SUSE makes no representations or warranties with regard to any software, and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. Further, SUSE reserves the right to make changes to any and all parts of SUSE software, at any time, without any obligation to notify any person or entity of such changes.
Any products or technical information provided under this Agreement may be subject to U.S. export controls and the trade laws of other countries. You agree to comply with all export control regulations and to obtain any required licenses or classifications to export, re-export, or import deliverables. You agree not to export or re-export to entities on the current U.S. export exclusion lists or to any embargoed or terrorist countries as specified in U.S. export laws. You agree to not use deliverables for prohibited nuclear, missile, or chemical/biological weaponry end uses. Refer to https://www.suse.com/company/legal/ for more information on exporting SUSE software. SUSE assumes no responsibility for your failure to obtain any necessary export approvals.
Copyright © 2010-2021 SUSE LLC. Portions of this document are © Red Hat, Inc. and contributors.
This release notes document is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License (CC-BY-SA-4.0). You should have received a copy of the license along with this document. If not, see https://creativecommons.org/licenses/by-sa/4.0/.
SUSE has intellectual property rights relating to technology embodied in the product that is described in this document. In particular, and without limitation, these intellectual property rights may include one or more of the U.S. patents listed at https://www.suse.com/company/legal/ and one or more additional patents or pending patent applications in the U.S. and other countries.
For SUSE trademarks, see SUSE Trademark and Service Mark list (https://www.suse.com/company/legal/). All third-party trademarks are the property of their respective owners.