SUSE Enterprise Storage 7
Release Notes #
SUSE Enterprise Storage provides a distributed storage architecture for many use cases that runs on commodity hardware platforms. SUSE Enterprise Storage combines Ceph with the enterprise engineering and support of SUSE. This document provides an overview of high-level general features, capabilities, and limitations of SUSE Enterprise Storage 7 and important product updates.
These release notes are updated periodically. The latest version is always available at https://www.suse.com/releasenotes. General documentation can be found at: https://documentation.suse.com/ses/7/.
1 About the Release Notes #
The most recent version of the Release Notes is available online at https://www.suse.com/releasenotes.
Entries can be listed multiple times if they are important and belong to multiple sections.
Release notes only list changes that happened between two subsequent releases. Always review all release notes documents that apply in your upgrade scenario.
2 SUSE Enterprise Storage #
SUSE Enterprise Storage 7 is an intelligent software-defined storage solution, powered by Ceph technology (https://ceph.com/).
Accelerate innovation, reduce costs, and alleviate proprietary hardware lock-in by transforming your enterprise storage infrastructure with an open and unified intelligent software-defined storage solution. SUSE Enterprise Storage allows you to leverage commodity hardware platforms for enterprise-grade storage. SUSE Enterprise Storage 7 is an extension to SUSE Linux Enterprise.
2.1 What Is New? #
SUSE Enterprise Storage 7 introduces many innovative changes compared to SUSE Enterprise Storage 6. The most important changes are listed below:
-
ceph-salt and cephadm. SUSE Enterprise Storage 7 introduces a new deployment stack that is built on two tools: ceph-salt and cephadm. ceph-salt facilitates Day 1 Ceph cluster deployment. Like the DeepSea tool that it replaces, it uses Salt to prepare the cluster nodes and bootstrap a Ceph cluster on them. Unlike DeepSea, it only deploys a single Ceph Monitor daemon and a single Ceph Manager daemon on one node (this process is called bootstrapping). The rest of the deployment (additional Ceph Monitors and Managers, OSDs, gateways, and so on) is handled by cephadm, which provides Day 2 deployment and management of containerized Ceph daemons through the orchestration layer. A minimal command sketch follows this list. For more information, see https://documentation.suse.com/ses/7/html/ses-all/deploy-cephadm.html#deploy-cephadm-day2.
-
Ceph release. SUSE Enterprise Storage 7 is based on Ceph Octopus v15.2.
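The following is a minimal, hedged sketch of that workflow. Host names, device paths, and service counts are placeholders, and a real deployment requires additional ceph-salt configuration (container registry, time server, SSH keys, and so on) as described in the Deployment Guide:
> ceph-salt config /ceph_cluster/minions add node1.example.com
> ceph-salt config /ceph_cluster/roles/admin add node1.example.com
> ceph-salt config /ceph_cluster/roles/bootstrap set node1.example.com
> ceph-salt apply
# After bootstrapping, Day 2 operations are handled by the cephadm orchestrator:
> ceph orch apply mon 3
> ceph orch daemon add osd node2.example.com:/dev/sdb
> ceph orch ps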
2.2 Additional Release Notes Documents #
SUSE Enterprise Storage is an extension to SUSE Linux Enterprise Server 15 SP2. Make sure to review the SUSE Linux Enterprise Server release notes in addition to this document: https://www.suse.com/releasenotes/x86_64/SUSE-SLES/15-SP2/.
2.3 Support and Life Cycle #
SUSE Enterprise Storage 7 has been discontinued. Maintenance updates will be made available for pre-existing customers of SUSE Enterprise Storage only.
For more information, see the support policy at https://www.suse.com/support/policy.html and the SUSE lifecycle page at https://www.suse.com/lifecycle/.
2.4 Support Statement for SUSE Enterprise Storage #
This product has been discontinued. To receive support, you need to have a pre-existing subscription with SUSE. For more information, see https://www.suse.com/support/?id=SUSE_Enterprise_Storage.
The following definitions apply:
- L1
Problem determination, which means technical support designed to provide compatibility information, usage support, ongoing maintenance, information gathering and basic troubleshooting using available documentation.
- L2
Problem isolation, which means technical support designed to analyze data, reproduce customer problems, isolate a problem area, and provide a resolution for problems not resolved by Level 1, or prepare for Level 3.
- L3
Problem resolution, which means technical support designed to resolve problems by engaging engineering to resolve product defects which have been identified by Level 2 Support.
For contracted customers and partners, SUSE Enterprise Storage 7 is delivered with L3 support for all packages, except for the following:
-
Technology Previews, see Section 3, “Technology Previews”
-
Sound, graphics, fonts and artwork
-
Packages that require an additional customer contract
-
Packages with names ending in -devel (containing header files and similar developer resources) will only be supported together with their main packages.
SUSE will only support the usage of original packages and container images. That is, packages and container images that are unchanged and not recompiled.
2.5 Documentation and Other Information #
2.5.1 On the Product Medium #
-
For general product information, see the file README in the top level of the product medium.
-
For a chronological log of all changes made to updated packages, see the file ChangeLog in the top level of the product medium.
-
Detailed change log information about a particular package is available using RPM:
rpm --changelog -qp FILE_NAME.rpm
(Replace FILE_NAME.rpm with the name of the RPM.)
-
For more information, see the directory docu of the product medium of SUSE Enterprise Storage 7.
2.5.2 Externally Provided Documentation #
-
https://documentation.suse.com/ses/7/ contains additional or updated documentation for SUSE Enterprise Storage 7.
-
Find a collection of White Papers in the SUSE Enterprise Storage Resource Library at https://www.suse.com/products/suse-enterprise-storage/#resources.
3 Technology Previews #
Technology previews are packages, stacks, or features delivered by SUSE which are not supported. They may be functionally incomplete, unstable or in other ways not suitable for production use. They are included for your convenience and give you a chance to test new technologies within an enterprise environment.
Whether a technology preview becomes a fully supported technology later depends on customer and market feedback. Technology previews can be dropped at any time and SUSE does not commit to providing a supported version of such technologies in the future.
Give your SUSE representative feedback about technology previews, including your experience and use case.
-
CephFS:
-
The cephfs-shell tool for manipulating a CephFS file system without mounting it.
-
Support for multiple independent CephFS deployments in a single cluster.
-
Remote file replication with CephFS.
-
ceph-volume: Support for the libstoragemgmt library.
-
Ceph Manager: The zabbix module.
-
iSCSI: Support for tcmu-runner. tcmu-runner is a daemon that handles the userspace side of the LIO TCM-User backstore.
-
Object Gateway: RGW Multisite Bucket Granularity Sync provides fine grained control of data movement between buckets in different zones. It extends the zone sync mechanism.
-
RADOS Block Device: The RBD client on Windows provides a kernel driver for exposing RBD devices natively as Windows volumes, support for Hyper-V VMs, and CephFS.
4 Features #
This section includes an overview of new features of SUSE Enterprise Storage 7.
4.1 Ceph Orchestrator (cephadm) #
SUSE Enterprise Storage 7 introduces a new way of deploying and managing a Ceph cluster, based on an orchestrator back-end called cephadm.
-
Allows managing ceph-core (mgr, mon, osd), gateway (nfs-ganesha, mds, rgw), and the monitoring stack in a declarative way.
-
Ability to describe a cluster in a single file (see the service specification sketch after this list).
-
Supports Ceph Dashboard integration.
-
Feature parity with DeepSea.
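As an illustration of describing a cluster in a single file, a hedged sketch of a service specification follows; the service IDs, placement counts, and file name are examples only:
service_type: mon
placement:
  count: 3
---
service_type: mgr
placement:
  count: 2
---
service_type: osd
service_id: default_osds
placement:
  host_pattern: '*'
data_devices:
  all: true
The combined specification can then be applied with:
> ceph orch apply -i cluster.yml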
4.2 ceph-mon #
Monitors now have a configuration option mon_osd_warn_num_repaired, set to 10 by default. If any OSD has repaired more I/O errors in stored data than this number, an OSD_TOO_MANY_REPAIRS health warning is generated.
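For example, to raise the threshold on a cluster where a higher number of repairs is expected (the value 20 is arbitrary):
> ceph config set mon mon_osd_warn_num_repaired 20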
4.3 ceph-mgr (Modules) #
-
The PG autoscaler feature introduced in Nautilus is enabled for new pools by default, allowing new clusters to autotune pg_num without any user intervention. The default values for new pools and RGW/CephFS metadata pools have also been adjusted to perform well for most users.
-
Health alerts are now raised for recent Ceph daemon crashes.
-
A simple alerts module has been introduced to send e-mail health alerts for clusters deployed without the benefit of an existing external monitoring infrastructure. See the sketch after this list.
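A hedged sketch of using these features; the pool name, SMTP host, and e-mail addresses are placeholders:
> ceph osd pool set mypool pg_autoscale_mode on
> ceph osd pool autoscale-status
> ceph mgr module enable alerts
> ceph config set mgr mgr/alerts/smtp_host smtp.example.com
> ceph config set mgr mgr/alerts/smtp_destination storage-admins@example.com
> ceph config set mgr mgr/alerts/smtp_sender ceph-alerts@example.com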
4.4 ceph-iscsi #
You can now export RBD images with the object-map, fast-diff, and deep-flatten features enabled.
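For example, an image intended for iSCSI export could be created with these features enabled (pool and image names are illustrative; note that deep-flatten can only be enabled at image creation time):
> rbd create --size 100G --image-feature layering,exclusive-lock,object-map,fast-diff,deep-flatten iscsi-images/disk1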
4.5 ceph-salt #
DeepSea has been replaced by ceph-salt and cephadm. ceph-salt offers a rich command-line interface for configuring the initial cluster, with input validation and a redesigned UI that gives better user feedback during deployment and other operations. In addition to features such as system updates and cluster shutdown, ceph-salt also provides an orchestrated reboot of the whole cluster.
4.6 Monitoring #
-
Automatic configuration of Prometheus, Alertmanager and Grafana to talk to each other and the Ceph Dashboard.
-
Per-RBD graphs can now be displayed in the Ceph Dashboard.
-
Ceph exporter (Prometheus manager module) enhancements: Cache overhauled and performance logs added to identify bottlenecks.
-
Customization of monitoring component configuration files is implemented through Jinja2 templates.
4.7 RADOS (Ceph Core) #
-
RADOS objects can now be brought in sync during recovery by copying only the modified portion of the object, reducing tail latencies during recovery.
-
RADOS snapshot trimming metadata is now managed in a more efficient and scalable fashion.
-
Now when the noscrub and/or nodeep-scrub flags are set globally or per pool, scheduled scrubs of the disabled type will be aborted. All user-initiated scrubs are NOT interrupted.
-
BlueStore has received several improvements and performance updates, including improved accounting for omap (key/value) object data by pool, improved cache memory management, and a reduced allocation unit size for SSD devices. (Note that by default, the first time each OSD starts after upgrading to Octopus, it will trigger a conversion that may take from a few minutes to a few hours, depending on the amount of stored omap data.)
-
Ceph will allow recovery below min_size for erasure-coded pools, wherever possible.
4.8 NFS Ganesha #
-
The NFS v4.1 protocol and newer is supported; NFS v3 is not supported.
-
RADOS grace is now a supported recovery back-end.
-
Per-service configuration via a RADOS common configuration object is now supported (see the deployment sketch after this list).
-
NFS Ganesha for FSAL Ceph now responds to cache pressure requests from libcephfs.
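A hedged deployment sketch follows; the service ID, pool, namespace, and placement count are placeholders, and the positional pool and namespace arguments reflect the Octopus orchestrator syntax:
> ceph osd pool create nfs-ganesha-pool
> ceph orch apply nfs mynfs nfs-ganesha-pool nfs-ns --placement=2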
4.9 RADOS Block Device (RBD) #
-
The name of the RBD pool object that is used to store the RBD trash purge schedule has changed from rbd_trash_trash_purge_schedule to rbd_trash_purge_schedule. If you are already using the RBD trash purge schedule functionality and have per-pool or per-namespace schedules configured, then before the upgrade, copy the rbd_trash_trash_purge_schedule object to rbd_trash_purge_schedule and remove rbd_trash_trash_purge_schedule, using the following commands in every RBD pool and namespace where a trash purge schedule was previously configured:
> rados -p <pool-name> [-N namespace] cp rbd_trash_trash_purge_schedule rbd_trash_purge_schedule
> rados -p <pool-name> [-N namespace] rm rbd_trash_trash_purge_schedule
Alternatively, restore the schedule after the upgrade in another way.
-
RBD mirroring now supports a new snapshot-based mode that no longer requires the journaling feature and its related impacts, in exchange for the loss of point-in-time consistency (it remains crash-consistent). A command sketch follows this list.
-
RBD clone operations now preserve the sparseness of the underlying RBD image.
-
The RBD trash feature has been improved to (optionally) automatically move old parent images to the trash when their children are all deleted or flattened. It can now be configured to automatically purge on a defined schedule.
-
RBD images can be online re-sparsified to reduce the usage of zeroed extents.
-
The rbd-nbd tool has been improved to use more modern kernel interfaces. Caching has been improved to be more efficient and performant. The rbd-mirror daemon now automatically adjusts its per-image memory usage based upon its memory target.
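A hedged sketch of enabling the snapshot-based mirroring mode on a single image; pool and image names are placeholders, and the peer bootstrap steps between the two clusters are omitted:
> rbd mirror pool enable mypool image
> rbd mirror image enable mypool/myimage snapshot
# Take a mirror snapshot on demand, or schedule them periodically:
> rbd mirror image snapshot mypool/myimage
> rbd mirror snapshot schedule add --pool mypool --image myimage 3h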
4.10 Ceph Object Gateway #
-
Now supports the S3 Object Lock feature, providing write-once-read-many (WORM)-like functionality (see the example after this list).
-
S3 API support for key-value pairs on buckets, similar to objects.
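A hedged example of using S3 Object Lock against the Object Gateway with the AWS CLI; the endpoint URL, bucket name, and retention period are placeholders:
> aws --endpoint-url http://rgw.example.com:7480 s3api create-bucket --bucket locked-bucket --object-lock-enabled-for-bucket
> aws --endpoint-url http://rgw.example.com:7480 s3api put-object-lock-configuration --bucket locked-bucket --object-lock-configuration '{"ObjectLockEnabled":"Enabled","Rule":{"DefaultRetention":{"Mode":"COMPLIANCE","Days":30}}}'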
4.11 CephFS #
-
Automatic static subtree partitioning policies may now be configured using the new distributed and random ephemeral pinning extended attributes on directories (see the sketch after this list).
-
MDS daemons can now be assigned to manage a particular file system via the new mds_join_fs option.
-
The MDS now aggressively asks idle clients to trim caps, which improves stability when the file system load changes.
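A hedged sketch of these features; the mount point, directory names, MDS name, and file system name are placeholders:
# Distribute the immediate children of a directory across MDS ranks:
> setfattr -n ceph.dir.pin.distributed -v 1 /mnt/cephfs/home
# Ephemerally pin a random fraction of subtrees under a directory:
> setfattr -n ceph.dir.pin.random -v 0.01 /mnt/cephfs/tmp
# Prefer that a particular MDS daemon serves a particular file system:
> ceph config set mds.a mds_join_fs cephfs_a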
4.12 Samba Gateway #
-
Updated Samba and CTDB packages are carried on the SES media. These packages should be used for Samba Gateway deployments instead of base SLES packages.
-
The new Samba 4.13 release provides significant performance improvements for encrypted SMB3 workloads.
-
CTDB now advertises clustered Samba presence in the Ceph Manager service map.
4.13 Ceph Dashboard #
The Ceph Dashboard in SES 7 has received a number of updates and enhancements. These include new features related to the Dashboard itself, as well as much new functionality for managing and monitoring Ceph.
Dashboard enhancements #
-
New page layout and branding. The Dashboard UI now uses a layout with a vertical navigation bar to the left that can be hidden to free up some screen real estate. The branding has been updated to match SUSE's new corporate CI.
-
A new unified tasks and notifications bar that shows both ongoing activity and background tasks running in the cluster as well as past notifications. These can be removed individually or all at once.
-
Many pages now support multi-row selection, to perform bulk actions on some or all elements at once.
-
Tables provide custom filters that enable you to further drill down into the data shown.
-
User accounts can now be disabled temporarily or permanently without the need to delete them.
-
It is now also possible to force users to change their initial password at the first login. Users can also change their passwords via the Dashboard without administrator intervention at any time.
-
The Dashboard can also enforce a variety of password complexity rules if required, or let passwords expire after a configurable amount of time.
-
Most of these password features are disabled by default. They can be enabled and configured individually to help adhere to any local password security policies that may be in force (see the example after this list).
-
It is now possible to clone existing user roles to save time when creating new ones that are similar to already existing roles.
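For example, the password policy can be enabled and tuned from the command line (the minimum length shown is arbitrary; further set-pwd-policy-* switches control the individual complexity checks):
> ceph dashboard set-pwd-policy-enabled true
> ceph dashboard set-pwd-policy-min-length 12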
New features and improvements #
-
cephadm/orchestrator integration: Display and filter information about the cluster and its inventory from a per-host, per-device, or per-service perspective.
-
Deploy new OSDs via cephadm based on customizable filter rules.
-
Display all hosts known to the cluster and all their devices and services.
-
Obtain individual health status and SMART data.
-
Host inventory shows all disks attached, as well as their type, size and other details.
-
The host page shows all services that have been deployed on the selected host, which container they are running in and their current status.
-
Clicking the corresponding button helps locate the selected device in a data center by making the disk enclosure LED blink for a customizable amount of time.
-
When creating a new pool, it is now possible to define CRUSH placement rules to specify device classes, so fast pools can be created on SSDs only, for example. This helps in creating different tiers of storage pools easily.
-
It is now possible to define the PG autoscaling mode on a per-pool basis, either by choosing the cluster's default, or selecting a different mode. For example, you could disable autoscaling or only emit a warning when the pool’s PG count is not ideal.
-
Per-pool quotas can now be defined, both for the total amount of data and for the number of objects that can be stored in a pool.
-
Object Gateway management now supports some new RGW features like versioned buckets, multi-factor authentication and the selection of placement targets when creating buckets.
-
On the CephFS management page, it is now possible to disconnect or “evict” clients from the list of active sessions and to create snapshots of a CephFS subtree manually.
-
Support for managing CephFS quotas was added as well as a simple filesystem browser that allows users to traverse the file system’s directory structure.
-
The iSCSI management pages were also improved, now giving more detailed insight into active iSCSI gateways and initiators. Moreover, some safeguards around deleting IQNs with active sessions were added.
-
The integration with the Prometheus Alertmanager was enhanced, now showing all configured alerts, not just the currently active ones.
-
A dedicated page/workflow was added to enable the submission of telemetry data.
5 Known Issues & Limitations #
This is a list of known issues and limitations for this release.
RADOS (Ceph Core):
-
Due to a regression in SUSE Enterprise Storage 7, the -f plain option does not have any effect when given with the ceph tell osd.* bench command (the command produces JSON output only).
-
The ceph ping command is known not to work properly in SUSE Enterprise Storage 7.
-
SUSE Enterprise Storage 7 cannot deploy more than 62 OSDs per node in its default configuration. If you intend to update a cluster with very dense nodes (>62 OSDs per node), before proceeding, see https://documentation.suse.com/ses/7/html/ses-all/deploy-cephadm.html#deploy-min-cluster-final-steps.
cephadm:
-
cephadm generates a ceph-<fsid>.target unit file which can be used to start, stop, or restart all Ceph daemons (containers) running on a node. However, it does not yet generate such target unit files for individual service types (for example, ceph-mon-<fsid>.target, ceph-mgr-<fsid>.target, or ceph-osd-<fsid>.target) like it did in SUSE Enterprise Storage 6 and earlier releases.
-
The rbd-mirror daemon deployment is not fully automated. Manual steps are still required to get it up and running.
-
ceph-iscsi: trusted_ips do not update automatically. A complete redeployment of the gateways is necessary.
-
The format and some of the terminology of drive group service specifications have changed with the move to cephadm. This includes a rename of the key previously called encryption under DeepSea, which is now called encrypted under cephadm. Ensure that you use the new terminology. A sketch follows this list; for a complete example drive group specification, see https://documentation.suse.com/ses/7/single-html/ses-admin/#drive-groups-specs.
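A hedged sketch of a drive group specification in the cephadm format, using the renamed encrypted key; the service ID, device filters, and file name are illustrative:
service_type: osd
service_id: example_drive_group
placement:
  host_pattern: '*'
data_devices:
  rotational: 1
db_devices:
  rotational: 0
encrypted: true
Apply it with:
> ceph orch apply osd -i drive_groups.yml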
Object Gateway:
-
The RGW service is not automatically integrated with the Dashboard and requires manual configuration of the Object Gateway Frontend.
-
RGW Multisite is not interoperable between SUSE Enterprise Storage 7 and SUSE Enterprise Storage 5.5, as the feature is not compatible between Ceph Luminous and Ceph Octopus.
NFS Ganesha:
-
There is an error when submitting an NFS export in the Ceph Dashboard by specifying / in the path.
-
There is an error when typing an existing folder name in the NFS Ganesha form in the Ceph Dashboard.
-
Due to the configuration change of Ganesha daemons and some out-of-date code in the Dashboard, the Dashboard is unable to manage Ganesha daemons deployed by cephadm. The reported daemons are wrong: the service name is reported instead, which is misleading.
-
The Dashboard does not block the user from managing exports when doing NFS migration from Nautilus to Octopus.
-
The current nfs-ganesha configuration does not easily allow for binding to the VIP. As a result, high availability with an active-passive configuration is not supported.
AppArmor:
-
cephadm currently does not automatically install AppArmor profiles for various Ceph daemons. This known issue is being worked on in the Ceph community. Ceph daemons running in non-privileged containers are confined by the generic containers-default profile.
-
Do not run the aa-teardown command while any Ceph containers are active.
6 Unsupported, Deprecated, and Removed Features #
-
ceph-volume: dmcache integration with Ceph is not supported in SUSE Enterprise Storage 7. With the introduction of BlueStore, dmcache is no longer needed or supported by ceph-volume.
-
The restful module is deprecated in favor of the Ceph Dashboard REST API (also known as the Ceph REST API). The Ceph Dashboard REST API back-end has gained a significant amount of functionality over the original restful module.
-
Support for FileStore has been removed from SUSE Enterprise Storage 7.
Before upgrading from SUSE Enterprise Storage 6 to SUSE Enterprise Storage 7, migrate all FileStore OSDs to BlueStore.
-
The
lvmcache
plugin that was included as a technology preview in SUSE Enterprise Storage 6 has been removed entirely from SUSE Enterprise Storage 7. -
Inline data support in CephFS has been deprecated
-
The
radosgw-admin
subcommands dealing with orphans have been deprecated. In particular, this affectsradosgw-admin orphans find
,radosgw-admin orphans finish
, andradosgw-admin orphans list-jobs
. They have not been actively maintained and they store intermediate results on the cluster, which could fill a nearly-full cluster.They have been replaced by a new tool, currently considered experimental, called
rgw-orphan-list
. -
NFS Ganesha does not currently support RGW exports.
7 Obtaining Source Code #
This SUSE product includes materials licensed to SUSE under the GNU General Public License (GPL). The GPL requires SUSE to provide the source code that corresponds to the GPL-licensed material.
The source code is available for download at https://www.suse.com/download/ses/ on Medium 2. For up to three years after distribution of the SUSE product, upon request, SUSE will mail a copy of the source code. Send requests by e-mail to sle_source_request@suse.com. SUSE may charge a reasonable fee to recover distribution costs.
8 Legal Notices #
SUSE makes no representations or warranties with regard to the contents or use of this documentation, and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. Further, SUSE reserves the right to revise this publication and to make changes to its content, at any time, without the obligation to notify any person or entity of such revisions or changes.
Further, SUSE makes no representations or warranties with regard to any software, and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. Further, SUSE reserves the right to make changes to any and all parts of SUSE software, at any time, without any obligation to notify any person or entity of such changes.
Any products or technical information provided under this Agreement may be subject to U.S. export controls and the trade laws of other countries. You agree to comply with all export control regulations and to obtain any required licenses or classifications to export, re-export, or import deliverables. You agree not to export or re-export to entities on the current U.S. export exclusion lists or to any embargoed or terrorist countries as specified in U.S. export laws. You agree to not use deliverables for prohibited nuclear, missile, or chemical/biological weaponry end uses. Refer to https://www.suse.com/company/legal/ for more information on exporting SUSE software. SUSE assumes no responsibility for your failure to obtain any necessary export approvals.
Copyright © 2010-2022 SUSE LLC. Portions of this document are © Red Hat, Inc. and contributors.
This release notes document is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License (CC-BY-SA-4.0). You should have received a copy of the license along with this document. If not, see https://creativecommons.org/licenses/by-sa/4.0/.
SUSE has intellectual property rights relating to technology embodied in the product that is described in this document. In particular, and without limitation, these intellectual property rights may include one or more of the U.S. patents listed at https://www.suse.com/company/legal/ and one or more additional patents or pending patent applications in the U.S. and other countries.
For SUSE trademarks, see SUSE Trademark and Service Mark list (https://www.suse.com/company/legal/). All third-party trademarks are the property of their respective owners.