Uploading data using s3cmd fails with "S3 error: 416 (InvalidRange)"

This document (7023728) is provided subject to the disclaimer at the end of this document.

Environment

SUSE Enterprise Storage 5

Situation

After deploying a Rados Gateway, attempting to upload data with the s3cmd utility fails with "S3 error: 416 (InvalidRange)".
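
A typical failing upload looks like the following; the bucket and file names are only examples, and the exact output can vary with the s3cmd version:

:~ # s3cmd put testfile s3://testbucket
ERROR: S3 error: 416 (InvalidRange)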

Resolution

Add additional Object Storage Daemons (OSDs) to the cluster, or increase the value of "mon_max_pg_per_osd" above the default of 200.
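
To judge which option is appropriate, the current PG count per OSD and the PG numbers of the existing pools can be checked first, for example (run from a node with an admin keyring):

:~ # ceph osd df              # the PGS column shows the number of PGs currently placed on each OSD
:~ # ceph osd pool ls detail  # shows pg_num and replica size of each existing pool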

Cause

After deployment of the Rados Gateway, the required pools are created on demand. In this case the "<zone>.rgw.buckets.data" pool could not be created automatically, because doing so would have exceeded the default "mon_max_pg_per_osd" limit of 200.

Additional Information

Since the release of SUSE Enterprise Storage 5 (based on the Ceph Luminous release), the configuration setting "mon_max_pg_per_osd" limits the number of PGs (Placement Groups) per OSD to 200. Creating a new pool fails if it would push the number of PGs per OSD above this limit. In this case the following error is logged in "/var/log/ceph/ceph-client.rgw.<host_name>.log":

0 rgw_init_ioctx ERROR: librados::Rados::pool_create returned (34) Numerical result out of range (this can be due to a pool or placement group misconfiguration, e.g. pg_num < pgp_num or mon_max_pg_per_osd exceeded)
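
As a rough illustration of the arithmetic (the numbers below are made up, not taken from an affected cluster): on a cluster with 3 OSDs whose existing pools total 192 PGs at replica size 3, each OSD already carries 192 x 3 / 3 = 192 PG copies. Creating the data pool with another 32 PGs at size 3 would add 32 x 3 / 3 = 32 PGs per OSD, raising the count to 224 and exceeding the 200 limit, so the pool creation is rejected.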

To increase the value of "mon_max_pg_per_osd" to, for example, 300, add the setting to the cluster's main configuration file "/etc/ceph/ceph.conf". To do this, take the following steps from the DeepSea node:

- If it does not yet exist, create the file "/srv/salt/ceph/configuration/files/ceph.conf.d/global.conf".
- Add the line: mon_max_pg_per_osd = 300
- To create the new ceph.conf file on the DeepSea node, run from the command line:
:~ # salt 'admin_minion_only' state.apply ceph.configuration.create
- To distribute the updated ceph.conf file to all cluster nodes, run:
:~ # salt '*' state.apply ceph.configuration
- Finally, perform a rolling restart of the MON (Monitor) and MGR (Manager) daemons for the new setting to take effect; see the example commands after this list.
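
A rolling restart means restarting the daemons one node at a time and waiting for the cluster to settle in between. The commands below assume the default systemd unit naming, with <mon_id> and <mgr_id> as placeholders for the local daemon IDs (usually the short host name); repeat them on every MON/MGR node:

:~ # systemctl restart ceph-mon@<mon_id>.service
:~ # systemctl restart ceph-mgr@<mgr_id>.service
:~ # ceph -s   # wait for the monitors to regain quorum before moving to the next node

Afterwards the active value can be verified on a MON node, for example via the admin socket:

:~ # ceph daemon mon.<mon_id> config show | grep mon_max_pg_per_osd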

Disclaimer

This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.

  • Document ID: 7023728
  • Creation Date: 19-Feb-2019
  • Modified Date: 03-Mar-2020
  • SUSE Enterprise Storage
