
Node restart results in default CRUSH map again.

This document (7021090) is provided subject to the disclaimer at the end of this document.

Environment

SUSE Enterprise Storage 4

Situation

After customizing the CRUSH map, the command "ceph osd tree" shows the customized CRUSH map correctly. However, after a node reboot the default CRUSH map is shown again.

Resolution

Add the following line to the [global] section of the "/etc/ceph/ceph.conf" file on all cluster nodes:
osd crush update on start = false
Also see the online SUSE Enterprise Storage documentation for details on the above setting and CRUSH map configuration.
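
On a node maintained by hand, the change can be scripted idempotently. The following is a minimal sketch, assuming the file has a literal "[global]" section header and GNU sed is available; the helper function and the path in the usage comment are illustrative, not part of this document:

```shell
# Sketch: add "osd crush update on start = false" under [global] in a
# ceph.conf-style file, unless some form of the option is already present.
ensure_crush_setting() {
    conf="$1"
    # any existing spelling (spaces or underscores) counts as present
    grep -qi 'osd[ _]crush[ _]update[ _]on[ _]start' "$conf" && return 0
    # otherwise insert the line directly after the [global] section header
    sed -i '/^\[global\]/a osd crush update on start = false' "$conf"
}

# Example usage on a cluster node:
#   ensure_crush_setting /etc/ceph/ceph.conf
```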

Cause

When an OSD starts, the "ceph-osd-prestart.sh" script updates the OSD's location in the CRUSH map, unless "osd crush update on start = false" is set in "/etc/ceph/ceph.conf".
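
The effect can be sketched as follows. This is a simplified illustration of the prestart logic, not the actual "ceph-osd-prestart.sh" source; the OSD ID, weight, and host bucket below are placeholders:

```shell
# Simplified sketch of the prestart behavior: the OSD's CRUSH location
# is only rewritten when "osd crush update on start" is unset or enabled.
should_update_crush() {
    # $1: the option's value from ceph.conf (empty when not set;
    # the default is to update)
    case "${1:-true}" in
        false|0) return 1 ;;  # explicitly disabled: leave the CRUSH map alone
        *)       return 0 ;;  # default: move the OSD under its host bucket
    esac
}

osd_id=0  # placeholder OSD ID
value="$(ceph-conf --name="osd.${osd_id}" --lookup osd_crush_update_on_start 2>/dev/null || true)"
if should_update_crush "$value"; then
    # this is the step that overwrites a customized CRUSH location
    echo "would run: ceph osd crush create-or-move -- ${osd_id} 1.0 host=$(hostname -s)"
fi
```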

Additional Information

To permanently add the setting using DeepSea, take the following steps from the cluster admin node:
- Edit "/srv/salt/ceph/configuration/files/ceph.conf.j2" and add the line at the bottom of the [global] section, for example:
[global]
fsid = {{ salt['pillar.get']('fsid') }}
mon_initial_members = {{ salt['pillar.get']('mon_initial_members') | join(', ') }}
mon_host = {{ salt['pillar.get']('mon_host') | join(', ') }}
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
public_network = {{ salt['pillar.get']('public_network') }}
cluster_network = {{ salt['pillar.get']('cluster_network') }}
osd crush update on start = false

{% for config in salt['rgw.configurations']() %}
{% set client = config + "." + grains['host'] %}
{% include "ceph/configuration/files/ceph.conf." + config %}
{% endfor %}

- Re-run salt stage 3:
salt-run state.orch ceph.stage.3
- Verify that the "/etc/ceph/ceph.conf" file was successfully updated on all the minions by running, for example:
salt "*" cmd.run "grep -i 'crush update' /etc/ceph/ceph.conf"
The above command should return something similar to the following:
ses-node-XX.dns_name:
    osd crush update on start = false
ses-node-XX.dns_name:
    osd crush update on start = false
ses-node-XX.dns_name:
    osd crush update on start = false
ses-node-XX.dns_name:
    osd crush update on start = false
ses-node-XX.dns_name:
    osd crush update on start = false
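
Note that the setting only prevents future automatic updates; it does not restore a CRUSH map that a reboot has already reverted, so a previously saved custom map would have to be re-applied once. As an illustration only (the file names are placeholders, and "crushtool" and "ceph osd setcrushmap" are standard Ceph tools, not commands taken from this document), the commands are built as strings here rather than executed:

```shell
# Sketch only: commands that would recompile and inject a saved custom
# CRUSH map; printed rather than run so nothing touches a live cluster.
compile_cmd="crushtool -c crush-custom.txt -o crush-custom.bin"  # text -> binary
inject_cmd="ceph osd setcrushmap -i crush-custom.bin"            # load into cluster
printf '%s\n%s\n' "$compile_cmd" "$inject_cmd"
```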

Disclaimer

This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.

  • Document ID: 7021090
  • Creation Date: 13-Jul-2017
  • Modified Date: 03-Mar-2020
    • SUSE Enterprise Storage
