SES6: ceph -s shows stale "Rebalancing after osd marked out" messages following a cluster power failure.
This document (000019649) is provided subject to the disclaimer at the end of this document.
Environment
SUSE Enterprise Storage 6
Situation
Cluster reports health: HEALTH_OK.
336 OSDs are up and in.
One OSD is out due to hardware issues.
All PGs are active+clean and no backfilling or recovery is occurring, yet "ceph -s" still lists "Rebalancing after osd.N marked out" progress entries (additional verification commands are sketched after the output below):
#==[ Command ]======================================#
# /usr/bin/ceph --connect-timeout=5 -s
cluster:
id: 0260f99a-117e-4c7e-8fbe-86c483bcd7e9
health: HEALTH_OK
services:
mon: 3 daemons, quorum mon01,mon02,mon03 (age 41h)
mgr: mon03(active, since 3d), standbys: mon02, mon01
mds: cephfs:1 {0=mds01=up:active} 1 up:standby
osd: 337 osds: 336 up (since 14m), 336 in (since 2d)
rgw: 2 daemons active (cephigw01, cephigw02)
data:
pools: 8 pools, 4328 pgs
objects: 607.39M objects, 608 TiB
usage: 1.1 PiB used, 1.3 PiB / 2.4 PiB avail
pgs: 4323 active+clean
5 active+clean+scrubbing+deep
io:
client: 216 MiB/s rd, 35 MiB/s wr, 418 op/s rd, 36 op/s wr
progress:
Rebalancing after osd.127 marked out
[..............................]
Rebalancing after osd.123 marked out
[====..........................]
Rebalancing after osd.104 marked out
[=.............................]
Rebalancing after osd.115 marked out
[=.............................]
Rebalancing after osd.109 marked out
[..............................]
Rebalancing after osd.122 marked out
[..............................]
Rebalancing after osd.121 marked out
[..............................]
Rebalancing after osd.99 marked out
[..............................]
Rebalancing after osd.116 marked out
[=.............................]
Rebalancing after osd.110 marked out
[..............................]
Rebalancing after osd.128 marked out
[..............................]
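In addition to "ceph -s", the single out OSD and the clean PG state can be cross-checked with standard Ceph commands; a minimal sketch (the exact commands and grep patterns are illustrative and not taken from the original report):
#==[ Command ]======================================#
# /usr/bin/ceph osd tree | grep -i down        # shows the one OSD that is down/out
# /usr/bin/ceph pg stat                        # all PGs should report active+clean
# /usr/bin/ceph health detail                  # no backfill or recovery warnings expected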
The customer is running:
ceph 14.2.5.382+g8881d33957-3.30.1
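The running version can be confirmed on any cluster node, for example (illustrative commands, not part of the original report):
#==[ Command ]======================================#
# /usr/bin/ceph versions        # version of each running mon/mgr/osd/mds/rgw daemon
# rpm -q ceph                   # installed package version on the node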
Resolution
Restart the active ceph-mgr daemon on its host (mon03 is the active manager in this example):
# systemctl restart ceph-mgr@mon03.service
After the restart, the stale "Rebalancing" progress messages are gone:
#==[ Command ]======================================#
# /usr/bin/ceph --connect-timeout=5 -s
cluster:
id: 0260f99a-117e-4c7e-8fbe-86c483bcd7e9
health: HEALTH_OK
services:
mon: 3 daemons, quorum mon01,mon02,mon03 (age 41h)
mgr: mon03(active, since 3d), standbys: mon02, mon01
mds: cephfs:1 {0=mds01=up:active} 1 up:standby
osd: 337 osds: 336 up (since 14m), 336 in (since 2d)
rgw: 2 daemons active (cephigw01, cephigw02)
data:
pools: 8 pools, 4328 pgs
objects: 607.39M objects, 608 TiB
usage: 1.1 PiB used, 1.3 PiB / 2.4 PiB avail
pgs: 4323 active+clean
5 active+clean+scrubbing+deep
io:
client: 216 MiB/s rd, 35 MiB/s wr, 418 op/s rd, 36 op/s wr
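If it is not obvious which node runs the active manager, it can be checked before the restart; a minimal sketch (commands are illustrative):
#==[ Command ]======================================#
# /usr/bin/ceph -s | grep mgr                  # the "mgr:" line names the active manager and the standbys
# /usr/bin/ceph mgr dump | grep active_name    # same information from the manager map
Restarting the active manager causes one of the standby managers to take over; the mgr daemon is not in the client I/O path, so this does not interrupt client traffic.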
Cause
The "Rebalancing after osd.N marked out" entries are progress events maintained in memory by the active ceph-mgr (the "progress" manager module). During the cluster power failure several OSDs were temporarily marked out, and the manager created rebalancing progress events for them. After the OSDs returned and all PGs became active+clean again, the manager did not clear these events, so "ceph -s" continued to display them. Restarting the active ceph-mgr discards the stale in-memory progress events.
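The progress module is an always-on manager module in this release; that it is loaded can be verified with, for example (illustrative):
#==[ Command ]======================================#
# /usr/bin/ceph mgr module ls | grep -i progress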
Status
Disclaimer
This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.
- Document ID: 000019649
- Creation Date: 18-Jun-2020
- Modified Date: 18-Jun-2020
- SUSE Enterprise Storage
For questions or concerns with the SUSE Knowledgebase please contact: tidfeedback[at]suse.com