ceph-volume sets DB device to unavailable, additional OSDs cannot use the DB device
This document (000019599) is provided subject to the disclaimer at the end of this document.
Environment
Situation
Resolution
1) Redeploy the whole drive group on this node. In the example case, remove the OSDs on sdg and sdf, then redeploy.
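For option 1, a minimal sketch of removing one OSD before redeploying (a sketch only: replace <id> with the actual OSD ID and /dev/sdX with the data device; the redeploy step itself depends on the deployment tooling in use):
ceph osd out <id>
systemctl stop ceph-osd@<id>
ceph osd purge <id> --yes-i-really-mean-it
# Wipe the device, including LVM metadata, so it can be redeployed:
ceph-volume lvm zap --destroy /dev/sdX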
2) If option 1 is not practical (not enough space in the cluster for example) you can fall back to manually deploying the one OSD using "ceph-volume lvm create".
The procedure looks as follows:
Step 1: Identify the correct "ceph-block-dbs" journaling device with "ceph-volume inventory" and "ceph-volume lvm list". "lsblk" will also be informative.
In this example, /dev/sdg is the journaling device.
Use lvm tools: pvs/pvscan, vgs/vgscan, lvs/lvscan
ceph-osd01:~ # vgs
  VG                                                  #PV #LV #SN Attr   VSize  VFree
  ceph-block-1b9d15cb-0577-4e03-a588-e868272cc93c       1   1   0 wz--n- 39.00g     0
  ceph-block-31cb174e-a6c8-4878-9559-303beca71ad4       1   1   0 wz--n- 39.00g     0
  ceph-block-65ab1a0a-dbc1-4937-b546-2a6402b3a209       1   1   0 wz--n- 39.00g     0
  ceph-block-9b812f5c-a44a-4e24-9608-a243e75e7e37       1   1   0 wz--n- 24.00g     0
  ceph-block-a285e16b-3a51-4a1f-8b4b-70d1d675f7a3       1   1   0 wz--n- 24.00g     0
  ceph-block-dbs-c9e13761-98e1-42b7-8154-cda011c856ec   1   3   0 wz--n- 19.00g  7.00g  <<<===== /dev/sdg
  ceph-block-dbs-e05e8bdb-e62c-4315-9645-6077439afb23   1   3   0 wz--n- 19.00g  7.00g
  ceph-block-dc9b8edf-5aa6-4a5e-a525-49e3fd6c4d94       1   1   0 wz--n- 39.00g     0
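To narrow the output to just the journaling volume groups and their free space, a small sketch (the grep pattern assumes the default "ceph-block-dbs" naming shown above):
vgs -o vg_name,vg_size,vg_free --units g | grep ceph-block-dbs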
Step 2: Create the new LV (journaling partition). Note: "--size 4G" should be replaced with the desired size of the journaling device (e.g. 62G).
lvcreate -n osd-block-db-$(cat /proc/sys/kernel/random/uuid) ceph-block-dbs-c9e13761-98e1-42b7-8154-cda011c856ec --size 4G
  Logical volume "osd-block-db-8310cbee-1ec3-4107-a4d9-460024541ea9" created.

Verify with "lvs".
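For example, to confirm the new LV exists with the requested size (VG name taken from the example above):
lvs -o lv_name,lv_size ceph-block-dbs-c9e13761-98e1-42b7-8154-cda011c856ec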
Step 3: Verify that "/dev/sdi" is an LVM physical volume (PV).
Use "pvs" or "pvscan" to check. If the device, in this case "/dev/sdi", is not listed, run:
pvcreate /dev/sdi
Then verify again with "pvs" or "pvscan".
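A compact way to perform the check-and-create in one step (a sketch; adjust the device name as needed):
# Create the PV only if "pvs" does not already report it:
pvs /dev/sdi || pvcreate /dev/sdi
pvs /dev/sdi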
Step 4: Now use "ceph-volume" to create the missing OSD:
From "ceph-volume inventory" I see "/dev/sdi" as the available device for an osd.
From the command and output in step 2, deploy the new osd :
ceph-osd01:~ # ceph-volume lvm create --data /dev/sdi --block.db ceph-block-dbs-c9e13761-98e1-42b7-8154-cda011c856ec/osd-block-db-8310cbee-1ec3-4107-a4d9-460024541ea9
---[cut here]---
--> ceph-volume lvm activate successful for osd ID: 30
--> ceph-volume lvm create successful for: /dev/sdi
Verify with:
mount | grep ceph
ceph-volume inventory
ceph-volume lvm list
ceph -s
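As an additional check, assuming the new OSD received ID 30 as in the example output above:
# Confirm the OSD appears in the CRUSH tree and is up:
ceph osd tree | grep -w osd.30
# Show the LVs backing the new OSD:
ceph-volume lvm list /dev/sdi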
Disclaimer
This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.
- Document ID: 000019599
- Creation Date: 01-Apr-2020
- Modified Date: 23-Oct-2020
- SUSE Enterprise Storage
For questions or concerns with the SUSE Knowledgebase please contact: tidfeedback[at]suse.com