Upgrade from SLES12-SP5 pacemaker cluster with clvmd+LVM setup to SLES15-SP6
This document (000021505) is provided subject to the disclaimer at the end of this document.
Environment
SUSE Linux Enterprise High Availability Extension 12 SP5
SUSE Linux Enterprise High Availability Extension 15 SP6
Situation
Starting with SUSE Linux Enterprise High Availability Extension 15, clvmd (the clustered LVM daemon) has been replaced by lvmlockd. After an upgrade, cluster services that still reference clvmd resources fail to start because the clvmd binary is missing.
Additional steps are therefore required both pre- and post-upgrade.
This how-to provides a general outline of the transition from clvmd to lvmlockd.
In more complex setups, the steps should be adjusted to match the existing configuration.
Resolution
The example below shows a typical SLES12-SP5 cluster configuration with dlm, clvmd, a clustered volume group (vg1), and two filesystems on top of it:
primitive clvmd clvm \
op monitor timeout=90s interval=30s
primitive dlm ocf:pacemaker:controld \
op monitor interval=60 timeout=60
primitive rsc-LVM LVM \
params volgrpname=vg1 exclusive=true \
op monitor interval=120s timeout=120s
primitive rsc-fs1 Filesystem \
params device="/dev/vg1/lv1" directory="/fs1" fstype=xfs options="noatime,defaults" \
op monitor interval=30s timeout=60s
primitive rsc-fs2 Filesystem \
params device="/dev/vg1/lv2" directory="/fs2" fstype=xfs options="noatime,defaults" \
op monitor interval=30s timeout=60s
group g-lvm-fs rsc-LVM rsc-fs1 rsc-fs2
group g-storage dlm clvmd
clone cl-storage g-storage \
meta interleave=true ordered=true target-role=Started
order o-storage cl-storage g-lvm-fs
To migrate this configuration from clvmd to lvmlockd, perform the following steps:
1. stop all resources that access the clustered logical volumes
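To see which resources are currently running and to stop one of them, commands like the following can be used (rsc-app1 is a hypothetical application resource, not part of the example configuration above):
crm status
crm resource stop rsc-app1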
2. remove the cluster flag from the volume group:
vgchange -cn vg1
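To verify that the flag is gone, check the volume group attributes; the sixth character of the Attr column ("c" for clustered) should now read "-":
vgs -o vg_name,vg_attr vg1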
3. stop g-lvm-fs (responsible for LV activation and mounting) and cl-storage (dlm+clvmd)
crm resource stop g-lvm-fs
crm resource stop cl-storage
4. remove clvmd and all LVM primitives
crm configure delete clvmd
crm configure delete rsc-LVM
5. stop and disable pacemaker on all nodes; choose any node and run
crm cluster run "systemctl disable --now pacemaker"
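To confirm that pacemaker is stopped and disabled on all nodes, the same mechanism can be used:
crm cluster run "systemctl is-active pacemaker"
crm cluster run "systemctl is-enabled pacemaker"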
6. start the upgrade as described in our upgrade guide.
7. after a successful upgrade of all nodes, update /etc/lvm/lvm.conf on every node
In "global" section:
set: use_lvmlockd = 1
remove line: locking_type = 3
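After the change, the relevant part of the "global" section in /etc/lvm/lvm.conf should look similar to this on every node:
global {
    # lvmlockd replaces clvmd; the old "locking_type = 3" line is removed entirely
    use_lvmlockd = 1
}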
8. start pacemaker on all nodes
systemctl enable --now pacemaker
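As in step 5, this can be executed on all nodes from a single node:
crm cluster run "systemctl enable --now pacemaker"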
9. add new lvmlockd and LVM-activate primitives:
crm configure
primitive lvmlockd lvmlockd \
op start timeout="90" \
op stop timeout="100" \
op monitor interval="30" timeout="90"
primitive rsc-LVM LVM-activate \
params vgname=vg1 vg_access_mode=lvmlockd \
op start timeout=90s interval=0 \
op stop timeout=90s interval=0 \
op monitor interval=30s timeout=90s
modgroup g-storage add lvmlockd
modgroup g-lvm-fs add rsc-LVM before rsc-fs1
commit
quit
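Before starting the resources, the updated configuration can be reviewed with:
crm configure show cl-storage g-lvm-fs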
10. start cl-storage (dlm+lvmlockd)
crm resource start cl-storage
11. step #10 started dlm and lvmlockd, which makes it possible to set the volume group's lock type to dlm:
vgchange --lock-type dlm vg1
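The new lock type can be verified with the vg_lock_type reporting field (assuming the LVM2 version shipped with SLES15-SP6):
vgs -o vg_name,vg_lock_type vg1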
12. start g-lvm-fs
crm resource start g-lvm-fs
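Finally, verify that all resources are running and the filesystems are mounted, for example:
crm status
mount | grep -E '/fs1|/fs2'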
Cause
clvmd is no longer shipped with SUSE Linux Enterprise High Availability Extension 15; lvmlockd is its replacement (see the release notes linked below).
Additional Information
- https://www.suse.com/releasenotes/x86_64/SLE-HA/15-SP2/index.html#jsc-SLE-9163
- https://documentation.suse.com/sle-ha/15-SP6/single-html/SLE-HA-administration/#cha-ha-clvm
Disclaimer
This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.
- Document ID: 000021505
- Creation Date: 24-Jul-2024
- Modified Date: 22-Aug-2024
- SUSE Linux Enterprise High Availability Extension
For questions or concerns with the SUSE Knowledgebase please contact: tidfeedback[at]suse.com