After a reboot, some OSDs are reported as down.
This document (7021460) is provided subject to the disclaimer at the end of this document.
Environment
SUSE Enterprise Storage
Situation
After rebooting a cluster node, some of the OSDs hosted on that node do not come back up. The output of "ceph osd tree" reports them as down, for example:

:~ # ceph osd tree
ID WEIGHT  TYPE NAME           UP/DOWN REWEIGHT PRIMARY-AFFINITY
-2 1.33096     host ses-node-X
 1 0.21329         osd.1          down  1.00000          1.00000
 4 0.21329         osd.4            up  1.00000          1.00000
 7 0.90439         osd.7          down  1.00000          1.00000
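To find out which device and partition back a down OSD, "ceph-disk list" can be run on the affected node. The output below is an illustrative sketch; device names and OSD numbers will differ per system:

:~ # ceph-disk list
...
/dev/sdi :
 /dev/sdi1 ceph data, prepared, cluster ceph, osd.1, journal /dev/sdi2
 /dev/sdi2 ceph journal, for /dev/sdi1
...

A data partition shown as "prepared" rather than "active" is consistent with an OSD whose activation did not complete at boot.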
Resolution
Log in to the affected node and activate all Ceph disks; this starts the OSDs that failed to come up during boot:

:~ # ceph-disk activate-all
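To keep the problem from recurring on the next reboot, the activation timeout can be raised. On ceph versions where the ceph-disk unit reads its timeout from a CEPH_DISK_TIMEOUT environment variable (an assumption; verify against the installed unit with "systemctl cat ceph-disk@.service"), a systemd drop-in is a minimal sketch of such a change:

:~ # systemctl edit ceph-disk@.service
[Service]
Environment=CEPH_DISK_TIMEOUT=10000

The value 10000 (seconds) is illustrative. On versions where the timeout is hardcoded in the ExecStart line, the unit itself has to be overridden instead.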
Cause
The ceph-disk activation that systemd runs at boot did not finish within the unit's timeout, so the affected OSD partitions were never activated and the corresponding ceph-osd services were never started. Because activations take time and can be serialized by a lock, nodes with many OSDs are more likely to exceed the timeout.
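The activation command run by the unit has roughly the following shape. This is an illustrative reconstruction based on upstream ceph-disk units, not necessarily the exact line shipped with the installed version, which can be checked with "systemctl cat":

:~ # systemctl cat ceph-disk@.service
...
[Service]
Environment=CEPH_DISK_TIMEOUT=10000
ExecStart=/bin/sh -c 'timeout $CEPH_DISK_TIMEOUT flock /var/lock/ceph-disk-$(basename %f) /usr/sbin/ceph-disk --verbose --log-stdout trigger --sync %f'
...

If the "timeout" wrapper kills the command, it exits with status 124, which matches the journal output shown under Additional Information.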
Additional Information
Whether an OSD failed to start because of the activation timeout can be verified in the journal of the corresponding ceph-disk unit:
:~ # journalctl -u ceph-disk@dev-sdi2.service
-- Logs begin at Tue 2017-07-11 14:06:49 CEST, end at Wed 2017-07-19 21:17:45 CEST. --
jul 11 14:07:09 sesnode-4 systemd[1]: Stopped Ceph disk activation: /dev/sdi2.
jul 11 14:07:09 sesnode-4 systemd[1]: Starting Ceph disk activation: /dev/sdi2...
...
jul 11 14:09:10 sesnode-4 systemd[1]: ceph-disk@dev-sdi2.service: Main process exited, code=exited, status=124/n/a
jul 11 14:09:10 sesnode-4 systemd[1]: Failed to start Ceph disk activation: /dev/sdi2.
jul 11 14:09:10 sesnode-4 systemd[1]: ceph-disk@dev-sdi2.service: Unit entered failed state.
jul 11 14:09:10 sesnode-4 systemd[1]: ceph-disk@dev-sdi2.service: Failed with result 'exit-code'.
The "code=exited, status=124" in the above output indicates the ceph-disk timeout was reached.
Disclaimer
This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.
- Document ID: 7021460
- Creation Date: 18-Sep-2017
- Modified Date: 03-Mar-2020
- SUSE Enterprise Storage
For questions or concerns with the SUSE Knowledgebase please contact: tidfeedback[at]suse.com