HEALTH_WARN 2 stray host(s) with 2 daemon(s) not managed by cephadm

This document (000019915) is provided subject to the disclaimer at the end of this document.

Environment

SUSE Enterprise Storage 7

Situation

SES7: HEALTH_WARN 2 stray host(s) with 2 daemon(s) not managed by cephadm

In this case the stray daemons are mon daemons. If the daemons are moved to ceph4 or ceph5, the cluster reports healthy. It appears that when the mon daemons were deployed on ceph1 and ceph2, they were registered with the short host name rather than the FQDN.

hcpoceph71:~ # ceph health detail
HEALTH_WARN 2 stray host(s) with 2 daemon(s) not managed by cephadm
[WRN] CEPHADM_STRAY_HOST: 2 stray host(s) with 2 daemon(s) not managed by cephadm
    stray host ceph1 has 1 stray daemons: ['mon.ceph1']
    stray host ceph2 has 1 stray daemons: ['mon.ceph2']
ceph1:~ # ceph -s
  cluster:
    id:     90122986-8059-11eb-ae6c-3868dd37f020
    health: HEALTH_WARN
            2 stray host(s) with 2 daemon(s) not managed by cephadm

  services:
    mon: 3 daemons, quorum ceph3,ceph2,ceph1 (age 23h)
    mgr: ceph3.gzgvmf(active, since 3d), standbys: ceph2.example.com.zvnopr, ceph1.kxzkxs
    mds: CEPHFS:1 {0=CEPHFS.ceph3.dqkmtv=up:active} 2 up:standby
    osd: 30 osds: 30 up (since 23h), 30 in (since 3d)
    rgw: 4 daemons active (RGW_REALM.RGW_ZONE.ceph1.ydfuzm, RGW_REALM.RGW_ZONE.ceph3.jslbsd, RGW_REALM.RGW_ZONE.ceph4.ztcyln, RGW_REALM.RGW_ZONE.ceph5.wqakuf)

  task status:

  data:
    pools:   10 pools, 265 pgs
    objects: 296 objects, 6.3 MiB
    usage:   31 GiB used, 384 TiB / 384 TiB avail
    pgs:     265 active+clean

  io:
    client:   1.7 KiB/s rd, 1 op/s rd, 0 op/s wr


For background on fully qualified vs. bare host names with cephadm, see:
https://docs.ceph.com/en/latest/cephadm/host-management/#fully-qualified-domain-names-vs-bare-host-names
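
The stray host warning means that a running daemon reports a host name that cephadm does not have in its host list. As a hypothetical check (not part of the output above), the host name a mon daemon reports can be read from its metadata:

ceph1:~ # ceph mon metadata ceph1 | grep hostname
    "hostname": "ceph1",

Compare this with the hosts registered in cephadm: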

ceph1:~ # ceph orch host ls
HOST                 ADDR                 LABELS  STATUS
ceph1.example.com  ceph1.example.com
ceph2.example.com  ceph2.example.com
ceph3.example.com  ceph3.example.com
ceph4.example.com  ceph4.example.com
ceph5.example.com  ceph5.example.com


'hostname' returns the short name on ceph1 and ceph2:

ceph1:~ # salt '*' cmd.shell 'hostname'
ceph1.example.com:
ceph1
ceph4.example.com:
ceph4.example.com
ceph3.example.com:
ceph3.example.com
ceph2.example.com:
ceph2
ceph5.example.com:
ceph5.example.com


'hostname -f' returns the FQDN on all nodes:

ceph1:~ # salt '*' cmd.shell 'hostname -f'
ceph3.example.com:
ceph3.example.com
ceph4.example.com:
ceph4.example.com
ceph2.example.com:
ceph2.example.com
ceph5.example.com:
ceph5.example.com
ceph1.example.com:
ceph1.example.com


The output of salt '*' cmd.shell 'cat /etc/hostname' is consistent with the data above: /etc/hostname contains the short name on ceph1 and ceph2 and the FQDN on the other nodes.
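
For illustration, and assuming the same pattern as the 'hostname' output above, the result would look similar to:

ceph1:~ # salt '*' cmd.shell 'cat /etc/hostname'
ceph1.example.com:
ceph1
ceph2.example.com:
ceph2
ceph3.example.com:
ceph3.example.com
ceph4.example.com:
ceph4.example.com
ceph5.example.com:
ceph5.example.com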

The host name can be changed with hostnamectl. For example, to set the fully qualified name on ceph1:
ceph1:~ # hostnamectl set-hostname ceph1.example.com
ceph1:~ # cat /etc/hostname
ceph1.example.com


Or, to set the short host name:
ceph1:~ # hostnamectl set-hostname ceph1
ceph1:~ # cat /etc/hostname
ceph1
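
Whichever form is chosen, the goal is that the host name each node reports matches the host name registered with cephadm. In this case the hosts were added by FQDN (see the ceph orch host ls output above), so setting the FQDN on ceph1 and ceph2 makes them consistent. Once the names match, the stray host warning is expected to clear after cephadm refreshes its view of the daemons; restarting the affected mon daemons may be needed for the new name to be picked up (an assumption, not shown in the output above). A hypothetical re-check, assuming the FQDN was set:

ceph1:~ # hostname
ceph1.example.com
ceph1:~ # ceph health detail
HEALTH_OK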

Resolution

Make the host names consistent. Use hostnamectl, as shown above, so that the host name each node reports matches the host name registered with cephadm (see the ceph orch host ls output above).

Cause

Host names are inconsistent: ceph1 and ceph2 report the short host name, while cephadm has them registered by FQDN.

Status

Top Issue

Disclaimer

This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.

  • Document ID: 000019915
  • Creation Date: 16-Mar-2021
  • Modified Date: 12-Apr-2023
    • SUSE Enterprise Storage
