Getting "Stale NFS file handle" errors after cluster failover
This document (3714483) is provided subject to the disclaimer at the end of this document.
Environment
Situation
Error: "Cannot open: Stale NFS file handle"
An NFS resource in a High Availability (HA) cluster fails after it has been migrated to another node:
m20:~ # crm_mon -1
============
Last updated: Thu Sep 21 13:04:03 2006
Current DC: m20 (f56c650f-1047-453c-907c-859e9c6cb598)
2 Nodes configured.
3 Resources configured.
============
Node: m20 (f56c650f-1047-453c-907c-859e9c6cb598): online
Node: m12 (323e7b1b-1545-4e26-a9a5-5f8e65871752): online
Resource Group: NFS
m55_ip (heartbeat::ocf:IPaddr): Started m20
nfsvg_lvm (heartbeat::ocf:LVM): Started m20
nfs_share_fs (heartbeat::ocf:Filesystem): Started m20
nfsserver (lsb:nfsserver): Started m20
Resolution
An NFS file handle includes the exported filesystem's fsid, which by default is derived from the major/minor numbers of the underlying block device. Those numbers can differ between cluster nodes, so after a failover, clients hold file handles that the new node does not recognize. Pinning a fixed fsid on the export keeps the handles valid across nodes.

Change the export line in /etc/exports from this:
/exports/data *(rw,root_squash,sync)
to this:
/exports/data *(rw,root_squash,sync,fsid=25)
The /etc/exports file should be the same on each node hosting the NFS resource.
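Whether the files really match can be checked with a plain diff. The following is a minimal sketch: two local temporary files stand in for copies of /etc/exports fetched from each node (for example via scp from m20 and m12); the filenames and contents are illustrative, not part of the original document.

```shell
# Compare two copies of /etc/exports, e.g. one fetched from each cluster node.
# The temporary sample files below are stand-ins for the real files.
node1=$(mktemp) && node2=$(mktemp)
printf '/exports/data *(rw,root_squash,sync,fsid=25)\n' > "$node1"
printf '/exports/data *(rw,root_squash,sync,fsid=25)\n' > "$node2"

# diff exits 0 only when the files are identical.
if diff -u "$node1" "$node2" >/dev/null; then
    echo "exports files match"
else
    echo "exports files differ"
fi
rm -f "$node1" "$node2"
```

On a real cluster, replace the sample files with the actual /etc/exports from each node hosting the NFS resource; any difference in the diff output should be reconciled before the next failover.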
Additional Information
WARNING: You need to make sure the fsid number is unique across all exported file systems.
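One way to catch duplicate fsid values before they cause problems is to scan the exports file for repeated numbers. This is a minimal sketch, not part of the original document: it runs against a temporary sample file, so point EXPORTS at the real /etc/exports in practice.

```shell
# Scan an exports file for duplicate fsid= values.
# A temporary sample file stands in for /etc/exports here.
EXPORTS=$(mktemp)
cat > "$EXPORTS" <<'EOF'
/exports/data *(rw,root_squash,sync,fsid=25)
/exports/home *(rw,root_squash,sync,fsid=26)
EOF

# Pull out every fsid= token and list any value that occurs more than once.
dupes=$(grep -o 'fsid=[0-9]*' "$EXPORTS" | sort | uniq -d)
if [ -z "$dupes" ]; then
    echo "all fsid values are unique"
else
    echo "duplicate fsid values: $dupes"
fi
rm -f "$EXPORTS"
```

With the sample file above the script reports that all fsid values are unique; a repeated number would be printed so it can be corrected before clients ever mount the export.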
For additional information, see:
http://linux-ha.org/HaNFS
exports(5)
Disclaimer
This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.
- Document ID: 3714483
- Creation Date: 21-Sep-2006
- Modified Date: 10-Mar-2021
- SUSE Linux Enterprise Server
For questions or concerns with the SUSE Knowledgebase please contact: tidfeedback[at]suse.com