
Enabling shared/clustered disks within DomUs

This document (3447172) is provided subject to the disclaimer at the end of this document.

Environment

SUSE Linux Enterprise High Availability Extension 11 SP4
SUSE Linux Enterprise Server 10 Service Pack 2
SUSE Linux Enterprise Server 10 Service Pack 1
 

Situation

DomU configuration sometimes requires that multiple DomUs have write access to the same physical disk at the same time. The following use cases may apply:
  • A shared clustered file system is needed between DomUs
  • DomU cluster nodes need access to a physical disk

Resolution

The method described in this TID allows multiple DomUs to write to the same backend device at the same time. The default configuration allows only a single DomU to have access to a physical device.

In this TID, replace SLES10 with the name of the DomU guest.
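If you are unsure of the exact guest name, the defined guests can be listed from Dom0:

xm list

The Name column shows the value to use in place of SLES10.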
 
 
NOTE: If you have done any of the following, YOU MUST USE METHOD 2 OR METHOD 3:
  • Used the virt-manager program or YaST to add any other virtual disks
  • Used the virt-manager program or YaST to change CPU or memory allocations
  • Changed the disk allocation on the command line
  • Changed the CPU or memory allocations on the command line

If any post-installation configuration of the guest was done outside the DomU (i.e., by commands run in Dom0) after the initial install, then the virtual machine must be configured via xm commands. If any of the above conditions are true, using Method 1 will produce undesirable effects. For these reasons, SUSE recommends using the Python configuration files (Method 2 or Method 3).

Method 1: Use /etc/xen/vm installation files
 
This method uses the files found in /etc/xen/vm and can only be used when no post-installation configuration has been applied to the DomU. It is shown in many community examples and is recommended by several other Linux vendors. For reasons of stability, SUSE recommends using this method only when there have been no other changes to the DomU's hardware, or when the hardware has been changed using this method.
  1. Go to /etc/xen/vm
  2. Locate the line reading
    disk = [...
  3. For each shared disk that you will be adding, append an entry of the form 'phy:<path to device>,<virtual device name>,w!' before the closing ].
  4. For example, to add an LVM-backed device named test in the system volume group as the second virtual disk, the entry would be:
    'phy:/dev/system/test,xvdb,w!'
    A complete example would be:
    disk = ['file:/var/lib/xen/images/test/test,xvda,w','phy:/dev/system/test,xvdb,w!', ]
  5. Save the file
  6. Import the new file
    xm new -f sles10
  7. Start the Xen VM
    xm start sles10
 
For each node or server, repeat steps 1 through 7.
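To verify the result, the virtual block devices attached to the guest can be listed from Dom0 (the guest name here follows the example above):

xm block-list sles10

Inside the DomU, the new disk should then appear as the chosen device node, for example /dev/xvdb.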

 
 
Method 2: Use Python configuration files
 
This method uses the Python configuration exported by xm to add the disks. It is somewhat more complicated and is recommended only when Method 1 is insufficient.
  1. Export the Python configuration for the Xen DomU
    xm list -l SLES10 > sles10.py
  2. Save a backup of the file in case something goes wrong
  3. Open sles10.py for editing
  4. After the last device listing for a vbd device, add the following:
    (device
    (vbd
    (driver paravirtualised)
    (dev xvdb:disk)
    (uname phy:/dev/system/test)
    (mode 'w!')
    (type disk)
    (backend 0)
    )
    )
  5. Save and close the file
  6. Import the new file
    xm new -F sles10.py
  7. Start the Xen guest
    xm start sles10
Repeat steps 1 through 7 for each guest.
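As an illustration, the complete sequence from Dom0 might look like the following (cp and diff are used here only to keep and review a backup; they are not required):

xm list -l SLES10 > sles10.py
cp sles10.py sles10.py.bak
vi sles10.py            # add the (device (vbd ...)) block shown above
diff sles10.py.bak sles10.py
xm new -F sles10.py
xm start sles10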
 


Method 3: Hot plug the device
 
This method hotplugs the device and is arguably the easiest, but a syntax error will result in having to use Method 2. It allows the device to be added dynamically to running DomUs. However, it will only work for para-virtual DomUs. It is recommended in cases where you cannot afford any downtime and where you are comfortable at the command line.
 
Before beginning this step, back up the running configuration:
xm list -l sles10 > sles10.py
 
The following syntax is used:
xm block-attach <DomU name> <backend device> <frontend device> <mode>
 
For example, to attach the physical device /dev/system/test as /dev/xvdb on the DomU:
xm block-attach SLES10 phy:/dev/system/test xvdb w!
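Should the disk need to be detached again later, xm block-detach can be used with the same DomU and device names (this assumes the example above):

xm block-detach SLES10 xvdb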
 
 
If something goes wrong, shut down the DomU, restore the Python configuration file (xm new -F sles10.py), and then restart the DomU and try again, or switch to Method 2.
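A minimal sketch of that recovery, assuming the guest is named sles10:

xm shutdown sles10
xm new -F sles10.py
xm start sles10

If xm new reports that the domain already exists, the managed definition may first need to be removed with xm delete sles10.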

Additional Information

At the time of writing this TID, SUSE supports only OCFS2 as a shared or clustered file system. By definition, a file system mounted by more than one server at the same time is a clustered file system. Mounting ReiserFS or EXT3 on multiple nodes can result in file corruption and data loss, and is not supported by SUSE.
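As an illustration only (OCFS2 setup is beyond the scope of this TID), once the shared disk is visible in all DomUs, an OCFS2 file system could be created on it. This sketch assumes the OCFS2 cluster stack is already configured on all nodes and that the shared disk appears as /dev/xvdb:

mkfs.ocfs2 -N 2 -L shared /dev/xvdb    # run on one node only; -N sets the number of node slots
mkdir -p /mnt/shared                   # run on each node
mount -t ocfs2 /dev/xvdb /mnt/shared   # run on each node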
 
 

Disclaimer

This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.

  • Document ID: 3447172
  • Creation Date: 17-Dec-2007
  • Modified Date: 24-Mar-2021
    • SUSE Linux Enterprise Server
