Multipathing on SLES 10
Content
1. Introduction
2. Environment
2.1 Connection via fibrechannel
2.2 Connection via iSCSI
2.3 Operating System Specifics
3. Installation Scenarios
3.1 Multipathed system root and boot Partition on SAN device
3.2 Multipathed LVM root and boot Partition on SAN device
3.3 Multipath setup via autoyast installation
1. Introduction
Multipathing is a commonly used technique for introducing failure tolerance into a system's I/O stack.
The idea is to have redundant paths that connect the local system to an external storage. If one path fails, I/O continues over the remaining paths.
This avoids system downtime due to failing I/O.
2. Environment
On Linux, there are two commonly used methods for connecting to external storage.
2.1. Connection via fibrechannel
The local system contains a fibrechannel host bus adapter (HBA) that is connected to a remote fibrechannel storage. Apart from a direct connection to the storage, it is also possible to use fibrechannel switches.
Fibrechannel HBAs have their own BIOS, which allows them to connect to the SAN even before the bootloader loads. That way, the bootloader can be started from the remote storage without the need for any local storage device (you still need to be able to select the HBA as boot device in the machine's BIOS).
A path is defined as the complete route from host to storage, including the HBA, cabling and possibly a switch. A multipath solution requires that there is more than one of these paths.
The number of paths is easy to see if direct connections are made between a local machine and a storage (it is simply the number of physical cables), but less obvious if a switch is used. If, for example, there are 3 HBAs connected to a switch, and the switch is connected to both ports of a storage, this results in a total of 6 paths.
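The path arithmetic from the example above can be sketched as a quick shell calculation (the counts are the ones from the example, not probed from real hardware):

```shell
# Total paths = number of HBAs x number of storage ports
# reachable through the switch (values from the example above)
hba_count=3
storage_ports=2
echo "$((hba_count * storage_ports)) paths"
```

On a live system you would instead count the entries for one LUN under /dev/disk/by-path.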
2.2. Connection via iSCSI
The local system establishes a TCP/IP connection to a remote iSCSI server. This can be done either via a normal network interface card or via a dedicated iSCSI adapter card.
Each connection to an iSCSI server is seen as one path on the host.
Using a network interface as an iSCSI adapter does not affect its normal function as a network device. It is still possible to assign an IP address to it and connect to it, e.g. via ssh.
With a dedicated iSCSI adapter, it is also possible to connect to the iSCSI server before the bootloader loads. This allows a remote boot, just as with a fibrechannel HBA.
2.3 Operating System Specifics
All paths to the storage are seen by the Linux kernel as separate hard disks and get separate Linux device names.
The multipath functionality on SLES is based on virtual devices that are created by the device mapper. Under these devices, all paths to a particular storage device are assembled.
The system detects which paths belong to the same storage device by unique IDs that are assigned to each storage device via different methods (e.g. disk serial number, file system unique ID, sysfs path, etc.).
It then creates a symlink for each unique ID below /dev/disk/. Without multipathing, this link points to the last path to the device, e.g.:
ls -l /dev/disk/by-id/
lrwxrwxrwx 1 root root  9 2008-03-20 11:15 scsi-36006048000028350131253594d303145 -> ../../sdg
lrwxrwxrwx 1 root root 10 2008-03-20 11:15 scsi-36006048000028350131253594d303145-part1 -> ../../sdg1
lrwxrwxrwx 1 root root 10 2008-03-20 11:15 scsi-36006048000028350131253594d303145-part2 -> ../../sdg2
If multipathing is activated, this link points to the multipath device created by the device mapper, e.g.:
ls -l /dev/disk/by-id/
lrwxrwxrwx 1 root root 11 2008-03-20 12:05 scsi-36006048000028350131253594d303145 -> ../../dm-22
lrwxrwxrwx 1 root root 11 2008-03-20 12:05 scsi-36006048000028350131253594d303145-part1 -> ../../dm-45
lrwxrwxrwx 1 root root 11 2008-03-20 12:05 scsi-36006048000028350131253594d303145-part2 -> ../../dm-50
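A quick way to see whether the by-id links already point to multipath devices is to look at the link targets: device-mapper nodes (dm-*) indicate an active multipath device, sd* a plain path device. The following sketch classifies the sample listing above; on a live system you would pipe `ls -l /dev/disk/by-id` into the awk command instead of the here-document:

```shell
# Classify each by-id link by its target (last field of the line):
# dm-* means multipath device, anything else a single path device.
awk '{ print $1, ($NF ~ /dm-/ ? "multipath" : "single path") }' <<'EOF'
scsi-36006048000028350131253594d303145 -> ../../dm-22
scsi-36006048000028350131253594d303145-part1 -> ../../dm-45
scsi-36006048000028350131253594d303145-part2 -> ../../dm-50
EOF
```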
3. Installation Scenarios
3.1. Multipathed system root and boot Partition on SAN device
The machine boots from a multipathed LUN on the SAN that contains a /boot partition and the system root /.
- Configure the LUN that should boot the machine in the HBA's BIOS.
The particular steps depend on the hardware. Refer to the vendor's documentation for details.
Write down the node name of the remote FC port and the LUN number of the boot LUN, e.g.:
0x500604843978c00f 6
(the node name is the same as the value under /sys/class/fc_remote_ports/rport-X:X-X/node_name)
In the example above, the 7th LUN of the storage (LUN numbers start at 0) is used for the local machine.
You can simplify the search for the correct LUN by configuring your storage to make only the LUN designated for installation visible to the Linux machine.
- Find the Linux device name for the boot LUN
Start the installation system.
When the YaST window is shown, switch to a command shell (e.g. via Ctrl+Alt+F4).
Find the WWID of the FC remote port and the LUN number in the path of the /dev/disk/by-path devices, e.g.:
ll /dev/disk/by-path | grep 0x500604843978c00f | grep 0x0006
The output should contain the Linux device of all paths to the LUN, e.g.:
#########################################################################
lrwxrwxrwx 1 root root 10 Apr  3 15:31 pci-0000:08:0b.0-fc-0x500604843978c00f:0x0006000000000000 -> ../../sdau
lrwxrwxrwx 1 root root 10 Apr  3 15:31 pci-0000:08:0c.0-fc-0x500604843978c00f:0x0006000000000000 -> ../../sddw
#########################################################################
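To reduce such a listing to the plain device names, a small grep/sed pipeline can help. The sketch below runs against the sample output above; on a live system you would pipe `ls -l /dev/disk/by-path` into it instead of the here-document:

```shell
# Keep only the lines for the boot LUN (remote port 0x500604843978c00f,
# LUN 0x0006...) and strip everything but the device name
grep '0x500604843978c00f:0x0006' <<'EOF' | sed 's|.*\.\./||'
lrwxrwxrwx 1 root root 10 Apr 3 15:31 pci-0000:08:0b.0-fc-0x500604843978c00f:0x0006000000000000 -> ../../sdau
lrwxrwxrwx 1 root root 10 Apr 3 15:31 pci-0000:08:0c.0-fc-0x500604843978c00f:0x0006000000000000 -> ../../sddw
EOF
```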
- Set up custom partitioning
Go back to the installer and continue the installation until you come to the:
Installation Settings menu.
Click there on:
Partitioning > Create Custom Partitioning Setup > Custom Partitioning (for Experts)
3.1. Create a bootloader partition
Find your boot LUN's ID in the column “Device Path” (this is similar to the by-path entry).
You will see the same ID for all paths to the boot LUN.
Just choose one of them.
Create a partition on the selected device, have it formatted as ext2, and assign the mount point /boot to it.
############################
Warning:
The new partitioning of the LUN might not get synced over all paths
before a reboot. Do not touch the other paths until reboot!
############################
3.2. Create a swap partition
Create another partition on the boot LUN, choose the file system ID “0x82 Linux swap”, have it formatted as swap, and choose the mount point swap.
3.3. Create the root partition
Create a third partition on the boot LUN, have it formatted as reiserfs, and assign the mount point /.
Then click on: Finish to confirm the partitioning.
Your partitioning scheme should now look similar to this:
######################################################################################
Partitioning
Create boot partition /dev/sdau1 (101.9 MB) with ext2
Create swap partition /dev/sdau2 (4.0 GB)
Create root partition /dev/sdau3 (10.0 GB) with reiserfs
######################################################################################
- Correct the grub installation
If you get an error message during the bootloader installation, you need to set up grub manually.
In order to do so, switch again to the command shell of the installation system and execute the command:
chroot /mnt/
Then, edit the file:
/boot/grub/device.map
and remove all entries except the device name of your boot LUN (the path on which you have installed your partitions). Set this device to (hd0), e.g.:
(hd0) /dev/sdau
Change /etc/grub.conf to point to this device and the bootloader partition on it, e.g.:
setup --stage2=/boot/grub/stage2 (hd0) (hd0,0)
quit
Save the file and run:
grub </etc/grub.conf
Now edit the file:
/boot/grub/menu.lst
and change the device entry in the root line to hd0, e.g.:
root (hd0,0)
You also need to change the gfxmenu entry, e.g.:
gfxmenu (hd0,0)/message
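For reference, after these edits the relevant parts of the grub files might look like this (device names, IDs and version strings are taken from the examples in this section; the title and initrd lines are illustrative):

```
# /boot/grub/device.map
(hd0)   /dev/sdau

# /etc/grub.conf
setup --stage2=/boot/grub/stage2 (hd0) (hd0,0)
quit

# /boot/grub/menu.lst (relevant lines)
gfxmenu (hd0,0)/message
title SUSE Linux Enterprise Server 10
    root (hd0,0)
    kernel /vmlinuz-2.6.16.46-0.12-smp root=/dev/sdau3 showopts
    initrd /initrd-2.6.16.46-0.12-smp
```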
- Create a new initrd
The above grub error might prevent the creation of an initrd.
Hence you need to create it manually.
In order to do so, execute the command:
mkinitrd
The output should look similar to this:
#############################################################
149:/ # mkinitrd

Root device:    /dev/sdau3 (mounted on / as reiserfs)
Module list:    qla4xxx amd74xx cciss qla2xxx processor thermal fan reiserfs edd (xennet xenblk)

Kernel image:   /boot/vmlinuz-2.6.16.46-0.12-smp
Initrd image:   /boot/initrd-2.6.16.46-0.12-smp
Shared libs:    lib64/ld-2.4.so lib64/libacl.so.1.1.0 lib64/libattr.so.1.1.0 lib64/libc-2.4.so lib64/libdl-2.4.so lib64/libhistory.so.5.1 lib64/libncurses.so.5.5 lib64/libpthread-2.4.so lib64/libreadline.so.5.1 lib64/librt-2.4.so lib64/libuuid.so.1.2 lib64/libnss_files-2.4.so lib64/libnss_files.so.2 lib64/libgcc_s.so.1
Driver modules: ide-core ide-disk scsi_mod sd_mod qla4xxx amd74xx cciss scsi_transport_fc firmware_class qla2xxx processor thermal fan edd
Filesystem modules:     reiserfs
Including:      initramfs fsck.reiserfs
Bootsplash:     SuSE-SLES (1024x768)
16093 blocks
#############################################################
5.1. Setup multipath boot (Optional)
It is also possible to already include multipath support in this stage.
5.1.1 Move the mountpoints to the multipath devices
In order to do so, you first need to edit /etc/fstab and change the mount points from the Linux devices (/dev/sdau*) to the /dev/disk/by-id/ devices, e.g. change:
/dev/sdau3  /      reiserfs  acl,user_xattr  1 1
/dev/sdau1  /boot  ext2      acl,user_xattr  1 2
/dev/sdau2  swap   swap      defaults        0 0
into:
/dev/disk/by-id/scsi-36006048000028350131253594d303145-part3  /      reiserfs  acl,user_xattr  1 1
/dev/disk/by-id/scsi-36006048000028350131253594d303145-part1  /boot  ext2      acl,user_xattr  1 2
/dev/disk/by-id/scsi-36006048000028350131253594d303145-part2  swap   swap      defaults        0 0
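This rewrite can also be scripted. The following sketch applies the same substitution to a copy of the example fstab entries (the ID is the example WWID from above; on a real system, back up /etc/fstab and review the result before replacing it):

```shell
# Replace /dev/sdauN with the corresponding by-id partition link
ID=scsi-36006048000028350131253594d303145
sed "s|^/dev/sdau\([0-9]\)|/dev/disk/by-id/${ID}-part\1|" <<'EOF'
/dev/sdau3 / reiserfs acl,user_xattr 1 1
/dev/sdau1 /boot ext2 acl,user_xattr 1 2
/dev/sdau2 swap swap defaults 0 0
EOF
```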
5.1.2 Create a multipath enabled initrd
Afterwards, you need to create a multipath enabled initrd via the command:
mkinitrd -f "mpath"
The output should look like this:
#############################################################
149:/ # mkinitrd -f "mpath"

Root device:    /dev/disk/by-id/scsi-36006048000028350131253594d303145-part3 (/dev/sdau3) (mounted on / as reiserfs)
Module list:    qla4xxx amd74xx cciss qla2xxx processor thermal fan reiserfs edd (xennet xenblk)

Kernel image:   /boot/vmlinuz-2.6.16.46-0.12-smp
Initrd image:   /boot/initrd-2.6.16.46-0.12-smp
Shared libs:    lib64/ld-2.4.so lib64/libacl.so.1.1.0 lib64/libattr.so.1.1.0 lib64/libc-2.4.so lib64/libdevmapper.so.1.02 lib64/libdl-2.4.so lib64/libhistory.so.5.1 lib64/libncurses.so.5.5 lib64/libpthread-2.4.so lib64/libreadline.so.5.1 lib64/librt-2.4.so lib64/libsysfs.so.1.0.3 lib64/libuuid.so.1.2 lib64/libnss_files-2.4.so lib64/libnss_files.so.2 lib64/libgcc_s.so.1
Driver modules: ide-core ide-disk scsi_mod sd_mod qla4xxx amd74xx cciss scsi_transport_fc firmware_class qla2xxx processor thermal fan edd dm-mod dm-multipath dm-round-robin dm-emc
Filesystem modules:     reiserfs
Including:      initramfs dm/mpath fsck.reiserfs
Bootsplash:     SuSE-SLES (1024x768)
17297 blocks
##############################################################
Attention:
Before creating a multipath-enabled initrd, please check whether you have configured user_friendly_names in /etc/multipath.conf. For details, please read support TID 7001133 (Restrictions for the usage of user_friendly_names in multipath configurations).
5.1.3 Enable multipathing in the bootloader
You also need to modify the root boot parameter in /boot/grub/menu.lst to point to the multipath device, so that the kernel line looks like this:
kernel /vmlinuz-2.6.16.46-0.12-smp root=/dev/disk/by-id/scsi-36006048000028350131253594d303145-part3 console=tty0 console=ttyS0,115200 resume=/dev/disk/by-id/scsi-36006048000028350131253594d303145-part2 splash=silent showopts
- Finish installation
After this, you can leave the chroot with the command: exit
Then, switch back to the installer screen and confirm the grub error message.
Click on “No” when asked to retry the bootloader installation.
The machine will then reboot.
After the reboot you can follow the usual installation process until the installation is finished.
++++++++++++++++++++++Post Installation+++++++++++++++++++++
- Set up multipathing
7.1 Activate the multipath init scripts:
insserv boot.multipath
insserv multipathd
7.2 Set up multipath mount points in /etc/fstab
If you have not done so in the first installation stage, you need to modify the mount points in /etc/fstab as described in 5.1.1.
7.3 Create a multipath enabled initrd
If not already done during the first installation stage, you need to create a multipath-enabled initrd now by executing:
mkinitrd -f "mpath"
as described in 5.1.2.
7.4 Enable multipathing in the bootloader configuration
- Modify /boot/grub/menu.lst as described in 5.1.3.
- Adapt the multipath configuration file to your SAN.
A number of storage arrays are already supported automatically by the multipath tools.
Check whether you can find your storage in the output of:
multipath -t
If your storage is not supported automatically by the multipath tools, you need to create a configuration for it.
In order to do so, copy the template from the package documentation, e.g.:
cp /usr/share/doc/packages/multipath-tools/multipath.conf.synthetic /etc/multipath.conf
and edit it to your needs.
Most of the time you only need to add a device section.
The details of the setup depend highly on the hardware, so no general recommendation can be given.
Please refer to your hardware vendor's documentation to find them out.
For an IBM 1814 FAStT, the configuration could look like this:
devices {
        device {
                vendor                  "IBM"
                product                 "1722-600"
                path_grouping_policy    group_by_serial
                path_checker            tur
                path_selector           "round-robin 0"
                prio_callout            "/sbin/mpath_prio_tpc /dev/%n"
                failback                immediate
                features                "1 queue_if_no_path"
                no_path_retry           300
        }
}
- Reboot the machine
After the reboot, your system is set up with all partitions on multipath.
To update the bootloader or to modify the partitioning of the boot LUN, you need to deactivate multipath.
You can do this by rebooting the machine and adding the boot option:
multipath=off
to the kernel command line.
3.2 Multipathed LVM root and boot Partition on SAN device
The machine boots from a multipathed LUN on the SAN that contains a /boot partition and an LVM physical volume. This physical volume is part of a volume group that contains the system root in a logical volume.
- Configure the LUN that should boot the machine in the HBA's BIOS.
The particular steps depend on the hardware. Refer to the vendor's documentation for details.
Write down the node name of the remote FC port and the LUN number of the boot LUN, e.g.:
0x500604843978c00f 6
(the node name is the same as the value under /sys/class/fc_remote_ports/rport-X:X-X/node_name)
In the example above, the 7th LUN of the storage (LUN numbers start at 0) is used for the local machine.
You can simplify the search for the correct LUN by configuring your storage to make only the LUN designated for installation visible to the Linux machine.
- Find the Linux device name for the boot LUN
Start the installation system.
When the YaST window is shown, switch to a command shell (e.g. via Ctrl+Alt+F4).
Find the WWID of the FC remote port and the LUN number in the path of the /dev/disk/by-path devices, e.g.:
ll /dev/disk/by-path | grep 0x500604843978c00f | grep 0x0006
The output should contain the Linux device of all paths to the LUN, e.g.:
#########################################################################
lrwxrwxrwx 1 root root 10 Apr  3 15:31 pci-0000:08:0b.0-fc-0x500604843978c00f:0x0006000000000000 -> ../../sdau
lrwxrwxrwx 1 root root 10 Apr  3 15:31 pci-0000:08:0c.0-fc-0x500604843978c00f:0x0006000000000000 -> ../../sddw
#########################################################################
- Set up custom partitioning
Go back to the installer and continue the installation until you come to the:
Installation Settings menu.
Click there on:
Partitioning > Create Custom Partitioning Setup > Custom Partitioning (for Experts)
3.1. Create a bootloader partition
Find your boot LUN's ID in the column “Device Path” (this is similar to the by-path entry).
You will see the same ID for all paths to the boot LUN.
Just choose one of them.
Create a partition on the selected device, have it formatted as ext2, and assign the mount point /boot to it.
############################
Warning:
The new partitioning of the LUN might not get synced over all paths
before a reboot. Do not touch the other paths until reboot!
############################
3.2. Create the lvm setup
Create another partition on the boot LUN and change the File System ID to:
0x8E Linux LVM
and confirm with OK.
Go into the LVM setup menu by clicking on the LVM button.
Create a new volume group, e.g. rootvg, and add the physical volume(s) you have designated for LVM to it.
Go to the next setup step by clicking on: Next and select the new volume group rootvg in the upper left dropdown list.
Now click on: Add and configure the first logical volume, e.g. use it for swap.
Create another logical volume for the system root.
Finish the lvm setup by clicking on: Next.
Then click on: Finish to confirm the partitioning.
Your partitioning scheme should now look similar to this:
######################################################################################
Partitioning
Create boot partition /dev/sdau1 (101.9 MB) with ext2
Create partition /dev/sdau2 (42.0 GB) with id=8E
Create volume group rootvg from /dev/sdau2
Create logical volume /dev/rootvg/lvroot (9.5 GB) for / with reiserfs
Create swap logical volume /dev/rootvg/lvswap (4.0 GB)
######################################################################################
- Correct the grub installation
If you get an error message during the bootloader installation, you need to set up grub manually.
In order to do so, switch again to the command shell of the installation system and execute the command:
chroot /mnt/
Then, edit the file:
/boot/grub/device.map
and remove all entries except the device name of your boot LUN (the path on which you have installed your lvm setup). Set this device to (hd0), e.g.:
(hd0) /dev/sdau
Change /etc/grub.conf to point to this device and the bootloader partition on it, e.g.:
setup --stage2=/boot/grub/stage2 (hd0) (hd0,0)
quit
Save the file and run:
grub </etc/grub.conf
Now edit the file:
/boot/grub/menu.lst
and change the device entry in the root line to hd0, e.g.:
root (hd0,0)
You also need to change the gfxmenu entry, e.g.:
gfxmenu (hd0,0)/message
- Configure the LVM scan filter
Since LVM is not multipath aware, the default LVM scan filter will scan for the LVM signature over all paths.
This can lead to device access race conditions and even to filesystem corruption.
Hence you need to modify the LVM scan filter so that only the multipath devices are scanned.
In order to do so, edit the file /etc/lvm/lvm.conf and put the by-id device path into the scan filter:
# filter = [ "r|/dev/.*/by-path/.*|", "r|/dev/.*/by-id/.*|", "a/.*/" ]
filter = [ "a|/dev/.*/by-id/.*|", "r/.*/" ]
Afterwards, the pvscan output should only show the by-id devices, e.g.:
###############################################################
149:/ # pvscan
  PV /dev/disk/by-id/edd-int13_dev80-part2                        VG rootvg     lvm2 [42.04 GB / 28.53 GB free]
  PV /dev/disk/by-id/scsi-36006048000028350131253594d303134-part1 VG macallanvg lvm2 [42.14 GB / 140.00 MB free]
  PV /dev/disk/by-id/scsi-36006048000028350131253594d303030-part1 VG sapvg1     lvm2 [42.14 GB / 0    free]
  PV /dev/disk/by-id/scsi-36006048000028350131253594d303035-part1 VG sapvg1     lvm2 [42.14 GB / 0    free]
  PV /dev/disk/by-id/scsi-36006048000028350131253594d303041-part1 VG sapvg1     lvm2 [42.14 GB / 0    free]
  PV /dev/disk/by-id/scsi-36006048000028350131253594d303046-part1 VG sapvg1     lvm2 [42.14 GB / 48.00 MB free]
  Total: 6 [252.72 GB] / in use: 6 [252.72 GB] / in no VG: 0 [0   ]
###############################################################
In order to get this included in the initrd, you need to do this BEFORE creating a new initrd.
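The effect of the two filter expressions can be emulated with grep over a few sample device names (the names are taken from the listings in this document): only by-id paths pass the accept rule, everything else is rejected.

```shell
# Only names matching the accept rule a|/dev/.*/by-id/.*| survive;
# all remaining names are dropped by the reject rule r/.*/
printf '%s\n' \
  /dev/sdau1 \
  /dev/disk/by-path/pci-0000:08:0b.0-fc-0x500604843978c00f:0x0006000000000000 \
  /dev/disk/by-id/scsi-36006048000028350131253594d303145-part1 \
| grep '/dev/.*/by-id/.*'
```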
- Create an LVM-enabled initrd
The above grub error might prevent the creation of an initrd.
Hence you need to create it manually, including LVM support.
In order to do so, execute the command:
mkinitrd -f "lvm2"
The output should look similar to this:
#############################################################
149:/ # mkinitrd -f "lvm2"

Root device:    /dev/rootvg/lvroot (mounted on / as reiserfs)
Module list:    qla4xxx amd74xx cciss qla2xxx processor thermal fan reiserfs dm_mod edd dm-mod dm-snapshot (xennet xenblk dm-mod dm-snapshot)

Kernel image:   /boot/vmlinuz-2.6.16.46-0.12-smp
Initrd image:   /boot/initrd-2.6.16.46-0.12-smp
Shared libs:    lib64/ld-2.4.so lib64/libacl.so.1.1.0 lib64/libattr.so.1.1.0 lib64/libc-2.4.so lib64/libdevmapper.so.1.02 lib64/libdl-2.4.so lib64/libhistory.so.5.1 lib64/libncurses.so.5.5 lib64/libpthread-2.4.so lib64/libreadline.so.5.1 lib64/librt-2.4.so lib64/libuuid.so.1.2 lib64/libnss_files-2.4.so lib64/libnss_files.so.2 lib64/libgcc_s.so.1
Driver modules: ide-core ide-disk scsi_mod sd_mod qla4xxx amd74xx cciss scsi_transport_fc firmware_class qla2xxx processor thermal fan dm-mod edd dm-snapshot dm-crypt dm-zero dm-mirror
Filesystem modules:     reiserfs
Including:      initramfs dm/lvm2 fsck.reiserfs
Bootsplash:     SuSE-SLES (1024x768)
#############################################################
It is also possible to already include multipath support in this stage. In order to do so, just create an initrd as described in 8.3.
You can skip this step then during post installation.
During testing, there was one machine where evms was included in the initrd even though it was never specified as a feature.
In this case the initial boot would fail.
To prevent this, it is best to move evms out of the way by executing:
mv /sbin/evms /sbin/evms.moved
before executing the mkinitrd command.
- Finish installation
After this, you can leave the chroot with the command: exit
Then, switch back to the installer screen and confirm the grub error message.
Click on “No” when asked to retry the bootloader installation.
The machine will reboot.
After the reboot you can follow the usual installation process until the installation is finished.
++++++++++++++++++++++Post Installation+++++++++++++++++++++
- Set up multipathing
8.1 Activate the multipath init scripts:
insserv boot.multipath
insserv multipathd
8.2 Set up the /boot mount point in /etc/fstab
Edit /etc/fstab and change the /boot mount from the physical device to the multipath device, e.g. change:
/dev/sdau1 /boot ext2 acl,user_xattr 1 2
into:
/dev/disk/by-id/scsi-36006048000028350131253594d303145-part1 /boot ext2 acl,user_xattr 1 2
8.3 Create a multipath enabled initrd
Since the multipath device must be available before the root device is mounted, the multipath configuration must be included in the initrd.
In order to do so, you need to create a new initrd using the command:
mkinitrd -f "lvm2 mpath"
- Adapt the multipath configuration file to your SAN.
A number of storage arrays are already supported automatically by the multipath tools.
Check whether you can find your storage in the output of:
multipath -t
If your storage is not supported automatically by the multipath tools, you need to create a configuration for it.
In order to do so, copy the template from the package documentation, e.g.:
cp /usr/share/doc/packages/multipath-tools/multipath.conf.synthetic /etc/multipath.conf
and edit it to your needs.
Most of the time you only need to add a device section.
The details of the setup depend highly on the hardware, so no general recommendation can be given.
Please refer to your hardware vendor's documentation to find them out.
- Reboot the machine
After the reboot, your system is set up with all partitions on multipath.
To update the bootloader or to create a new LVM setup on a LUN, you need to deactivate multipath.
You can do this by rebooting the machine and adding the boot option:
multipath=off
to the kernel command line.
3.3. Multipath setup via autoyast installation
It is also possible to set up multipathing via an autoyast installation. This, however, requires some advanced autoyast scripting and customization.
Many thanks to the people from PSA Peugeot Citroen Bessancourt for providing their sophisticated setup as an example.
The setup includes the installation on evms logical volumes that are created on top of the multipath devices.
Please note that the given example is specific to the test machine used. You need to adapt some settings to the specifics of your machine before starting to experiment with it.
In detail, these are:
- The name of the xml file named after the machine's MAC address
- The name of the xml file named after the machine's DNS domain
- The name of the xml file named after the machine type
Inside the xml files named after the machine's MAC address and DNS domain, you need to adapt the location of the custom user scripts, e.g. you need to modify the tag:
<location>nfs://installserver.lab.suse.de/MULTIPATH/postinstall.script</location>
and the network section to match your machines properties.
Finally, you need to adapt the autoyast parameter when booting the installation system to point to the location where your autoyast xml files are stored, e.g. you need to modify:
autoyast=nfs://169.144.177.45/MULTIPATH/
You might also want to change the root password in lab.suse.de.xml by modifying the tag:
<user_password>$2a$10$IQjH8vPMLnRDtDr/LrZGO./ZqWkFgGbW5vPfHbNLVPcED08Mq36SG</user_password>
below the user configuration for the root user.
In the example, the password is: “secret”.
To change this, you need to enter a hash for the new password.
You can do this by setting the password on another SLES machine and copying the password hash from /etc/shadow; e.g. for root, take the hash between the first two colons of the row:
root:$2a$10$MwtqbmBzu/JFS1RWlrjOlOCvFx3.dx6kLUXxH4zPvtqsBKtMZlpJu:14273::::::
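The hash field can be cut out of such a shadow row with awk; the sketch below uses the sample row from above (on a real system you would run this against /etc/shadow as root):

```shell
# The password hash is the second colon-separated field of a shadow row
echo 'root:$2a$10$MwtqbmBzu/JFS1RWlrjOlOCvFx3.dx6kLUXxH4zPvtqsBKtMZlpJu:14273::::::' \
| awk -F: '{ print $2 }'
```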
During testing, the local hard disks sometimes confused the autoyast installation. Hence it is recommended to disable the local hard disks before starting the installation.
Just like with the manual installation, the autoyast workflow aims to install the system (or in this case the evms containers) on one path of the multipath device. After the machine reboots, the system is automatically visible on the multipath device.
The other paths of the multipath device, which are not used for installation, are deleted beforehand. This is necessary to avoid confusing the installer.
The setup is based on an autoyast rules file that selects the system setup from a set of hardware and network properties. These are, in detail:
- MAC address
- Architecture
- Machine's product name
- Domain name
Depending on these properties an autoyast configuration is assembled from several xml files, each one specifying some aspects of the system to be installed.
The file 00118513ab55.xml is named after the MAC address of the machine's network card. It contains setup specific to this particular machine, e.g. partitioning, network setup, and a call to the custom user script executed in the second stage of the installation.
The file ProLiant_DL585_G1.xml contains setup specific to this machine type, e.g. installation and configuration of hp servermanagement packages.
The file lab.suse.de.xml is named after the DNS domain the machine is in and contains the general system configuration, e.g. the bootloader, network services, the firewall and other configuration files in /etc.
In addition, it contains a call to the custom user scripts executed in the first stage of the installation and the routine for the listbox that selects the installation LUN.
The file software-x86_64.xml contains the Software Selection that should be installed on this machine.
Some additional configuration is done in three custom user scripts.
There is one pre-installation script (preinstall.script) that is executed at the beginning of the YaST installation process, i.e. after the hardware detection and before the harddisk partitioning.
This script is responsible for deleting the redundant paths.
The second script (chroot.script) is executed after the package installation and before the system reboots. Before this script is executed, the installer chroots into the partition where the system has been installed.
This script makes the /boot partition available as an evms device. In addition, it makes some modifications to the bootloader installation.
The third script (postinstall.script) is executed after the system has rebooted. This script is responsible for adapting the evms and lvm configuration files to use the multipath devices.
For more information, please refer to the description above the particular steps, inside the scripts.