
iSCSI Gateway performance tuning for VMware environments.

This document (7023053) is provided subject to the disclaimer at the end of this document.

Environment

SUSE Enterprise Storage 5
VMware ESXi


Situation

Tuning considerations to improve iSCSI performance in VMware ESXi environments.

Resolution

A. Disable SCSI Reservation support

1. On the VMware server, make sure only ATS locking is enabled on the relevant volume(s). To verify the current settings, run:

:~] esxcli storage vmfs lockmode list
Volume Name     UUID                                 Type    Locking Mode  ATS Compatible  ATS Upgrade Modes  ATS Incompatibility Reason
--------------  -----------------------------------  ------  ------------  --------------  -----------------  ---------------------------
datastore_ssd   5ae70fae-4f913adc-77aa-ac1f6b6442f0  VMFS-6  ATS+SCSI               false  None               Device does not support ATS
datastore_spin  5ae723bd-a64fe2fc-bb36-ac1f6b6442f0  VMFS-6  ATS+SCSI               false  None               Device does not support ATS
SESRBD          5afd58c3-13f582a6-596a-ac1f6b6442f0  VMFS-6  ATS                     true  No upgrade needed

In the above example, only the SESRBD volume has ATS-only locking enabled. For more information on how to upgrade volumes to ATS-only locking, see the relevant VMware documentation; a sketch of the command involved follows below.
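
If a volume still shows "ATS+SCSI" and its backing device supports ATS, it can usually be switched to ATS-only locking along the following lines (a sketch; the volume label is a placeholder, and the VMware documentation describes the prerequisites and caveats):

:~] esxcli storage vmfs lockmode set --ats --volume-label=<volume_name>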

2. Since kernel release 4.4.175-94.79.1, the advanced iSCSI option "backstore_emulate_pr" can be used to disable SCSI reservations by setting it to "0". See the SES online documentation for more information.
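
As an illustrative sketch only, the attribute is passed through the lrbd configuration; the pool, host, and image names below are placeholders, and the exact placement of advanced attributes is described in the SES online documentation:

{
  "pools": [
    {
      "pool": "rbd",
      "gateways": [
        {
          "host": "igw1",
          "tpg": [
            {
              "image": "vmware01",
              "backstore_emulate_pr": "0"
            }
          ]
        }
      ]
    }
  ]
}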

3. To confirm that SCSI persistent reservations are properly disabled, a command like the following can be used on the iSCSI Gateway node:

# find /sys/kernel/config/target/core/ -name emulate_pr -ok cat '{}' ';'

For each of the mapped RBDs in turn, the command will prompt with output like the following, in the below example for "rbd0":

< cat ... /sys/kernel/config/target/core/rbd_0/rbd0/attrib/emulate_pr > ?

Entering "y" and pressing Enter will then print the current value; in the below case persistent reservations are disabled:

0
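
Alternatively, the values can be printed non-interactively together with their paths (a sketch using standard find and grep options):

# find /sys/kernel/config/target/core/ -name emulate_pr -exec grep -H . '{}' ';'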

B. VMware MaxIOSize

1. By default, ESXi restricts iSCSI I/O to a maximum of 128k; this should be increased to the maximum of 512k. To list the current setting, run the following from the VMware server:

:~] esxcli system settings advanced list -o /ISCSI/MaxIoSizeKB
   Path: /ISCSI/MaxIoSizeKB
   Type: integer
   Int Value: 512
   Default Int Value: 128
   Min Value: 128
   Max Value: 512
   String Value:
   Default String Value:
   Valid Characters:
   Description: Maximum Software iSCSI I/O size (in KB) (REQUIRES REBOOT!)


In this example the value has already been increased to the maximum of 512k ("Int Value: 512").

2. To change the value from the default of 128k, run from the VMware console:

:~] esxcli system settings advanced set -o /ISCSI/MaxIoSizeKB -i 512

As the setting's description states, the VMware host must be rebooted for the change to take effect.

C. RBD (RADOS Block Device) image object size

The default object size for RBD images is 4 MB. Since VMware limits iSCSI I/O to a maximum of 512k, images to be used with VMware should instead be created with an object size of 1 MB. Reducing the object size too far (e.g. matching the VMware maximum of 512k) results in a much higher number of RADOS objects per RBD image.

The object size can be set when the image is created, either via openATTIC by adjusting the "Object size" parameter, or on the command line using:

rbd create $pool_name/$image_name --size xxxxM/G/T --object-size 1M
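
For example, assuming a pool named "iscsi-images" and an image named "vmware01" (both placeholder names), a 1 TB image with 1 MB objects could be created and then verified with:

rbd create iscsi-images/vmware01 --size 1T --object-size 1M
rbd info iscsi-images/vmware01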

D. Use Round Robin (RR) multipath policy

By default, VMware uses the Most Recently Used (MRU) multipath policy. The RR policy, however, allows the initiator to fully utilize the maximum iSCSI session queue depth across all paths. To see the current MPIO settings, run on the VMware server:

:~] esxcli storage nmp device list
...
   Device Display Name: SUSE iSCSI Disk (naa.60014057ea3725b57a242008f1ee86c4)
   Storage Array Type: VMW_SATP_ALUA
   Storage Array Type Device Config: {implicit_support=on; explicit_support=on; explicit_allow=on; alua_followover=on; action_OnRetryErrors=on; {TPG_id=0,TPG_state=AO}}
   Path Selection Policy: VMW_PSP_RR
   Path Selection Policy Device Config: {policy=rr,iops=100,bytes=10485760,useANO=0; lastPathIndex=1: NumIOsPending=0,numBytesPending=0}
   Path Selection Policy Device Custom Config: iops=100
   Working Paths: vmhba64:C1:T0:L0, vmhba64:C0:T0:L0
   Is USB: false
...

In the above example the policy is already correctly set to RR: "Path Selection Policy: VMW_PSP_RR".
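
If a device is still set to MRU, it can be switched to RR with a command like the following (the naa identifier is taken from the listing above and will differ per device):

:~] esxcli storage nmp device set --device naa.60014057ea3725b57a242008f1ee86c4 --psp VMW_PSP_RR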

For more information on how to view or modify the VMware multipath policy settings, see VMware KB article 1017760.

E. Disable ATS for heartbeat I/O

VMware ESXi 5.5 Update 2 and later uses ATS by default for heartbeat I/O on VMFS 5 and later datastores. For details on how to view this behavior and configure VMware to use plain SCSI write operations instead, see VMware KB article 2113956.
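
As a sketch based on that KB article, reverting the VMFS 5 heartbeat to plain SCSI writes typically looks like the following:

:~] esxcli system settings advanced set -i 0 -o /VMFS3/UseATSForHBOnVMFS5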


F. Jumbo Frames

Make sure jumbo frames are enabled on all devices in the communication chain; this includes the Network Interface Card settings on the VMware server, all cluster nodes, and any switches in between.

To check or set this on SUSE Linux Enterprise Server, verify the "MTU=" line for the relevant network interfaces in "/etc/sysconfig/network/ifcfg-<dev>".
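
For example, with a jumbo frame MTU of 9000 on a hypothetical interface "eth1", the relevant line would look like:

# grep MTU /etc/sysconfig/network/ifcfg-eth1
MTU='9000'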

To verify jumbo frames are properly configured end to end, run a ping from the client to any one of the cluster nodes with fragmentation disabled and a payload of 8972 bytes (the 9000-byte MTU minus 28 bytes of IP and ICMP headers):

:~ # ping -M do -s 8972 $insert_destination_IP

G. CPU C-States

1. In the BIOS of all the storage nodes, ensure that the servers are configured for maximum performance rather than power efficiency.

2. On the SUSE Enterprise Storage servers, install the "tuned" package and configure it for the "throughput-performance" profile by taking the following steps:

NOTE: Since DeepSea version 0.8.6, tuned is installed and configured by default during stage 3, with the proper tuned profile applied to each node based on its assigned roles.
2a. Install tuned with: zypper in -y tuned
2b. Edit "/etc/tuned/active_profile" and add the line:

throughput-performance

2c. Enable and start the tuned service: systemctl enable tuned.service && systemctl start tuned.service
2d. Verify the profile is properly in use by running: tuned-adm active
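
The output of step 2d should resemble the following (a sketch; the exact wording may vary between tuned versions):

# tuned-adm active
Current active profile: throughput-performance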

Additional Information

It may also be useful to test whether increasing some of the iSCSI configuration settings below to 256 and/or 512 yields a performance improvement.

A. On the Targets (iSCSI GW nodes):

tpg_default_cmdsn_depth = 512 (Default 64)
backstore_hw_queue_depth = 512 (Default 128)
backstore_queue_depth = 512 (Default 128)
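
As a hedged illustration, the values currently in effect can be inspected through configfs on a gateway node; the paths below are illustrative and depend on the target IQN, TPG number, and backstore names:

# cat /sys/kernel/config/target/iscsi/iqn.*/tpgt_1/attrib/default_cmdsn_depth
# cat /sys/kernel/config/target/core/rbd_0/*/attrib/hw_queue_depth
# cat /sys/kernel/config/target/core/rbd_0/*/attrib/queue_depth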

For more information on the above target settings, see the online iSCSI GW Documentation.

B. On the initiators (VMware hosts):

MaxCommands = 512 (Default 128)

See the VMware documentation for details on how to change the setting.

Disclaimer

This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.

  • Document ID: 7023053
  • Creation Date: 06-Jun-2018
  • Modified Date: 03-Mar-2020
  • SUSE Enterprise Storage
