Tuning IPoIB for SUSE Linux Enterprise Server 12

As a serious technology geek, I have a rather significant home lab environment.  While the environment is sizable, I'm not exactly oozing extra cash, so I looked for alternatives for high-speed networking for my SUSE Enterprise Storage setup.  What I ended up with is a cluster furnished with dual-port QDR InfiniBand cards and a QDR InfiniBand switch.

To support InfiniBand hardware in SUSE Linux Enterprise Server, all you need to do is add the OFED pattern from the installer or the software management interface in YaST.  If you are a minimalist, you can just zypper in rdma, as that provided everything I needed to bring my ConnectX-2 adapters online.
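
If you would rather script the install than click through YaST, the zypper equivalent looks something like the sketch below.  The pattern name is from memory, so verify it on your system first.

# Verify the exact pattern name, then install the full InfiniBand/OFED stack:
zypper search -t pattern | grep -i ofed
zypper install -t pattern ofed

# Or the minimalist route that worked for my ConnectX-2 cards:
zypper install rdma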

After that, I added the interfaces under network devices (type=InfiniBand, mode=connected, MTU=64k), enabled the subnet manager on my switch, and was off to the races.  Or so I thought.
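
For reference, the resulting definition in /etc/sysconfig/network/ifcfg-ib0 looks roughly like the sketch below.  The address is a placeholder and the variable names are from memory, so compare against what YaST actually writes for you.

STARTMODE='auto'
BOOTPROTO='static'
IPADDR='192.168.100.10/24'    # placeholder address
MTU='65520'                   # the "64k" connected-mode MTU
IPOIB_MODE='connected'
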
After standing everything up, like any good geek, the first thing I did was run some performance tests.  Unfortunately, instead of seeing anything approaching 40Gbps (or at least better than 20Gbps, as threads on various message boards led me to expect), I was maxing out around 12-13Gbps.  This was measured using iperf from software.opensuse.org.
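
For anyone who wants to reproduce the measurement, an iperf run along these lines is a reasonable starting point (the IP is a placeholder for the IPoIB address, and the exact options are just an example, not necessarily what I used):

# On one node:
iperf -s

# On the other, with a few parallel streams:
iperf -c 192.168.100.10 -P 4 -t 30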

This sent me looking for every performance tweak I could find.  I'm not sure I found them all, but here are a few that I did.

The first was to get into the server's BIOS and change the settings with a heavy bias towards performance instead of power friendliness.  This alone accounted for a 4Gbps bump in performance.  The settings included the CPU power profile, the turbo boost profile, and so on.
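
I made these changes in the firmware, but you can sanity-check the CPU side of it from the OS as well, assuming the cpupower package is installed:

cpupower frequency-info                # show the current driver, governor and limits
cpupower frequency-set -g performance  # force the performance governor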

The second step was to start making software adjustments.  Mellanox provides a pretty good guide for tuning the stack that helped quite a bit.  Here are the adjustments I made.

 

echo "net.ipv4.tcp_timestamps=0" >> /etc/sysctl.conf
echo "net.ipv4.tcp_sack=1" >> /etc/sysctl.conf
echo "net.core.netdev_max_backlog=250000" >> /etc/sysctl.conf
echo "net.core.rmem_max=4194304" >> /etc/sysctl.conf
echo "net.core.wmem_max=4194304" >> /etc/sysctl.conf
echo "net.core.rmem_default=4194304" >> /etc/sysctl.conf
echo "net.core.wmem_default=4194304" >> /etc/sysctl.conf
echo "net.core.optmem_max=4194304" >> /etc/sysctl.conf
echo "net.ipv4.tcp_rmem=4096 87380 4194304" >> /etc/sysctl.conf
echo "net.ipv4.tcp_wmem=4096 65536 4194304" >> /etc/sysctl.conf
echo "net.ipv4.tcp_low_latency=1" >> /etc/sysctl.conf
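
These land in /etc/sysctl.conf so they survive a reboot; to apply them to the running system right away, reload the file:

sysctl -p /etc/sysctl.conf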

These changes allowed the throughput to exceed 20Gbps.  There is one other setting I made that pushed the peak slightly over 23Gbps.

ibfast.sh
#!/bin/bash
# Pin receive packet steering (RPS) for ib0's receive queue to the CPUs
# local to the HCA, as reported by the device itself.
LOCAL_CPUS=$(cat /sys/class/net/ib0/device/local_cpus)
echo "$LOCAL_CPUS" > /sys/class/net/ib0/queues/rx-0/rps_cpus

As you can see, I have the last one in a script for the time being.  This is because I am still looking for other tunings to see if I can push the performance even higher, and to understand what effects they have on each other.  Once all is said and done, these settings will be added to my standard options for my IB-connected systems.
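
When I do fold it into my standard build, one way to make the RPS script persistent would be a small systemd oneshot unit along these lines (the unit name and script path here are just examples):

# /etc/systemd/system/ibfast.service
[Unit]
Description=Pin ib0 RPS to the CPUs local to the HCA
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/ibfast.sh

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable ibfast.service so it runs at each boot.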

If you have InfiniBand tuning experience and would like to add it to the comment thread, please do so.
