Increasing individual thread throughput using CFQ on SLE 12

This document (7015930) is provided subject to the disclaimer at the end of this document.

Environment

SUSE Linux Enterprise Server 12

Situation

SLE 12 introduced a number of I/O improvements that enhance thread fairness, make I/O behavior more predictable and minimize I/O stalls. The bulk of these changes are transparent to the user, but there are exceptions.
This document discusses a change to the default value of the low_latency parameter in the CFQ I/O scheduler.
Users of the noop and deadline schedulers are not affected by this TID.
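
To check which I/O scheduler a device is currently using, read it from sysfs. The example below assumes the device is /dev/sda; the scheduler shown in square brackets is the active one, and the list of available schedulers may differ between systems:

$ cat /sys/block/sda/queue/scheduler
noop deadline [cfq]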

Resolution

Broadly speaking, the CFQ I/O scheduler is a fair scheduler that divides time between the processes executing I/O. In SLE 12 the low_latency tuning parameter is enabled by default, which means the CFQ I/O scheduler may dynamically adjust the length of each time slice in order to meet a second tuning parameter called target_latency. Time slices are recomputed so that target_latency is met and every process gets fair access to storage within a bounded length of time.
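
Both parameters can be inspected at runtime via sysfs. The commands below assume the device is /dev/sda, and the default target_latency is typically 300 (milliseconds):

$ cat /sys/block/sda/queue/iosched/low_latency
1
$ cat /sys/block/sda/queue/iosched/target_latency
300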

This is usually the preferred behavior in a server scenario where processes execute I/O as part of transactions, because it makes the time to complete each transaction predictable. The exception is when the performance metric of interest is the peak performance of a single process while there is I/O contention. Another exception is a workload that must complete as quickly as possible while there are multiple sources of I/O. In the latter case, unfair treatment by the I/O scheduler may complete the transactions faster, as processes take their full time slice, exit quickly and reduce overall contention. In cases like these, SLE11-SP3 will appear to perform better.

To address this, there are two options: increase target_latency or disable low_latency. As with all tuning parameters, it is important to verify that your workload behaves as expected before and after the modification. Take careful note of whether your workload depends on the peak performance of individual processes or scales better with fairness. Note also that performance depends on the underlying storage, so the correct tuning option for one installation may not be correct for another.
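
For example, target_latency can be raised at runtime as shown below. The value 600 is purely illustrative and the appropriate value depends on the workload and storage; disabling low_latency is demonstrated later in this document. Values written to sysfs in this way do not persist across a reboot:

$ echo 600 > /sys/block/sda/queue/iosched/target_latency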

Below is a deliberately crude example that does not control when each process starts its I/O, but is simple enough to demonstrate the point. In both cases, 32 processes write a small amount of data to disk in parallel.
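
The dd-test.sh script itself is not included in this TID. A minimal sketch that matches the behaviour described above (32 parallel dd processes, each writing 10 MB; the file names and target directory are chosen purely for illustration, and the original test may have flushed the writes to storage differently) could look like this:

#!/bin/bash
# Start 32 dd writers in parallel, each writing 10 MB directly to storage,
# keep only the per-process summary line, then wait for all of them to finish.
for i in $(seq 1 32); do
    dd if=/dev/zero of=/tmp/ddtest-$i bs=1M count=10 oflag=direct 2>&1 | grep copied &
done
wait
rm -f /tmp/ddtest-*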

$ echo 1 > /sys/block/sda/queue/iosched/low_latency
$ time ./dd-test.sh
10485760 bytes (10 MB) copied, 2.62464 s, 4.0 MB/s
10485760 bytes (10 MB) copied, 3.29624 s, 3.2 MB/s
10485760 bytes (10 MB) copied, 3.56341 s, 2.9 MB/s
10485760 bytes (10 MB) copied, 3.56908 s, 2.9 MB/s
10485760 bytes (10 MB) copied, 3.53043 s, 3.0 MB/s
10485760 bytes (10 MB) copied, 3.57511 s, 2.9 MB/s
10485760 bytes (10 MB) copied, 3.53672 s, 3.0 MB/s
10485760 bytes (10 MB) copied, 3.5433 s, 3.0 MB/s
10485760 bytes (10 MB) copied, 3.65474 s, 2.9 MB/s
10485760 bytes (10 MB) copied, 3.63694 s, 2.9 MB/s
10485760 bytes (10 MB) copied, 3.90122 s, 2.7 MB/s
10485760 bytes (10 MB) copied, 3.88507 s, 2.7 MB/s
10485760 bytes (10 MB) copied, 3.86135 s, 2.7 MB/s
10485760 bytes (10 MB) copied, 3.84553 s, 2.7 MB/s
10485760 bytes (10 MB) copied, 3.88871 s, 2.7 MB/s
10485760 bytes (10 MB) copied, 3.94943 s, 2.7 MB/s
10485760 bytes (10 MB) copied, 4.12731 s, 2.5 MB/s
10485760 bytes (10 MB) copied, 4.15106 s, 2.5 MB/s
10485760 bytes (10 MB) copied, 4.21601 s, 2.5 MB/s
10485760 bytes (10 MB) copied, 4.35004 s, 2.4 MB/s
10485760 bytes (10 MB) copied, 4.33387 s, 2.4 MB/s
10485760 bytes (10 MB) copied, 4.55434 s, 2.3 MB/s
10485760 bytes (10 MB) copied, 4.52283 s, 2.3 MB/s
10485760 bytes (10 MB) copied, 4.52682 s, 2.3 MB/s
10485760 bytes (10 MB) copied, 4.56176 s, 2.3 MB/s
10485760 bytes (10 MB) copied, 4.62727 s, 2.3 MB/s
10485760 bytes (10 MB) copied, 4.78958 s, 2.2 MB/s
10485760 bytes (10 MB) copied, 4.79772 s, 2.2 MB/s
10485760 bytes (10 MB) copied, 4.78004 s, 2.2 MB/s
10485760 bytes (10 MB) copied, 4.77994 s, 2.2 MB/s
10485760 bytes (10 MB) copied, 4.86114 s, 2.2 MB/s
10485760 bytes (10 MB) copied, 4.88062 s, 2.1 MB/s

real    0m4.978s
user    0m0.112s
sys     0m1.544s


These are the SLE 12 default settings; note that each process completes in a similar time. This is the CFQ I/O scheduler meeting its target_latency and giving each process fair access to storage. The earliest processes completed slightly faster because the processes did not all start at exactly the same time; this could be controlled for, but not in a simple example.

This is what happens when low_latency is disabled.

$ echo 0 > /sys/block/sda/queue/iosched/low_latency
$ time ./dd-test.sh
10485760 bytes (10 MB) copied, 0.813519 s, 12.9 MB/s
10485760 bytes (10 MB) copied, 0.788106 s, 13.3 MB/s
10485760 bytes (10 MB) copied, 0.800404 s, 13.1 MB/s
10485760 bytes (10 MB) copied, 0.816398 s, 12.8 MB/s
10485760 bytes (10 MB) copied, 0.959087 s, 10.9 MB/s
10485760 bytes (10 MB) copied, 1.09563 s, 9.6 MB/s
10485760 bytes (10 MB) copied, 1.18716 s, 8.8 MB/s
10485760 bytes (10 MB) copied, 1.27661 s, 8.2 MB/s
10485760 bytes (10 MB) copied, 1.46312 s, 7.2 MB/s
10485760 bytes (10 MB) copied, 1.55489 s, 6.7 MB/s
10485760 bytes (10 MB) copied, 1.64277 s, 6.4 MB/s
10485760 bytes (10 MB) copied, 1.78196 s, 5.9 MB/s
10485760 bytes (10 MB) copied, 1.87496 s, 5.6 MB/s
10485760 bytes (10 MB) copied, 1.9461 s, 5.4 MB/s
10485760 bytes (10 MB) copied, 2.08351 s, 5.0 MB/s
10485760 bytes (10 MB) copied, 2.28003 s, 4.6 MB/s
10485760 bytes (10 MB) copied, 2.42979 s, 4.3 MB/s
10485760 bytes (10 MB) copied, 2.54564 s, 4.1 MB/s
10485760 bytes (10 MB) copied, 2.6411 s, 4.0 MB/s
10485760 bytes (10 MB) copied, 2.75171 s, 3.8 MB/s
10485760 bytes (10 MB) copied, 2.86162 s, 3.7 MB/s
10485760 bytes (10 MB) copied, 2.98453 s, 3.5 MB/s
10485760 bytes (10 MB) copied, 3.13723 s, 3.3 MB/s
10485760 bytes (10 MB) copied, 3.36399 s, 3.1 MB/s
10485760 bytes (10 MB) copied, 3.60018 s, 2.9 MB/s
10485760 bytes (10 MB) copied, 3.58151 s, 2.9 MB/s
10485760 bytes (10 MB) copied, 3.67385 s, 2.9 MB/s
10485760 bytes (10 MB) copied, 3.69471 s, 2.8 MB/s
10485760 bytes (10 MB) copied, 3.66658 s, 2.9 MB/s
10485760 bytes (10 MB) copied, 3.81495 s, 2.7 MB/s
10485760 bytes (10 MB) copied, 4.10172 s, 2.6 MB/s
10485760 bytes (10 MB) copied, 4.0966 s, 2.6 MB/s

real    0m3.505s
user    0m0.160s
sys     0m1.516s


Note that the completion times are now spread much more widely because processes are not getting fair access. Some processes complete faster and exit, allowing the total workload to finish sooner, and some processes measure higher apparent I/O performance. It is also important to note that this example may not behave the same way on all systems, as the results depend on the resources of the machine and the underlying storage. Unfortunately, describing a representative test that works regardless of hardware is outside the scope of this TID.

It is important to emphasise that neither tuning option is inherently better than the other; each is the best choice in different circumstances, so understand the requirements of your workload and tune accordingly.

Disclaimer

This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.

  • Document ID: 7015930
  • Creation Date: 28-Nov-2014
  • Modified Date: 03-Mar-2020
    • SUSE Linux Enterprise Server
