Meltdown and Spectre Performance

Ever since Meltdown and Spectre were disclosed, quite a few customers have been asking us for guidance on the performance impact of the mitigating patches for these hardware vulnerabilities: Do these patches impact performance? If so, by how much?

The first question is easily answered: all of these mitigations impact performance. For some workloads the impact is barely measurable, but for quite a few others it is clearly noticeable.

Spectres, Delusions and Benchmarks

Quantifying the impact is harder. There has been a lot of buzz, of course. Online media have published the results of some more or less ad-hoc benchmarking. End users of various operating systems and cloud services have shared anecdotal evidence of regressions, ranging from very moderate all the way to absurdly high, such as an idle VM at some cloud provider jumping to 60% CPU utilization after the mitigation was rolled out. Some of these regressions have to be accepted as inevitable side effects of the patches, while others are probably just bugs. And there are also probably quite a few people who are not seeing any noticeable impact at all (but statements like “I see no problem” tend not to get retweeted as much as catastrophic news :-).

In our own testing, we have seen benchmarks that regressed just a little, and we’ve seen some that regressed by 15% or even more. Because different benchmarks exercise different areas of an operating system, it is not surprising at all that they show different degrees of regression. What may be more surprising, though, is that most benchmarks showed a high variance in the relative impact across different CPU vendors and models, sometimes by a factor of 2 or more – so a benchmark that would show a 5% impact on one platform would show a 10% impact or more on another.

This is why we believe that it would be misleading if we provided customers with a table stating that workload X will suffer a regression of N percent. Given that microcode changes play a major part in the mitigation, the performance impact you experience will very much depend on what sort of hardware you have. And until there is a “final” set of microcode updates from all vendors, any performance measurements must be considered preliminary anyway.

At the end of the day, you will have to benchmark your workloads to find out how they are impacted.

Instead, let's talk a little bit about what causes the performance hit, to help you understand how different classes of applications are impacted differently, and about the work SUSE is doing to recover some of the lost performance (mitigating the mitigation, so to speak).

Meltdown

Meltdown is probably the most straightforward of these vulnerabilities, and the most severe. It can be exploited, among other ways, by a rogue user space process to “read” kernel memory locations. It affects Intel CPUs, some ARM licensees, and, to some degree, IBM POWER. The only way to mitigate it is to change the operating system to “help” the CPU forget its kernel address mapping whenever it switches from kernel to user space, for example when returning from a system call.

The patch set that does this on x86-64 is called PTI (Page Table Isolation). It adds (a lot of) code to the system call entry and exit paths, plus a few other places, to fully isolate the kernel page tables from user space access. This involves removing all address mapping information about kernel memory from the CPU and flushing the address translation cache (a TLB flush). This adds a more or less constant cost to each system call, interrupt, etc. The operation itself is not hugely expensive, but it is not fast either.

Now, obviously, an application that spends a lot of time in user space doing numerical computation, and makes comparatively few system calls, will hardly be impacted by the moderate overhead added to the system call path. On the other hand, a process that does nothing but execute lots of very fast system calls in a tight loop will be impacted very much, because a million times almost no delay still adds up eventually. It is with the latter type of benchmark that we have seen the highest impact of our Meltdown patches.
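To make that concrete, here is a minimal sketch of such a “worst case” microbenchmark. This is purely illustrative and not one of the benchmarks we used: it does nothing but issue a very cheap system call in a tight loop, so almost all of its cost is kernel entry and exit. Comparing its output on a patched and an unpatched kernel (or, on kernels that support it, with the mitigation disabled for testing via the “nopti” or “pti=off” boot parameters) gives a feel for the per-syscall overhead.

/* Illustrative microbenchmark: issue a very cheap system call in a
 * tight loop and report the average cost per call. */
#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    const long iterations = 10 * 1000 * 1000;
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (long i = 0; i < iterations; i++)
        syscall(SYS_getpid);        /* a real kernel entry and exit every time */
    clock_gettime(CLOCK_MONOTONIC, &end);

    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("%.1f ns per getpid() system call\n", elapsed * 1e9 / iterations);
    return 0;
}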

Our patch set does include a slight optimization on Intel CPUs that relies on a somewhat recent CPU feature called “Process Context ID” (PCID). This ID can be used to speed up the page table isolation a bit: when returning from a system call, we still have to remove all mapping information about kernel memory, but we can avoid the other part of the operation, the TLB flush. In several of the benchmarks we ran, using the PCID feature cut the performance impact by up to half.

This PTI optimization is enabled by default when we find that the CPU supports it.
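If you are curious whether your CPU advertises the feature, the “pcid” (and “invpcid”) flags in /proc/cpuinfo tell you, or you can ask the CPU directly. The small program below is a sketch using GCC's cpuid.h helpers; the bit positions follow Intel's CPUID documentation (leaf 1, ECX bit 17 for PCID; leaf 7, EBX bit 10 for INVPCID).

/* Check whether the CPU advertises PCID (and INVPCID) support. */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx, max_leaf;

    /* Leaf 0 reports the highest supported CPUID leaf in EAX. */
    if (!__get_cpuid(0, &max_leaf, &ebx, &ecx, &edx)) {
        printf("CPUID not available\n");
        return 1;
    }

    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        printf("PCID:    %s\n", (ecx & (1u << 17)) ? "yes" : "no");

    if (max_leaf >= 7) {
        __cpuid_count(7, 0, eax, ebx, ecx, edx);
        printf("INVPCID: %s\n", (ebx & (1u << 10)) ? "yes" : "no");
    }
    return 0;
}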

Spectre

This is where it starts to get very interesting. Spectre is not so much a single vulnerability as several rolled into one: there are two “variants” that employ two different techniques. In some form, Spectre affects all current CPUs on which SUSE Linux products are supported.

These vulnerabilities may be exploited by a user space process to attack the kernel (or a hypervisor), but they can also be used by one user process to attack another, or by one guest to attack another guest. The latter type of attack is harder, but possible.

As a rule of thumb, full Spectre mitigation is not possible without microcode updates. For some of its aspects, however, a software-based mitigation is possible; these software-based mitigations are often less heavy-handed and consequently tend to have a less dramatic impact on performance.

Spectre 1 exploits rely on the ability to have the CPU (or a hypervisor) speculatively access memory through pointers that can be controlled by the attacker. To a large degree, the mitigation employed at the kernel and hypervisor level involves finding code that accesses user-controlled pointers in an exploitable way, and protecting it with speculation barriers that keep the CPU from speculating past them, which is a costly operation. A primary example is the extended Berkeley Packet Filter (eBPF) facility, which suffered a significant hit in the first round of updates.
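To illustrate the kind of code pattern involved, here is a simplified sketch in C of the classic Spectre 1 gadget and of a speculation barrier protecting it. This is not actual kernel code; the real fixes use kernel-internal primitives and are applied case by case, and the LFENCE instruction shown here is x86-specific.

#include <stdio.h>

#define ARRAY_SIZE 16

static unsigned char array[ARRAY_SIZE];
static unsigned char probe[256 * 64];

/* A speculation barrier: LFENCE keeps the CPU from executing the
 * instructions that follow until the preceding bounds check has
 * actually been resolved. */
static inline void speculation_barrier(void)
{
    __asm__ __volatile__("lfence" ::: "memory");
}

/* The classic Spectre variant 1 pattern: an attacker-controlled index,
 * a bounds check, and a dependent memory access that leaves a trace in
 * the cache if it runs speculatively with an out-of-bounds index. */
static unsigned char read_element(unsigned long untrusted_index)
{
    if (untrusted_index < ARRAY_SIZE) {
        speculation_barrier();   /* without this, the loads below may run
                                    speculatively before the check resolves */
        return probe[array[untrusted_index] * 64];
    }
    return 0;
}

int main(void)
{
    printf("%u\n", read_element(3));
    return 0;
}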

SUSE kernel developers are working on improving some of these changes to soften the impact where possible. For example, we expect eBPF performance to come back to almost its previous level with the next round of kernel updates we are preparing, by replacing the (hardware) barrier with a software-based mitigation.

Spectre 2 exploits rely on the ability to actively confuse the branch target prediction inside the CPU – essentially a cache that is used to predict where indirect calls will end up jumping to. By poisoning this cache, an attacker can cause the CPU to speculatively execute code at an address controlled by an attacker, with arguments controlled by the attacker.

Obviously, this vulnerability cannot be addressed simply by finding and patching individual bits of code that can be abused, as we did for Spectre 1, because the number of exploitable code patterns is virtually limitless.

The mitigation currently present in our update kernels is based on three new CPU features (introduced by the recent microcode updates), called IBRS, STIBP and IBPB. These allow the kernel to restrict branch prediction and/or flush the branch predictor’s state. This basically has to be done on every transition between different trust domains (user space to kernel, or one user process to another, or from one VM to another). All of this incurs another quite significant cost.

IBRS mostly impacts the performance of kernel code; IBPB, however, can also have an impact on application performance, because everything the processor has learned about an application's branching patterns is flushed on a context switch.
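On kernels that carry the corresponding reporting patches, you can ask the kernel itself which mitigations are in effect by reading the files under /sys/devices/system/cpu/vulnerabilities/. The little program below simply prints those files; on older kernels they may not exist yet, so treat their presence as something to verify on your particular system.

/* Print the kernel's own report of its Meltdown/Spectre mitigation status. */
#include <stdio.h>

int main(void)
{
    const char *files[] = {
        "/sys/devices/system/cpu/vulnerabilities/meltdown",
        "/sys/devices/system/cpu/vulnerabilities/spectre_v1",
        "/sys/devices/system/cpu/vulnerabilities/spectre_v2",
    };

    for (unsigned int i = 0; i < sizeof(files) / sizeof(files[0]); i++) {
        char line[256];
        FILE *f = fopen(files[i], "r");

        if (!f) {
            printf("%s: not available\n", files[i]);
            continue;
        }
        if (fgets(line, sizeof(line), f))
            printf("%s: %s", files[i], line);   /* line already ends in '\n' */
        fclose(f);
    }
    return 0;
}

Typical output lines read something like “Vulnerable” or “Mitigation: PTI”, depending on your kernel, microcode and CPU.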

As an alternative to IBRS, we are currently working on backporting an approach called “retpolines”, a name that is short for “return trampolines”. With retpolines, the compiler is modified to emit a different machine code sequence for indirect calls: instead of a “call” instruction, the generated code performs a slightly convoluted “return” to the target address, for which CPUs prior to Skylake do not perform the exploitable kind of speculative execution.
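To see what kind of code is affected, consider an ordinary indirect call through a function pointer. With a conventional build, the compiler emits an indirect “call” instruction, whose target prediction is exactly what Spectre 2 poisons; with retpoline support enabled in the compiler (for example via GCC's -mindirect-branch=thunk option, where available), the very same call site is emitted as a return trampoline instead. The snippet below only shows an affected call site, not the trampoline itself.

/* An ordinary indirect call through a function pointer. Compiled normally,
 * this becomes an indirect "call" instruction whose target prediction can
 * be poisoned (Spectre 2). Compiled with retpoline support in the compiler,
 * the same call site is emitted as a return trampoline instead. */
#include <stdio.h>

static int add(int a, int b) { return a + b; }
static int sub(int a, int b) { return a - b; }

int main(void)
{
    /* The target is only known at run time, so the compiler has to emit
     * an indirect branch here. */
    int (*op)(int, int) = (getchar() == '+') ? add : sub;

    printf("%d\n", op(40, 2));
    return 0;
}

Nothing in the C source changes; only the machine code the compiler generates for the call through op is different.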

Obviously, replacing all indirect call instructions with retpolines will also result in a slowdown of the code – however, initial testing shows that the performance impact is noticeably less than that of IBRS.

Retpolines do not cure each and every aspect of Spectre 2, nor do they help on all CPUs: they do help on Intel CPUs prior to Skylake, as well as on AMD processors. But even on those CPUs, microcode updates will still be required to take advantage of the mitigation offered by IBPB.

 

Future Work

The struggle to get this new class of vulnerabilities under control (without damaging everything else in the process) is not over. If this were a James Bond movie, we would be in the middle of a scene where the hero battles a squad of thugs on top of a high-speed train. New information about the details of these vulnerabilities (and specifically about Spectre) keeps emerging; microcode updates are undergoing changes, and discussions are ongoing in the kernel and other communities about the proper response to them.

As we learn more, we expect that our mitigating patches will evolve. In its first response, the Linux community has focused on creating a defense against these vulnerabilities wherever possible. Going forward, we will continue to broaden these defenses, but we will also start to fine-tune our response, covering questions of functionality (such as ease of use) and performance.

If you’re looking for information on the updates available for SUSE Linux Enterprise, please refer to our Support documents about Meltdown and Spectre mitigation regarding kernel and microcode as well as KVM and XEN.

Thanks

Many thanks to Vojtěch Pavlík and Jiří Kosina, and everyone else in SUSE Engineering without whom this blog posting would not have seen the light of day.
