Meltdown / Spectre – a PM view
Now that some time has passed and we are through another round of patches and updates, here is a perspective on what handling the Meltdown and Spectre vulnerabilities looked like.
As of now, Meltdown and Spectre are mostly behind us. Well, maybe not from a chip manufacturer’s perspective, but they are from a SUSE update perspective. Times seem calm now, but it wasn’t that long ago that this was an ‘all-hands-on-deck’ activity. If you have been on vacation, disconnected, or just on a personal news embargo, you can look up the details of these noteworthy security issues here:
https://meltdownattack.com/ and https://www.suse.com/support/kb/doc/?id=7022512
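If you want to check where a given Linux system stands today, kernels that carry these mitigations (mainline 4.15 onward, plus distribution backports such as SUSE’s) report their status under /sys/devices/system/cpu/vulnerabilities/. Here is a minimal sketch that reads that interface; it assumes only that the running kernel is new enough, or patched enough, to expose it:

```python
#!/usr/bin/env python3
"""Print the kernel's reported Meltdown/Spectre mitigation status.

Assumes a Linux kernel that carries the mitigation patches and so
exposes /sys/devices/system/cpu/vulnerabilities/.
"""
from pathlib import Path

SYSFS_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def main() -> None:
    if not SYSFS_DIR.is_dir():
        print("No vulnerability status exposed; this kernel likely "
              "predates the mitigation patches.")
        return
    # Each file is named for a vulnerability (meltdown, spectre_v1,
    # spectre_v2, ...) and holds a one-line status string.
    for entry in sorted(SYSFS_DIR.iterdir()):
        status = entry.read_text().strip()
        print(f"{entry.name}: {status}")

if __name__ == "__main__":
    main()
```

On a patched system you should see entries such as meltdown reporting “Mitigation: PTI”; on an unpatched kernel the directory is simply absent.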
I should mention that I am a product manager at SUSE, and I work with products that we updated to mitigate these chip-level vulnerabilities.
This all started near the end of last year. The discovery, the initial sharing of information about the vulnerabilities, and the agreed-upon industry embargo date of January 9th, set to give vendors time to get patches ready, were all in place by the start of the year. Yet as early as January 3rd, six days ahead of that date, broad news coverage of the vulnerabilities started to appear. This generated a great deal of public anxiety and made the story a popular mainstream news topic. It also generated a good deal of discussion within the technology community, if not outright finger-pointing.
Early on, some claimed that others had broken the embargo and that this was why the news went public early, but that may not have mattered even if it happened that way. Regardless of who first pushed the news, one thing is apparent from the activity during the embargo period: it is very difficult to address a broad security vulnerability in code without it being noticed.
An engineer I work with put it this way: too many upstream commits were landing with too little discussion, and with too much impact to ignore. The takeaway: where were the flame wars? Because this is not how things typically happen upstream, the work attracted notice. A lot of notice. Enough to prompt close examination of the updates, which ultimately brought the underlying issue into view.
So here we were, six days shy of the embargo date, and the vulnerabilities were now known to the world. At SUSE, we reviewed the problem, made plans, and scheduled the work to mitigate the issues. The overarching need: get the issues addressed ASAP.
As a PM, one of my specific areas of focus is working with cloud service providers (CSPs). A major activity we do for them is building on-demand and ‘bring-your-own-subscription’ (BYOS) images. These vulnerabilities hit my team across the board: every CSP image was targeted for updates. I will skip the internal all-hands-on-deck activity that followed, but I do want to call out that every CSP image was updated and provided for deployment to our partner CSPs prior to the embargo date.
In hindsight, this was a crazy event: every roadmap item I had went to the back of the line. The mitigation work trumped everything else, and virtually none of the work we kicked the year off with went as planned the month before. But there is a silver lining: this is a true testament to what it means to have Enterprise Grade Software.