After a yum upgrade on one of the computing nodes, I reserved one core on that node for my iozone benchmark job and one core on a node still running the pre-upgrade kernel – the methodology was explained previously. Using the same script, I created the images presented below.
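The author's actual script is not shown; the following is only a minimal sketch of how such a pinned iozone run might be launched. The core number, file-size limit, and output filename are illustrative assumptions, while the iozone flags themselves (`-a` automatic mode, `-g` maximum file size, `-R -b` Excel-style report) are standard.

```python
import shlex

def iozone_cmd(core, max_size="512m"):
    """Build a taskset-pinned iozone invocation (sketch only).

    -a  : automatic mode (sweeps file and record sizes)
    -g  : maximum file size for the sweep
    -R/-b : write an Excel-style report to a file
    """
    return ["taskset", "-c", str(core),          # pin to the reserved core
            "iozone", "-a", "-g", max_size,
            "-R", "-b", f"iozone_core{core}.xls"]  # hypothetical output name

print(shlex.join(iozone_cmd(5)))
```

The same command would be run on the patched and the unpatched node, each pinned to its reserved core, so the two series are comparable.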
From this visualisation it’s easy to read that the recent patches had a negative impact on performance, especially for:
- fread operations on small files (especially smaller than 32MB), and
- read and reread operations on files in the 4MB–32MB range.

However, when we check the deviation of the difference, it is sometimes bigger than the difference itself. This actually stopped me from publishing these results a week earlier, because I was not sure how to interpret it. From my knowledge of these patches I thought there must be a real impact, but honestly I was unable to tell whether my benchmark proved anything.
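The interpretation problem above can be made concrete with a small sketch: if the mean of the per-run bandwidth differences is smaller than their standard deviation, the measurement is inconclusive. The bandwidth numbers below are synthetic, chosen only to mimic noisy runs on a loaded system.

```python
from statistics import mean, stdev

def assess(unpatched, patched):
    """Per-run bandwidth differences (MB/s); positive = unpatched faster."""
    diffs = [u - p for u, p in zip(unpatched, patched)]
    d_mean = mean(diffs)
    d_sd = stdev(diffs)
    # Only call the result conclusive when the signal exceeds the noise
    conclusive = abs(d_mean) > d_sd
    return d_mean, d_sd, conclusive

# Synthetic example: five noisy runs under load (made-up numbers)
unpatched = [5200, 4800, 5500, 5100, 4900]
patched   = [5000, 5100, 4700, 5300, 4600]
m, sd, ok = assess(unpatched, patched)
print(f"mean diff {m:.0f} MB/s, sd {sd:.0f} MB/s, conclusive: {ok}")
# → mean diff 160 MB/s, sd 439 MB/s, conclusive: False
```

With this synthetic data the 160 MB/s mean difference is swamped by a 439 MB/s deviation – exactly the situation that made the first run hard to interpret.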
A nice thing happened during the weekend. Jobs submitted by users on Friday finished on Saturday afternoon :). Thankfully no-one worked during the weekend, so I was able to repeat the same test on the system without load. (Actually, it was a nearly no-load situation, since I saw a few users transferring data to/from the filesystem and accessing Lustre over the CIFS gateway; nevertheless, the load in terms of IOPS seen on the backend RAID controllers was under 10% of the maximum.) The results achieved during this run are depicted in the figure below.
This result is much clearer: the deviation of the difference of bandwidths achieved on patched and unpatched nodes is low in the area of interest. Part of the results confirms the difference seen during the first run. Strictly speaking, for rewrite, read and reread of 4MB–16MB files we see a bandwidth difference of 2GB/s in favour of the unpatched server; an even bigger difference is visible for the frewrite and fread operations. What is interesting is that the difference is bigger for small record sizes, meaning that with proper I/O handling we are able to reduce the negative impact of the patches.
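The record-size effect is consistent with a per-syscall overhead model: these patches add a roughly fixed cost to every system call, so larger records amortise it over fewer calls. The bandwidth and per-call cost below are illustrative assumptions, not measured values.

```python
def transfer_time(total_mb, record_kb, bw_mbps=5000.0, syscall_us=2.0):
    """Toy model: streaming time plus a fixed per-syscall cost.

    bw_mbps and syscall_us are hypothetical; the point is the scaling,
    not the absolute numbers.
    """
    n_calls = total_mb * 1024 / record_kb       # one syscall per record
    return total_mb / bw_mbps + n_calls * syscall_us / 1e6

small = transfer_time(64, 4)     # 4 KB records: many syscalls
large = transfer_time(64, 1024)  # 1 MB records: few syscalls
print(f"4K records: {small*1000:.1f} ms, 1M records: {large*1000:.1f} ms")
# → 4K records: 45.6 ms, 1M records: 12.9 ms
```

In this toy model the 4 KB-record transfer spends most of its time in syscall overhead, while the 1 MB-record transfer is dominated by raw bandwidth – which is why larger records hide much of the patch cost.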
Summing up the results: surprisingly, for a vast range of parameters the patched kernel's performance is quite comparable to the unpatched one – on the plot above the difference looks to be of the same magnitude as the deviation. Comparing the tests under load and without load, we see that statistical analysis of a series of tests under load can be used for qualitative assessment, even when the results are quite volatile.
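Why averaging a series of noisy runs still yields a usable qualitative answer can be sketched as follows: the standard error of a mean of n runs shrinks roughly by a factor of sqrt(n). All numbers here are simulated, not taken from the actual benchmark.

```python
import random
from math import sqrt
from statistics import mean, stdev

random.seed(1)

# Simulate 20 noisy iozone runs per kernel (entirely made-up parameters)
true_unpatched, true_patched, noise = 5000.0, 4800.0, 400.0
runs_u = [random.gauss(true_unpatched, noise) for _ in range(20)]
runs_p = [random.gauss(true_patched, noise) for _ in range(20)]

diff = mean(runs_u) - mean(runs_p)
# Standard error of the difference of two means of 20 runs each:
# each mean's error shrinks by sqrt(20) versus a single run
sem = sqrt(stdev(runs_u) ** 2 / 20 + stdev(runs_p) ** 2 / 20)
print(f"averaged diff {diff:.0f} +/- {sem:.0f} MB/s")
```

With per-run noise around 400 MB/s, averaging 20 runs brings the uncertainty down to roughly 130 MB/s, small enough to resolve a 200 MB/s effect that a single run would bury.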
Let me also share some nice pictures – averages of 20 runs of the iozone test on Lustre 2.10.2 with the kernel versions of interest. See how smooth the curves are, thanks to iozone being the only application intensively using the file system at that time :)