Loop Perforation (2019)
The original loop perforation paper, "Managing Performance vs. Accuracy Trade-offs with Loop Perforation", from ESEC/FSE'11, takes this idea to a beautifully flippant extreme: look at some loops, and replace something like i++ with i += 2 or even i += 21.

We use functionality from llvm::Loop::print() to get a name for each loop; its output lists the basic blocks included in the loop as well as their role within it.

Here, we follow the lead of the original loop perforation paper in making a greedy assumption: that we can choose a final perforation strategy by combining, for each loop, the maximum perforation rate that stayed below the error tolerance.

Now, an enterprising reader might have picked up on the fact that while we claim loop perforation will make your code run faster, we actually executed your entire executable way more times in order to find perforation rates that won't crash or completely destroy your output accuracy. In particular, in addition to not implementing backtracking when combining per-loop perforation rates, we do not use Valgrind or anything similar to detect memory errors in perforated runs, and instead rely only on the process return code and the user-defined accuracy metric.

In some implementations of loop perforation, rather than modifying the induction variable directly, the pass instruments each loop with an additional counter and skips the body on most counter values. Skipping iterations of the outer loop, which the accuracy metric fails to penalize, clearly undermines the intent of the benchmark, once again suggesting that loop perforation gains may be overfitted to the evaluation metrics they're trained on.