Introducing Gauger, a lightweight tool for visualizing
performance changes that occur as software evolves.
Regression testing is a well-established technique to detect both the introduction of new bugs
and the re-introduction of old bugs.
However, most regression tests focus
exclusively on correctness while ignoring
performance. For applications with
performance requirements, developers
run benchmarks to profile their code in
order to determine and resolve bottlenecks. However, unlike regression tests,
benchmarks typically are not executed
and re-validated for every revision. As a
result, performance regressions sometimes
go undetected for far too long.
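To make the idea concrete, here is a minimal sketch of what running a
benchmark at every revision might look like. This is not Gauger's actual
interface; the function names, the JSON history file, and the workload are
all hypothetical, chosen only to illustrate the record-and-compare pattern:

```python
import json
import os
import time


def benchmark(fn, repetitions=5):
    """Time fn() several times; return the best (lowest) wall-clock time.

    Taking the minimum of repeated runs reduces the influence of
    transient system load on the measurement.
    """
    best = float("inf")
    for _ in range(repetitions):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best


def record(history_file, revision, score):
    """Append this revision's score to a JSON history of past results."""
    history = []
    if os.path.exists(history_file):
        with open(history_file) as f:
            history = json.load(f)
    history.append({"revision": revision, "score": score})
    with open(history_file, "w") as f:
        json.dump(history, f)
```

A script like this could be invoked from a commit hook or a continuous
integration job, so that every revision leaves behind a data point to
compare against.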
68 | SEPTEMBER 2011 WWW.LINUXJOURNAL.COM
Compared to correctness issues,
performance regressions can be harder
to spot. An individual absolute performance score rarely is meaningful;
detecting a performance regression
requires relating measurements to
previous results on the same platform.
Furthermore, small changes in external
circumstances (for example, other processes running at the same time) can
cause fluctuations in measurements
that then should not be flagged as
problematic; this makes it difficult to set
hard thresholds for performance scores.
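One common way to keep ordinary noise from triggering false alarms,
sketched here with hypothetical helpers rather than Gauger's own logic,
is to take the median of several runs and compare it against the
previous score with a relative tolerance:

```python
import statistics


def is_regression(samples, baseline, tolerance=0.10):
    """Flag a regression only when the median of repeated runs exceeds
    the baseline by more than the given relative tolerance (10% here),
    so that run-to-run fluctuations are not reported as problems.
    """
    return statistics.median(samples) > baseline * (1 + tolerance)
```

The right tolerance depends on how noisy the benchmark is on the given
platform, which is exactly why a single hard threshold is hard to choose
in advance.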
Also, good measurements often take
significantly longer than correctness
tests. Performance improvements in
one area may cause regressions in
others, which sometimes forces system
architects to consider multiple metrics
at the same time. Finally, performance can be