A facilitated discussion was started by listing areas where it is difficult to apply performance testing techniques to iterative development. Some discussion followed each point, but they are listed here as a collection of some of the difficulties we face in trying to provide performance feedback early and often. Any attendee (or website visitor!) is invited to expand on any of these points here or elsewhere.
o Rapid iterative testing before a release candidate is ready (fixes may span versions)
o No time set aside for performance consideration (executions, analysis, instrumentation, reaction, etc.)
o Code not ready to test until the end of the sprint
o Feature / scope creep
o High cost/frequency in script maintenance across releases
o Monkey testing (Ok, try it now!)
o Availability of sufficient test data (identification and creation)
o Insufficient documentation of APIs
o Investment in performance testing but not remediation
o Lack of performance acceptance criteria
o Unclear / unidentified dependencies, integration points and constraints
o Biased, invalidated architecture decisions and commitments
o More logic used to justify slower performance, or "it will be fixed in the next release"
o Lack of requirement preparation (pass/fail) before start
o Lack of prototyping
o PMO lacking authority or confidence
o Frequently changing priorities
o Some work takes longer than sprint time
o Reactive prioritization of performance testing
o People mistaking strategy for tactics and goals for a plan
o Performance not treated as importantly as functional requirements
o Difficulty explaining the complexity involved in supporting change
o Lack of training investment with changing stack
o Minimum viable bug fix – lack of long-term vision
o Process and development practice assuming happy path scheduling
o Bias towards small changes
o Performance issues are more likely than other defects to require architecture changes
o Settle for “good enough” attitude – less appetite for bug fixes
o Backlog makes it difficult to address pervasive long-term problems
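Some of the items above (lack of performance acceptance criteria, lack of pass/fail requirement preparation before start) can be made concrete before a sprint begins by encoding agreed limits as an automated check. A minimal sketch in Python, assuming response times have already been collected; the `meets_criteria` helper and the threshold values are illustrative, not something defined at the workshop:

```python
# A minimal, hypothetical pass/fail performance criterion, expressed as code
# so it can run on every iteration rather than only against a release candidate.

def percentile(samples, pct):
    """Return the pct-th percentile of samples using the nearest-rank method."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def meets_criteria(response_times_ms, p95_limit_ms=500,
                   error_rate=0.0, max_error_rate=0.01):
    """Pass only if the 95th-percentile latency and the error rate
    stay within the limits agreed before the sprint started."""
    return (percentile(response_times_ms, 95) <= p95_limit_ms
            and error_rate <= max_error_rate)

# Example run: one slow outlier pushes the 95th percentile past the limit.
times = [120, 180, 210, 250, 300, 320, 410, 450, 480, 700]
print(meets_criteria(times))  # prints False: p95 is 700 ms, above 500 ms
```

Writing the criterion down this way forces the pass/fail discussion to happen before testing starts, rather than after results are already in hand.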
This work is the product of all of the Workshop on Performance and Reliability (WOPR22) attendees. WOPR22 was held May 21-23, 2014, in Malmö, Sweden, on the topic of “Early Performance Testing”. Participants in the workshop included Fredrik Fristedt, Andy Hohenner, Paul Holland, Martin Hynie, Emil Johansson, Maria Kedemo, John Meza, Eric Proegler, Bob Sklar, Paul Stapleton, Andy Still, Neil Taitt, and Mais Tawfik Ashkar.