WOPR15 was held in San Jose, California on October 28-30, 2010, and was hosted by eBay. Mike Bonar was the Content Owner.
Jon Bach, John Belbute, Goranka Bjedov, Mike Bonar, Jeremy Brown, Ross Collard, Siddhartha Devadhar, Dan Downing, Thomas Frasher, Anil Gurijala, Julian Harty, Doug Hoffman, Andy Hohenner, Paul Holland, Pam Holt, Dave Jewell, Felipe Knorr Kuhn, Norma Lopez, Greg McNelly, Jude McQuaid, Derek Mead, Lars Olsson, Eric Proegler, Harsh Sinha, Leo Susanto, Hein van den Heuvel, Greg Veith, SR Venkatramanan
Theme: Building Performance Test Scenarios with a Context-Driven Approach
Every testing project has its own success criteria and its own set of factors that affect how it unfolds over time: people, requirements, deadlines, budget, existing test plans and artifacts, available tools, and other external pressures. These factors differ from project to project; in other words, every project has its own context. To conduct the most effective performance tests, we need to be aware of the context in which we find ourselves, how it influences our actions and decisions, and how we can adapt our efforts to make our testing project more successful.
At WOPR15, we would like to focus on how the context of your testing project influenced your design and use of performance test scenarios. Performance test scenarios live throughout the entire lifecycle of a testing project, including planning, execution, and reporting, so your experience report could describe any part of a project, or all of it. Don’t be too concerned if you feel you don’t have a lot to say about performance test scenarios in particular but still have an interesting experience you wish to share; your report is still welcome. The theme is designed to help WOPR participants focus on areas where we can learn, and to explore how we react to context and customize our testing in response.
Some points to help you think about performance test scenarios:
· A performance test scenario is a description of an event or series of actions and events.
· Performance test scenarios may describe performance testing workloads, but they can also describe what activities we simulate, how many threads we run in the simulation, the environment we use to test, how we structure our test schedule, and who performs the tests.
· Performance test scenarios may describe what resources we monitor, which errors we record, which results we capture, how many tests we run, and how we measure success.
· A performance test scenario is also a “projection” of actions and events. In other words, a performance test scenario is “forward looking.” The tests we perform may or may not give us the results we want, and we are often surprised by the results we get.
· Your thinking and choices, as captured in performance test scenarios, may contain assumptions and value judgments you may not be aware of; or they may be very deliberate.
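For readers who like concrete artifacts, the ingredients listed above can be sketched as a small data structure. The field names and values below are hypothetical illustrations, not a format prescribed by WOPR or any particular tool:

```python
from dataclasses import dataclass

@dataclass
class PerformanceTestScenario:
    """A hypothetical sketch of the ingredients a scenario might record."""
    name: str
    activities: list        # what user actions we simulate
    virtual_users: int      # how many threads we run in the simulation
    environment: str        # the environment we use to test
    monitored_resources: list  # what resources we monitor
    success_criteria: dict  # how we measure success

# An illustrative scenario, forward-looking by nature: it records what
# we intend to run and how we will judge the results, before we know
# what those results will be.
checkout_peak = PerformanceTestScenario(
    name="checkout under peak load",
    activities=["browse", "add_to_cart", "checkout"],
    virtual_users=200,
    environment="staging",
    monitored_resources=["cpu", "memory", "db_connections"],
    success_criteria={"p95_response_ms": 500, "max_error_rate": 0.01},
)

print(checkout_peak.name, checkout_peak.virtual_users)
```

Even a sketch this small embeds assumptions and value judgments (why 200 users? why staging? why the 95th percentile?), which is exactly the kind of thinking the theme asks you to examine.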
Some questions to help you focus your experience report:
· How did you choose the performance test scenarios for your project?
· Did the structure of your team influence the performance test scenario selection process?
· Did you document your performance test scenarios, and if so in what format?
· If you did not document performance test scenarios, how did you proceed, and how did that work for you?
· What kinds of things influenced how you executed your performance test scenarios?
· Did you drop some performance test scenarios and pick up others as the test progressed?
· Did your approach to performance test scenarios change as the project progressed?
· Were there any particular challenges associated with performance test scenarios?
· Did you present the results for your performance test scenarios individually or all together at the end of testing?
· How do we know we’ve chosen “good” performance test scenarios?
· How do we know we’ve chosen “enough” performance test scenarios?