WOPR13


WOPR13 was held in Portland, Oregon, USA on October 15-17, 2009, and was hosted by WebMD. Eric Proegler was the Content Owner.

Attendees

Calvin Arnesen, James Bach, John Belbute, Mike Bonar, David Chadwick, Ross Collard, Dan Downing, Robert Garrison, Morven Gentleman, Dan Gold, Andy Hohenner, Paul Holland, Masud Khandker, Richard Leeke, Greg McNelly, Jude McQuaid, Jason Nash, Amit Patel, Eric Proegler, Raymond Rivest, Brad Stoner, Hein van den Heuvel, Mohit Verma, Nick Wolf, Steven Woody

Theme: Performance Rules of Thumb

Many testers develop job aids such as models, frameworks and tools. These aids often incorporate rules of thumb, or heuristics, which help monitor progress and quickly identify and resolve problems. Rules of thumb are similar to what have been called “Testing Smells”; practitioners use them to plan and conduct performance tests, analyze test results, and examine live systems.

We encounter familiar situations in our work every day, repeatedly deploying pre-determined solutions without thinking much about them. Our biases (often unrecognized) and our experiences drive our actions. Since these rules seem to be correct, or at least rarely proven wrong, we maintain them as valuable artifacts.

How well do these types of rules work for you? Where do you trust your rules-based judgment to solve problems without undue anxiety? Equally important, where have you been surprised? How do you decide what and how to test, your priorities, and when you are done?

At WOPR13, we will discuss these questions with professionals and experts in software and hardware performance, scalability, reliability, etc. We will review specific first-person experiences, compare findings and share intuitions.

We hope to learn more about how to conduct effective, credible and timely performance testing. We encourage insightful, talented people of varied experience levels and backgrounds to apply. Even if you do not believe you have a relevant experience report to share, we welcome people who work in performance and reliability testing disciplines to contribute to the discussion.

Please bring us your first-person experience report that addresses one or more of these questions:

1.     What rules do you use to plan and conduct performance testing?

2.     How reliable are your rules, and what are the exception cases?

3.     How do they help predict how systems scale – or fail?

4.     What happens if we apply them incorrectly?

5.     What are the characteristics of situations where you recognized a pattern you had seen before and could step ahead to plan, diagnose, and/or solve?

6.     Are there a limited number of ways response time and other measurements change as load increases?

7.     How many failures invalidate a performance test?

8.     What do you know before you start?

9.     What rules are you developing right now?

10.   What rules have you recently abandoned?

Depending on what emerges during the workshop, WOPR may publish content developed for and during the workshop on the WOPR website, as part of an ongoing effort to support and grow the performance testing community.

Work Product