WOPR5

WOPR5 was held in Ottawa, Ontario, Canada on October 6-8, 2005, and was hosted by Alcatel. Rob Sabourin was the Content Owner.

Attendees

Henri Amistadi, Calvin Arneson, Scott Barber, Andrew Bell, Paul Carvalho, Ross Collard, Alan Dunham, Dan Fowler, Ehab Hari, Ahmed Hamza, Julian Harty, John Hazel, Andy Hohenner, Paul Holland, Pam Holt, Maurice Mancini, David Miller, Mike Pearl, Timmons Player, Raymond Rivest, Rob Sabourin, Serge Simard, Keith Stobie, Bernie Velivis, Steven Woody

Theme: Experiences in High Availability Testing

Questions we are interested in exploring at WOPR5 include:

– How well has testing been able to predict availability?

– Has testing been able to adequately predict shortfalls and losses of availability, pinpoint where the losses are likely to occur and under what conditions, and provide adequate feedback to guide the problem resolution (typically tuning and debugging)?

– Under what circumstances has testing inaccurately predicted availability?

WOPR5 will give working practitioners (testers) a chance to examine, discuss and brainstorm these questions.

Our Objectives (What We Are Seeking in This CFP)

WOPR5 is seeking experience reports (ERs) describing your relevant experiences and innovations, from past projects and from current initiatives (as-yet unfinished and unproven projects). For a description and samples of ERs, see www.performance-workshop.org.

We are more interested in effective presentations and enlightening exchanges than in formal papers. A detailed paper is welcome, though not required; an organized outline is enough for your presentation. We are looking for informative, in-depth storytelling by experienced practitioners. Your proposal to present should contain enough substance for us to understand and evaluate it. Content is more important than format. Your presentation should omit any confidential data (anything that requires an NDA).

We are interested in ERs on both successful and failed attempts to predict availability through testing. If you have valuable experiences to share about using test tools, technologies, techniques and models to predict availability effectively, please contact us.

Reports and presentations are welcome over a broad range of topics related to availability testing, as long as they demonstrate useful lessons about this type of testing. These topics might include negotiating service level agreements, developing operational profiles, building system models, building automated test frameworks and test tools, generating test data, handling system scalability in the test lab, tuning systems for availability, and interpreting test results to predict live availability. The test domain is also broad, and may include real-time embedded devices, web sites, and international telecom networks.

We are also seeking progress reports on new approaches being applied (innovation reports, or IRs), to stimulate dialogue and brainstorming about the pros and cons of these new approaches.

Specific Questions for You to Address

In addition to the major questions above, you are likely to be asked other questions concerning the details of your own particular test strategies and project outcomes. Plan to address the relevant ones in your presentation.

The following list of questions is long enough that it may seem intimidating, but bear in mind that these are just examples. Questions about your presentation may include:

– What were the test objectives and scope? How were these established?

– How did your SLA terms and conditions map to the system behavior?

– What regulatory requirements, if any, did you need to comply with? How were these accommodated?

– How did you set up the test environment?

– What did you measure to predict availability?

– How did you adjust the collected data to allow for the differences between the test lab and the live environment?

– How confident were you with your test results?

– How did you use test results to predict availability?

– How did you compute predicted availability? (See the sketch after this list.)

– Did you produce any false or misleading test results? How did you know?

– How did you validate the results and determine the likely errors in prediction?

– Did the test project proceed according to plan? How were deviations from plan and surprises handled?

– What worked? What did not?

– How did you conduct the testing?

– When did you perform availability testing?

– Which tools were used?

– How was modeling used?

– How were exploratory testing and experimentation used?

– Why did you select the approach that you used?
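
Several of the questions above, and the one on computing predicted availability in particular, come down to the textbook steady-state availability calculation. The following minimal sketch in Python shows that calculation; the function names and the MTBF/MTTR figures are illustrative assumptions, not drawn from any WOPR5 report.

```python
# Minimal sketch: steady-state availability from measured MTBF and MTTR.
# The figures below are illustrative assumptions, not real test results.

def predicted_availability(mtbf_hours: float, mttr_hours: float) -> float:
    """A = MTBF / (MTBF + MTTR): the fraction of time the system is up."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def downtime_hours_per_year(availability: float) -> float:
    """Expected unavailable hours in an 8760-hour year."""
    return (1.0 - availability) * 8760.0

a = predicted_availability(mtbf_hours=2000.0, mttr_hours=0.5)
print(f"predicted availability: {a:.6f}")                        # ~0.999750
print(f"downtime per year: {downtime_hours_per_year(a):.2f} h")  # ~2.19 h
```

In practice, the MTBF and MTTR observed in the lab must still be adjusted for the differences between the test environment and production, which is exactly what several of the questions above probe.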
