WOPR17

WOPR17 was held in Cupertino, California on October 20-22, 2011, and was hosted by HP. Richard Leeke was the Content Owner.

Attendees

AJ Alhait, Scott Barber, Goranka Bjedov, Jeremy Brown, Ross Collard, Dan Downing, Craig Fuget, Dawn Haynes, Doug Hoffman, Paul Holland, Pam Holt, Ed King, Richard Leeke, Emily Maslyn, Yury Makedonov, Greg McNelly, John Meza, Blaine Morgan, Mimi Niemiller, Eric Proegler, Raymond Rivest, Bob Sklar, Susan (Xiaodong) Song, Roland Stens, Nishi Uppal, Greg Veith, John Yao

Theme: Finding Bottlenecks

How often do you find and resolve a performance bottleneck, only to discover that after increasing that resource, the system throughput or response time is virtually unchanged, with the constraint simply having shifted to another component? Have you ever seen developers optimize the wrong component, based on an assumption about where the performance-limiting step must be? Have you experienced software or service providers pointing the finger at each other over system performance problems? Have you found systems instrumented at insufficient granularity to enable rapid and unambiguous identification of bottlenecks?
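The shifting-constraint pattern above can be pictured with a toy model (a minimal sketch with hypothetical capacity numbers, not taken from any WOPR experience report): in a serial pipeline, end-to-end throughput is capped by the slowest stage, so relieving one bottleneck only raises throughput until the next stage becomes the constraint.

```python
# Toy model of a serial pipeline: end-to-end throughput is capped by
# the slowest stage. Tier names and capacities are illustrative only.

def pipeline_throughput(stage_capacity_req_per_s):
    """Throughput of a serial pipeline, limited by its slowest stage."""
    return min(stage_capacity_req_per_s.values())

# Baseline: the database tier (200 req/s) is the bottleneck.
before = pipeline_throughput({"web": 500, "app": 450, "db": 200})

# After "fixing" the database (200 -> 800 req/s), throughput rises only
# to 450 req/s: the constraint has simply shifted to the app tier.
after = pipeline_throughput({"web": 500, "app": 450, "db": 800})
```

Quadrupling the database's capacity here buys only a fraction of the expected gain, which is exactly the experience the question describes.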

By sharing experiences with effective (or ineffective) monitoring, results analysis, and diagnostic techniques, we will explore ways to root out bottlenecks more reliably.

Performance testing isn’t just about measuring response times. Facilitating the diagnosis and resolution of performance issues is often the highest-value part of the role, as well as the most challenging. When planning a performance test, it is important to include time to intelligently employ instrumentation and analysis tools. Performance testers and supporting teammates should have the necessary skills to use these tools to collect information and conduct analysis.

Tuning is often viewed as a serial activity: identify and resolve the primary bottleneck, then retest to see where the next bottleneck appears. With careful planning, it is sometimes possible to conduct tests that surface multiple bottlenecks in parallel.
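One hypothetical way to surface several bottlenecks from a single test (a sketch with made-up numbers, not a method endorsed by the workshop): record per-component utilization at a known load, then project where each component would saturate, assuming utilization scales roughly linearly with load. The ordering of the projections suggests the likely sequence of future bottlenecks before the first one is even fixed.

```python
# Sketch: project each component's saturation point from one measurement,
# assuming utilization grows roughly linearly with offered load.
# Tier names and utilization figures are hypothetical.

def projected_capacity(throughput_req_per_s, utilizations):
    """Estimate the throughput at which each component reaches 100%
    utilization (its projected saturation point)."""
    return {name: throughput_req_per_s / u for name, u in utilizations.items()}

# One test run at 100 req/s, with per-tier utilization (0.0 - 1.0):
caps = projected_capacity(100, {"web": 0.20, "app": 0.80, "db": 0.50})
# caps suggests the app tier saturates first (~125 req/s), then the
# database (~200 req/s), then the web tier (~500 req/s).
```

The linear-scaling assumption is a simplification (real systems often degrade non-linearly as they approach saturation), but even a rough projection like this can queue up the next diagnostic targets from a single test.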

At WOPR17, we will explore all aspects of identifying and resolving bottlenecks. We are looking for experience reports that will advance our community’s understanding of this subject. Here are some questions intended to spark your thinking:

  • How do you select instrumentation points and monitoring to surface bottlenecks?
  • What tools and techniques do you use for conducting analysis and diagnosis?
  • What experimental diagnostic tests have you used to locate bottlenecks?
  • Do you have a library of tell-tale charts depicting different classes of constraints?
  • How can you start looking for the next bottleneck before the first one has been resolved?
  • How do you ensure that all interested parties understand and agree on the cause of the performance constraints?

These experiences may come in all shapes and sizes, perhaps touching on one or more of the following subject areas:

  • Monitoring
  • Instrumentation
  • Analysis and visualization tools
  • Diagnostic techniques
  • Staffing your team
  • Diagnostic planning
  • Facilitation skills

As you prepare your experience, consider the following focusing questions:

  • Why is it often hard to identify a bottleneck?
  • Why do people often jump to false conclusions?
  • What is the cost to the organisation of a protracted diagnostic and remediation process?
  • Conversely, what is the value to the organisation of an efficient diagnostic and remediation process?

Work Product

 

Private workshop collaboration space here.
