WOPR28

Henrik Rexed, the WOPR28 Content Owner, and the WOPR Organizers invite you to apply for WOPR28, hosted by Neotys in Marseille on May 29-31, 2019. The traditional Pre-WOPR Dinner will take place on Tuesday, May 28th.

Many thanks to Neotys for their support of WOPR, and for supporting knowledge sharing on the practice of performance testing. For more background on these subjects, you could visit these links:

  • https://www.neotys.com/insights/performance-testing
  • https://www.neotys.com/insights/load-testing

Automated Performance Testing

Load and performance testing generates simulated traffic to examine the performance characteristics of a system. Typically, the major steps are:

  • Examine the workload of a system to derive the activities of the system, and their quantities/frequencies
  • Create scripts and supporting data that simulate (some of) these activities and measure their responsiveness. Create load models that group these scripts in order to create simulations that will exercise a system at a desired level of activity, usually (hopefully!) similar to what is expected to occur in production (a minimal sketch of such a script follows this list)
  • Configure measurement of components and resources, possibly adding logging or additional checks to closely examine systems while simulations are running
  • Execute these simulations, watching closely for any evidence that the results of the test are fouled or distorted by errors in the system under test, or in building the simulation
  • Evaluate the results of these simulations, assign them context and meaning, and then report on results
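
To make the scripting step concrete, here is a minimal sketch in Python: one scripted activity against a hypothetical endpoint, executed repeatedly while its responsiveness is measured. The target URL, iteration count, and summary statistics are illustrative assumptions, not a recommendation of any particular tool.

    # A minimal sketch of one scripted activity with response-time measurement.
    # The target URL below is a hypothetical placeholder, not a real system.
    import statistics
    import time
    import urllib.request

    TARGET = "https://example.com/"  # hypothetical system under test

    def browse_homepage():
        """One scripted activity: fetch a page, return its response time in seconds."""
        start = time.perf_counter()
        with urllib.request.urlopen(TARGET, timeout=10) as resp:
            resp.read()
            if resp.status != 200:  # watch for errors that would foul the results
                raise RuntimeError("unexpected status %s" % resp.status)
        return time.perf_counter() - start

    def run_simulation(iterations=20):
        """Execute the activity repeatedly and summarize its responsiveness."""
        samples = [browse_homepage() for _ in range(iterations)]
        print("median: %.3fs  p95: %.3fs" % (
            statistics.median(samples),
            statistics.quantiles(samples, n=20)[-1]))

    if __name__ == "__main__":
        run_simulation()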

Performance engineers may launch tests to:

  • Detect performance and functional software regressions with automated testing
  • Validate the ability of a system to support an expected production load
  • Determine the capacity and resource limitations of the system
  • Explore the system’s ability to handle and recover from crashes, network interruptions, failovers, and other events that occur in complex systems
  • Experiment with changes to the system’s components, tuning, and/or resources

These tests rely on accuracy in modeling system activities, their quantities, and their frequencies. If the models are not accurate, the test results are unhelpful and perhaps even deceptive. Historically, load testers manually selected activities to simulate and carefully built bespoke load models. With today’s faster release cycles, teams are seeking to automate performance tests so that performance can be examined frequently.

The automation of performance test execution in Continuous Integration (CI) pipelines is common, but sophisticated analysis remains mostly manual, and test design is still almost entirely so. Performance testing in CI is typically focused on detecting regressions with simple and small loads to keep the load scripts durable and evaluation easy. But what if we could do more sophisticated load testing automatically?
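
As one sketch of what such a regression gate could look like, the snippet below compares the current build’s p95 latency against a stored baseline and fails the pipeline if it has drifted too far. The baseline value, tolerance, and command-line interface are illustrative assumptions, not any specific CI product’s API.

    # A minimal sketch of a CI regression gate. It assumes the pipeline has
    # already produced p95 latencies (in ms) for a baseline and the current build.
    import sys

    BASELINE_P95_MS = 120.0  # hypothetical stored baseline
    TOLERANCE = 0.15         # fail the build if more than 15% slower

    def within_tolerance(current_p95_ms):
        """Return True if the current build is close enough to the baseline."""
        return current_p95_ms <= BASELINE_P95_MS * (1 + TOLERANCE)

    if __name__ == "__main__":
        current = float(sys.argv[1])  # e.g. passed in by the CI job
        if not within_tolerance(current):
            print("FAIL: p95 %.1fms exceeds baseline %.1fms + 15%%"
                  % (current, BASELINE_P95_MS))
            sys.exit(1)  # a non-zero exit code fails the pipeline stage
        print("PASS")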

Today, Observability – collecting detailed information about every transaction in Production – is pursued as a technique both for troubleshooting and for learning more about how users interact with a system. It offers another benefit: as our ability to observe production users improves, so does our ability to generate test automation from production traffic, including our load models.

At WOPR28, we want to hear about your experiences with Automating Performance Testing.

The emergence of technologies such as Application Performance Management (APM), Artificial Intelligence (AI), and Machine Learning (ML) makes it easier to imagine a path to automated performance testing – leveraging production data to automatically create load models and test scripts, and making go/no-go decisions based on automatic result analysis.

With Observability techniques, it’s easy to see how we could directly connect our load models to data from production. But what if we could also generate load scripts from the data we gather in production?

We need more than URLs to generate test scripts – we need the relevant data around the user and the data to create the script. For stateful and interesting web applications, log files were never going to be enough information to get there. But the additional data captured by modern observability systems, including session tracing, combined with the ability to analyze large log data sets, can help us reach models that reflect all of the complexity of a production workload.
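
As an illustration of the kind of processing this implies, the sketch below groups trace records by a session ID and orders them in time, recovering the per-user activity sequences a generated load script would replay. The record fields and sample data are hypothetical.

    # A minimal sketch of reconstructing user sessions from trace records.
    # The field names and the sample records are hypothetical.
    from collections import defaultdict

    records = [
        {"session": "s1", "ts": 1.0, "op": "login"},
        {"session": "s1", "ts": 2.5, "op": "add_to_cart"},
        {"session": "s2", "ts": 1.2, "op": "search"},
        {"session": "s1", "ts": 4.0, "op": "checkout"},
    ]

    def reconstruct_sessions(records):
        """Group spans by session ID and sort them in time, yielding the
        per-user activity sequences a load script would replay."""
        sessions = defaultdict(list)
        for r in records:
            sessions[r["session"]].append(r)
        return {sid: [r["op"] for r in sorted(spans, key=lambda r: r["ts"])]
                for sid, spans in sessions.items()}

    if __name__ == "__main__":
        for sid, ops in reconstruct_sessions(records).items():
            print(sid, "->", " -> ".join(ops))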

Our load models have typically been pale and shallow compared to what really happens in production. We usually simplify the complex activities of a system enough that we can script a few of them and generate a similar load on the system we’re testing. This is still expensive and time-consuming – with manual techniques, we could never script all of the activities in a system, or model them in enough detail to truly recreate the real world.

AI/ML have recently brought techniques for sophisticated statistical analysis within reach of many engineers. Combined with the detailed detection capabilities of APM tools, these techniques bring us closer than ever to automated analysis of performance test results.
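
As one small example of statistical analysis that is now within easy reach, the sketch below uses a plain permutation test (standard-library Python, with fabricated sample latencies) to flag whether a candidate build’s response times differ significantly from a baseline.

    # A minimal sketch of automated result analysis: a permutation test that
    # flags whether the candidate build's latencies differ from the baseline's.
    # Both samples below are fabricated placeholders.
    import random
    import statistics

    baseline = [101, 98, 105, 99, 102, 100, 97, 103]    # ms
    candidate = [108, 112, 104, 115, 109, 111, 107, 110]

    def permutation_test(a, b, trials=10000, seed=0):
        """Estimate the p-value of the observed difference in mean latency."""
        rng = random.Random(seed)
        observed = abs(statistics.mean(a) - statistics.mean(b))
        pooled = a + b
        hits = 0
        for _ in range(trials):
            rng.shuffle(pooled)
            diff = abs(statistics.mean(pooled[:len(a)]) -
                       statistics.mean(pooled[len(a):]))
            if diff >= observed:
                hits += 1
        return hits / trials

    if __name__ == "__main__":
        p = permutation_test(baseline, candidate)
        print("p = %.4f: %s" % (
            p, "regression suspected" if p < 0.05 else "no significant change"))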

APM and Observability techniques are sophisticated enough that they can detect crashes and errors in real time with great fidelity. What if these could be used to recreate eventful or significant moments for debugging?

There are some clear challenges to realizing these opportunities. Production data is stored in a variety of formats at the component level throughout the solution. Log centralization and monitoring systems start out with precise data, then aggregate it over time to preserve index performance and reduce storage costs. These aggregations lose the level of detail necessary to reconstruct individual sessions – and that is exactly the level of detail needed to meaningfully analyze observability data into a load model and scripts.

Conference Location and Dates

WOPR28 will be hosted by Neotys in Marseille on May 29-31, 2019. The traditional Pre-WOPR Dinner will take place on Tuesday, May 28th.

If you would like to attend WOPR28, please submit your application soon. We may not be able to add any more attendees at this point (mid-April) unless there are cancellations from current invitees.

About WOPR

WOPR is a peer workshop for practitioners to share experiences in system performance and reliability, allow people interested in these topics to network with their peers, and to help build a community of professionals with common interests. Participants are asked to share first-person experience reports which are then discussed by the group. More information about Experience Reports is available at http://www.performance-workshop.org/experience-reports/.

WOPR is not vendor-centric, consultant-centric, or end user-centric, but strives to accommodate a mix of viewpoints and experiences. We are looking for people who are interested in system performance, reliability, testing, and quality assurance.

WOPR has been running since 2003, and over the years has included many of the world’s most skillful and well-known performance testers and engineers. To learn more about WOPR, visit our About page, connect with us on LinkedIn and Facebook, or follow @WOPR_Workshop.

Costs

WOPR is not-for-profit. We do ask WOPR participants to help us offset expenses, as their employers greatly benefit from the learning their employees can get from WOPR. The expense-sharing amount for WOPR28 is €300, collected via PayPal. If you are invited to the workshop, you will be asked to pay the expense-sharing fee to indicate acceptance of your invitation. We are happy to discuss the fee.

Applying for WOPR

WOPR conferences are invitation-only and sometimes over-subscribed. For WOPR28, we plan to limit attendance to about 20 people. We usually have more applications and presentations than can fit into the workshop; not everyone who submits a presentation will be invited to WOPR, and not everyone invited to WOPR will be asked to present.

Our selection criteria are weighted heavily towards practitioners and interesting ideas expressed in WOPR applications. We welcome anyone with relevant experiences or interests, and we reserve seats to identify and support promising up-and-comers. Please apply, and see what happens.

The WOPR organizers will select presentations, and invitees will be notified by email according to the above dates. You can apply for WOPR28 here.
