WOPR22 Brainstorm: Illusion of Reality

Our last brainstorming exercise was an examination of the ways in which we acknowledge that our performance models are likely to be inaccurate compared to reality. Since much of performance testing centers on “realistic” simulation, and iterative development forces us to test faster with less information, it was instructive to review how we approach simulation fidelity. […]

WOPR22 Brainstorm: Optimizations

During WOPR22, we spent a few minutes before lunch collecting systems-based (as opposed to code-based) optimizations that the group had used in the past. This was time- and hunger-bound. It is not an exhaustive or ranked list, and it does not include enough information to use these as heuristics, but it might be a source of ideas. This […]

WOPR22 Practitioner Tool Survey

During WOPR22, we took an informal survey of tools that practitioners have used regularly over the last year. It should be remembered that tools are frequently chosen for us by non-practitioners, for reasons besides fitness for purpose. This survey does not meet any standard of statistical significance, and does not include many well-known tools – just the […]

WOPR22 Is Underway

WOPR22 started off with John Meza discussing the performance metrics that are generated per build for software his company produces. John’s team publishes charts tracking rendering performance of certain GIS data test cases across builds. This alerts Development promptly when a performance degradation has been introduced, allowing them to address problems early. The discussion that followed unearthed […]
