You’ve worked hard to define, develop, and execute a performance test on an application to determine its behavior under load. What’s next? The answer is definitely not to generate and send a canned report from your testing tool, or to report a few averages. Results interpretation and reporting are where a performance tester earns their stripes. We’ll look at results from actual projects and together puzzle out the essential message in each. This will be an interactive session: a graph is displayed, a little context is provided, and you are asked, “What do you see here?” We will form hypotheses, draw tentative conclusions, determine what further information we need to confirm them, and identify the key graphs that give us the best insight into system performance and bottlenecks. We’ll then codify the analytic steps we went through in the first part of the session, and consider a CAVIAR approach to collecting and evaluating test results: Collecting, Aggregating, Visualizing, Interpreting, Analyzing, and Reporting.

Session Takeaways:
- Training in interpreting results: Data + Analysis = Information.
- Examples of telling performance test graphs.
- Advice on reporting: compel action with your information.
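As a small taste of the Aggregating step, and of why averages alone mislead, here is a minimal sketch in plain Python. The lognormal response times are invented for illustration; none of this data comes from the projects shown in the session:

```python
import random
import statistics

# Hypothetical load-test data: 10,000 response times (seconds) drawn from
# a lognormal distribution, which mimics the long right tail real systems
# show under load. In practice these would come from your tool's raw log.
random.seed(42)  # reproducible example
samples = [random.lognormvariate(mu=-1.0, sigma=0.8) for _ in range(10_000)]

# Aggregate: the mean versus the percentiles users actually feel.
mean = statistics.mean(samples)
cuts = statistics.quantiles(samples, n=100)  # 99 cut points = percentiles 1..99
p50, p90, p95, p99 = (cuts[p - 1] for p in (50, 90, 95, 99))

print(f"mean: {mean:.3f}s")
print(f"p50: {p50:.3f}s  p90: {p90:.3f}s  p95: {p95:.3f}s  p99: {p99:.3f}s")
# The mean lands between p50 and p90, so a report built on averages hides
# the slowest 10% of requests, exactly the ones users complain about.
```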