Bug 1576835 - Update DAMP test writing documentation r=ochameau
backed out by changeset 72c7f3199f9a
author: Julian Descottes <jdescottes@mozilla.com>
Wed, 28 Aug 2019 14:34:19 +0000
changeset 554168 a5bc37462cee0e6f72720336c4fffb68b2c466ec
parent 554167 a68f1e98cf9031f628b972c1579b2ea3bcb6e8e2
child 554169 10ea0cf78737da248968dfd1404d6d2d639123dc
push id: 2165
push user: ffxbld-merge
push date: Mon, 14 Oct 2019 16:30:58 +0000
reviewers: ochameau
bugs: 1576835
milestone: 70.0a1
Differential Revision: https://phabricator.services.mozilla.com/D43580
devtools/docs/SUMMARY.md
devtools/docs/backend/actor-e10s-handling.md
devtools/docs/tests/performance-tests.md
devtools/docs/tests/writing-perf-tests-example.md
devtools/docs/tests/writing-perf-tests-tips.md
devtools/docs/tests/writing-perf-tests.md
--- a/devtools/docs/SUMMARY.md
+++ b/devtools/docs/SUMMARY.md
@@ -28,16 +28,18 @@
     * [Chrome mochitests](tests/mochitest-chrome.md)
     * [DevTools mochitests](tests/mochitest-devtools.md)
     * [Node tests](tests/node-tests.md)
     * [Tips](tests/tips.md)
   * [Writing tests](tests/writing-tests.md)
   * [Debugging intermittent failures](tests/debugging-intermittents.md)
   * [Performance tests (DAMP)](tests/performance-tests.md)
     * [Writing a new test](tests/writing-perf-tests.md)
+    * [Example](tests/writing-perf-tests-example.md)
+    * [Advanced tips](tests/writing-perf-tests-tips.md)
 * [Files and directories](files/README.md)
   * [Adding New Files](files/adding-files.md)
 * [Tool Architectures](tools/tools.md)
   * [Inspector](tools/inspector.md)
     * [Panel Architecture](tools/inspector-panel.md)
     * [Highlighters](tools/highlighters.md)
   * [Memory](tools/memory-panel.md)
   * [Debugger](tools/debugger-panel.md)
--- a/devtools/docs/backend/actor-e10s-handling.md
+++ b/devtools/docs/backend/actor-e10s-handling.md
@@ -1,13 +1,13 @@
 # How to handle E10S in actors
 
 In multi-process environments, most devtools actors are created and initialized in the child content process, to be able to access the resources they are exposing to the toolbox. But sometimes, these actors need to access things in the parent process too. Here's why and how.
 
-{% hint style="error" %}
+{% hint style="danger" %}
 
 This documentation page is **deprecated**. `setupInParent` relies on the message manager which is being deprecated. Furthermore, communications between parent and content processes should be avoided for security reasons. If possible, the client should be responsible for calling actors both on the parent and content process.
 
 This page will be removed when all actors relying on this API are removed.
 
 {% endhint %}
 
 ## Use case and examples
--- a/devtools/docs/tests/performance-tests.md
+++ b/devtools/docs/tests/performance-tests.md
@@ -142,17 +142,17 @@ It will only display regressions and imp
 
 DAMP is based on top of a more generic test suite called [Talos](https://wiki.mozilla.org/Buildbot/Talos).
 Talos is a Mozilla test suite to follow all Firefox components performance.
 It is written in Python and here are [the sources](https://dxr.mozilla.org/mozilla-central/source/testing/talos/) in mozilla-central.
 Compared to the other test suites, it isn't run on the cloud, but on dedicated hardware.
 This is to ensure performance numbers are stable over time and between two runs.
 Talos runs various types of tests. More specifically, DAMP is a [Page loader test](https://wiki.mozilla.org/Buildbot/Talos/Tests#Page_Load_Tests).
 The [source code](http://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/) for DAMP is also in mozilla-central.
-The [main script](http://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/damp.js) contains the implementation of all the tests described in "What does it do?" paragraph.
+See [Writing new performance test](./writing-perf-tests.md) for more information about the implementation of DAMP tests.
 
 ## How to see the performance trends?
 
 You can find the dedicated performance dashboard for DevTools at http://firefox-dev.tools/performance-dashboard. You will find links to trend charts for various tools:
 * [Inspector dashboard](http://firefox-dev.tools/performance-dashboard/tools/inspector.html?days=60&filterstddev=true)
 * [Console dashboard](http://firefox-dev.tools/performance-dashboard/tools/console.html?days=60&filterstddev=true)
 * [Netmonitor dashboard](http://firefox-dev.tools/performance-dashboard/tools/netmonitor.html?days=60&filterstddev=true)
 * [Debugger dashboard](http://firefox-dev.tools/performance-dashboard/tools/debugger.html?days=60&filterstddev=true)
new file mode 100644
--- /dev/null
+++ b/devtools/docs/tests/writing-perf-tests-example.md
@@ -0,0 +1,67 @@
+# Performance test example: performance of click event in the inspector
+
+Let's look at a trivial but practical example and add a simple test to measure the performance of a click in the inspector.
+
+First we create a file under [tests/inspector](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/tests/inspector) since we are writing an inspector test. We call the file `click.js`.
+
+We will use a dummy test document here: `data:text/html,click test document`.
+
+We prepare the imports needed to write the test, from head.js and inspector-helpers.js:
+- `testSetup`, `testTeardown`, `openToolbox` and `runTest` from head.js
+- `reloadInspectorAndLog` from inspector-helpers.js
+
+The full code for the test looks as follows:
+```
+const {
+  reloadInspectorAndLog,
+} = require("./inspector-helpers");
+
+const {
+  openToolbox,
+  runTest,
+  testSetup,
+  testTeardown,
+} = require("../head");
+
+module.exports = async function() {
+  // Define here your custom document via a data URI:
+  const url = "data:text/html,click test document";
+
+  await testSetup(url);
+  const toolbox = await openToolbox("inspector");
+
+  const inspector = toolbox.getPanel("inspector");
+  const window = inspector.panelWin; // Get inspector's panel window object
+  const body = window.document.body;
+
+  await new Promise(resolve => {
+    const test = runTest("inspector.click");
+    body.addEventListener("click", function () {
+      test.done();
+      resolve();
+    }, { once: true });
+    body.click();
+  });
+
+  // Check if the inspector reload is impacted by click
+  await reloadInspectorAndLog("click", toolbox);
+
+  await testTeardown();
+};
+```
+
+Finally we add an entry in [damp-tests.js](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/damp-tests.js):
+```
+  {
+    name: "inspector.click",
+    path: "inspector/click.js",
+    description:
+      "Measure the time to click in the inspector, and reload the inspector",
+  },
+```
+
+Then we can run our test with:
+```
+./mach talos-test --activeTests damp --subtest inspector.click
+```
+
new file mode 100644
--- /dev/null
+++ b/devtools/docs/tests/writing-perf-tests-tips.md
@@ -0,0 +1,41 @@
+# How to write a good performance test?
+
+## Verify that you wait for all asynchronous code
+
+If your test involves asynchronous code, which is very likely given the DevTools codebase, review your test script carefully.
+You should ensure that _any_ code run directly or indirectly by your test has completed.
+Do not wait only for the functions related to the precise feature you are trying to measure.
+
+This is to prevent introducing noise in the test run after yours. If any asynchronous code is pending,
+it is likely to run in parallel with the next test and increase its variance.
+Noise in the tests makes it hard to detect small regressions.
+
+You should typically wait for:
+* All RDP requests to finish,
+* All DOM Events to fire,
+* Redux actions to be dispatched,
+* React updates,
+* ...
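+
+For instance, waiting for a DOM event before ending the test could be sketched like this (`waitForEvent` is a hypothetical helper written inline, not part of head.js):
+```
+function waitForEvent(target, name) {
+  return new Promise(resolve => {
+    target.addEventListener(name, resolve, { once: true });
+  });
+}
+
+// Start listening *before* triggering the action, then await the event
+// so no pending work leaks into the next test.
+const onEvent = waitForEvent(someTarget, "some-event");
+triggerTheAction();
+await onEvent;
+```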
+
+
+## Ensure that the test's results change when regressing/fixing the code or feature you want to watch.
+
+If you are writing a new test to cover a recent regression and you have a patch to fix it, push your test to try without _and_ with the regression fix.
+Look at the try push and confirm that your fix actually reduces the duration of your perf test significantly.
+If you are introducing a test without any patch to improve the performance, try slowing down the code you are trying to cover with fake slowness, like a `setTimeout` for asynchronous code or a very slow `for` loop for synchronous code. This ensures your test would catch a significant regression.
+
+For our click performance test, we could do this from the inspector codebase:
+```
+window.addEventListener("click", function () {
+
+  // This for loop will fake a hang and should slow down the duration of our test
+  for (let i = 0; i < 100000000; i++) {}
+
+}, true); // pass `true` in order to execute before the test click listener
+```
+
+
+## Keep your test execution short.
+
+Running performance tests is expensive. We are currently running them 25 times for each changeset landed in Firefox.
+Aim to run tests in less than a second on try.
\ No newline at end of file
--- a/devtools/docs/tests/writing-perf-tests.md
+++ b/devtools/docs/tests/writing-perf-tests.md
@@ -1,132 +1,127 @@
 # Writing new performance test
 
 See [Performance tests (DAMP)](performance-tests.md) for an overall description of our performance tests.
-Here, we will describe how to write a new test with an example: track the performance of clicking inside the inspector panel.
+Here, we will describe how to write a new test and register it to run in DAMP.
 
-## Consider modifying existing tests first
+{% hint style="tip" %}
+
+**Reuse existing tests if possible!**
 
 If a `custom` page already exists for the tool you are testing, try to modify the existing `custom` test rather than adding a new individual test.
 
 New individual tests run separately, in new tabs, and make DAMP slower than just modifying existing tests. Making `custom` test pages more complex should also help cover more scenarios and catch more regressions. For those reasons, modifying existing tests should be the preferred way of extending DAMP coverage.
 
 `custom` tests are using complex documents that should stress a particular tool in various ways. They are all named `custom.${tool}` (for instance `custom.inspector`). The test pages for those tests can be found in [pages/custom](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/pages/custom).
 
 If your test case requires a dedicated document or can't run next to the other tests in the current `custom` test, follow the instructions below to add a new individual test.
 
-## Where is the code for the test?
+{% endhint %}
 
-For now, all the tests live in a single file [damp.js](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/damp.js).
+This page contains the general documentation for writing DAMP tests. See also:
+- [Performance test writing example](./writing-perf-tests-example.md) for a practical example of creating a new test
+- [Performance test writing tips](./writing-perf-tests-tips.md) for detailed tips on how to write a good and efficient test
+
+## Test location
 
-There are two kinds of tests:
-* The first kind is being run against two documents:
-  * "Simple", an empty webpage. This one helps highlighting the load time of panels,
-  * "Complicated", a copy of bild.be, a German newspaper website. This allows us to examine the performance of the tools when inspecting complicated, big websites.
+Tests are located in [testing/talos/talos/tests/devtools/addon/content/tests](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/tests). You will find subfolders for panels already tested in DAMP (debugger, inspector, …) as well as other subfolders for tests not specific to a given panel (server, toolbox).
 
-  To run your test against these two documents, add it to [this function](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.js#563-673).
+Tests are isolated in dedicated files. Some examples of tests:
+- [tests/netmonitor/simple.js](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/tests/netmonitor/simple.js)
+- [tests/inspector/mutations.js](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/tests/inspector/mutations.js)
+
+## Basic test
+
+The basic skeleton of a test is:
 
-  Look for `_getToolLoadingTests` function. There is one method per tool. Since we want to test how long does it take to click on the inspector, we will find the `inspector` method, and add the new test code there, like this:
-  ```
-  _getToolLoadingTests(url, label, { expectedMessages, expectedSources }) {
-    let tests = {
-      async inspector() {
-        await this.testSetup(url);
-        let toolbox = await this.openToolboxAndLog(label + ".inspector", "inspector");
-        await this.reloadInspectorAndLog(label, toolbox);
+```
+const {
+  testSetup,
+  testTeardown,
+  SIMPLE_URL,
+} = require("../head");
+
+module.exports = async function() {
+  await testSetup(SIMPLE_URL);
+
+  // Run some measures here
 
-        // <== here we are going to add some code to test "click" performance,
-        //     after the inspector is opened and after the page is reloaded.
+  await testTeardown();
+};
+```
+
+* always start the test by calling `testSetup(url)`, with the `url` of the document to use
+* always end the test with `testTeardown()`
+
 
-        await this.closeToolboxAndLog(label + ".inspector");
-        await this.testTeardown();
-      },
-  ```
-* The second kind isn't specific to any document. You can come up with your own test document, or not involve any document if you don't need one.
-  If that better fits your needs, you should introduce an independent test function. Like [this one](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.js#330-348) or [this other one](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.js#350-402). You also have to register the new test function you just introduced in this [`tests` object within `startTest` function](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.js#863-864).
+## Test documents
+
+DevTools performance heavily depends on the document against which DevTools are opened. There are two "historical" documents you can use for tests for any panel:
+* "Simple", an empty webpage. This one helps highlight the load time of panels.
+* "Complicated", a copy of bild.de, a German newspaper website. This allows us to examine the performance of the tools when inspecting complicated, big websites.
+
+The URLs of those documents are exposed by [tests/head.js](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/tests/head.js). The Simple page can be found at [testing/talos/talos/tests/devtools/addon/content/pages/simple.html](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/pages/simple.html). The Complicated page is downloaded via [tooltool](https://wiki.mozilla.org/ReleaseEngineering/Applications/Tooltool) automatically the first time you run the DAMP tests.
 
-  You could also use extremely simple documents specific to your test case like this:
-  ```
-  /**
-   * Measure the time necessary to click in the inspector panel
-   */
-  async _inspectorClickTest() {
-    // Define here your custom document via a data URI:
-    let url = "data:text/html,custom test document";
-    let tab = await this.testSetup(url);
-    let messageManager = tab.linkedBrowser.messageManager;
-    let toolbox = await this.openToolbox("inspector");
+You can also create new test documents under [testing/talos/talos/tests/devtools/addon/content/pages](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/pages). See the pages in the `custom` subfolder for instance. If you create a document in `pages/custom/mypanel/index.html`, the URL of the document in your tests should be `PAGES_BASE_URL + "custom/mypanel/index.html"`. The constant `PAGES_BASE_URL` is exposed by head.js.
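+
+For instance, a test against such a custom document could begin like this (the `custom/mypanel/index.html` path is only illustrative):
+```
+const { PAGES_BASE_URL, testSetup, testTeardown } = require("../head");
+
+module.exports = async function() {
+  await testSetup(PAGES_BASE_URL + "custom/mypanel/index.html");
+
+  // Run some measures here
+
+  await testTeardown();
+};
+```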
+
+Note that modifying any existing test document will most likely impact the baseline for existing tests.
+
+Finally, you can also create very simple test documents using data URLs. Test documents don't have to contain any specific markup or script to be valid DAMP test documents, so something as simple as `testSetup("data:text/html,my test document");` is valid.
+
+
+## Test helpers
 
-    // <= Here, you would write your test actions,
-    // after opening the inspector against a custom document
+Helper methods have been extracted in shared modules:
+* [tests/head.js](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/tests/head.js) for the most common ones
+* tests/{subfolder}/{subfolder}-helpers.js for folder-specific helpers ([example](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/tests/inspector/inspector-helpers.js))
+
+To measure something which is not covered by an existing helper, you should use `runTest`, exposed by head.js.
+
+```
+module.exports = async function() {
+  await testSetup(SIMPLE_URL);
+
+  // Calling `runTest` will immediately start recording your action duration.
+  // You can execute any necessary setup action you don't want to record before calling it.
+  const test = runTest(`mypanel.mytest.mymeasure`);
 
-    await this.closeToolbox();
-    await this.testTeardown();
-  },
-  ...
-  startTest(doneCallback, config) {
-    ...
-    // And you have to register the test in `startTest` function
-    // `tests` object is keyed by test names. So our test is named "inspector.click" here.
-    tests["inspector.click"] = this._inspectorClickTest;
-    ...
-  }
+  await doSomeThings(); // <== Do an action you want to record here
+
+  // Once your action is completed, call `runTest` returned object's `done` method.
+  // It will automatically record the action duration and appear in PerfHerder as a new subtest.
+  // It also creates markers in the profiler so that you can better inspect this action in
+  // profiler.firefox.com.
+  test.done();
+
+  await testTeardown();
+};
+```
 
-  ```
+If your measure is not simply the time spent by an asynchronous call (for instance computing an average, counting things…) there is a lower level helper called `logTestResult` which will directly log a value. See [this example](https://searchfox.org/mozilla-central/rev/325c1a707819602feff736f129cb36055ba6d94f/testing/talos/talos/tests/devtools/addon/content/tests/webconsole/streamlog.js#62).
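+
+A sketch of what this could look like, assuming a `logTestResult(name, value)` signature and hypothetical helper and test names:
+```
+// Time the same action several times instead of a single asynchronous call...
+const durations = [];
+for (let i = 0; i < 10; i++) {
+  const start = performance.now();
+  await doSomethingToMeasure(); // hypothetical action
+  durations.push(performance.now() - start);
+}
+
+// ...then log the aggregated value, which appears as a PerfHerder subtest.
+const average = durations.reduce((sum, d) => sum + d, 0) / durations.length;
+logTestResult("mypanel.mytest.average", average);
+```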
+
+
+## Test runner
 
-## How to name your test and register it?
+If you need to dive into the internals of the DAMP runner, most of the logic is in [testing/talos/talos/tests/devtools/addon/content/damp.js](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/damp.js).
+
+
+## How to name your test and register it?
+
+If a new test file was created, it needs to be registered in the test suite. To register the new test, add it in [damp-tests.js](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/damp-tests.js). This file acts as the manifest for the DAMP test suite.
 
 If you are writing a test executing against Simple and Complicated documents, your test name will look like: `(simple|complicated).${tool-name}.${test-name}`.
 So for our example, it would be `simple.inspector.click` and `complicated.inspector.click`.
 For independent tests that don't use the Simple or Complicated documents, the test name only needs to start with the tool name, if the test is specific to that tool.
 For the example, it would be `inspector.click`.
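+
+Putting it together, a damp-tests.js entry for such a test might look like this (values are illustrative):
+```
+  {
+    name: "inspector.click",
+    path: "inspector/click.js",
+    description: "Measure the time to click in the inspector",
+  },
+```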
 
-Once you come up with a name, you will have to register your test [here](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.html#12-42) and [here](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.html#44-71) with a short description of it.
-
-## How to write a performance test?
-
-When you write a performance test, in most cases, you only care about the time it takes to complete a very precise action.
-There is a `runTest` helper method that helps recording a precise action duration:
-```
-// Calling `runTest` will immediately start recording your action duration.
-// You can execute any necessary setup action you don't want to record before calling it.
-let test = this.runTest("my.test.name"); // `runTest` expects the test name as argument
-
-// <== Do an action you want to record here
-
-// Once your action is completed, call `runTest` returned object's `done` method.
-// It will automatically record the action duration and appear in PerfHerder as a new subtest.
-// It also creates markers in the profiler so that you can better inspect this action in
-// profiler.firefox.com.
-test.done();
-```
+In general, the test name should try to match the path of the test file. As you can see in damp-tests.js, this naming convention is not consistently followed. We have discrepancies for simple/complicated/custom tests, as well as for webconsole tests. This is largely for historical reasons.
 
-So for our click example it would be:
-```
-async inspector() {
-  await this.testSetup(url);
-  let toolbox = await this.openToolboxAndLog(label + ".inspector", "inspector");
-  await this.reloadInspectorAndLog(label, toolbox);
 
-  let inspector = toolbox.getPanel("inspector");
-  let window = inspector.panelWin; // Get inspector's panel window object
-  let body = window.document.body;
-
-  await new Promise(resolve => {
-    let test = this.runTest("inspector.click");
-    body.addEventListener("click", function () {
-      test.done();
-      resolve();
-    }, { once: true });
-    body.click();
-  });
-}
-```
-
-## How to run your new test?
+## How to run your new test?
 
 You can run any performance test with this command:
 ```
 ./mach talos-test --activeTests damp --subtest ${your-test-name}
 ```
 
 By default, it will run the test 25 times. In order to run it just once, do:
 ```
@@ -143,47 +138,8 @@ Also, you can record a profile while run
 `--geckoProfileEntries` defines the profiler buffer size, which needs to be large while recording performance tests.
 
 Once it is done executing, the profile lives in a zip file you have to uncompress like this:
 ```
 unzip testing/mozharness/build/blobber_upload_dir/profile_damp.zip
 ```
 Then you have to open [https://profiler.firefox.com/](https://profiler.firefox.com/) and manually load the profile file that lives here: `profile_damp/page_0_pagecycle_1/cycle_0.profile`
 
-## How to write a good performance test?
-
-### Verify that you wait for all asynchronous code
-
-If your test involves asynchronous code, which is very likely given the DevTools codebase, please review carefully your test script.
-You should ensure that _any_ code ran directly or indirectly by your test is completed.
-You should not only wait for the functions related to the very precise feature you are trying to measure.
-
-This is to prevent introducing noise in the test run after yours. If any asynchronous code is pending,
-it is likely to run in parallel with the next test and increase its variance.
-Noise in the tests makes it hard to detect small regressions.
-
-You should typically wait for:
-* All RDP requests to finish,
-* All DOM Events to fire,
-* Redux action to be dispatched,
-* React updates,
-* ...
-
-### Ensure that its results change when regressing/fixing the code or feature you want to watch.
-
-If you are writing the new test to cover a recent regression and you have a patch to fix it, push your test to try without _and_ with the regression fix.
-Look at the try push and confirm that your fix actually reduces the duration of your perf test significantly.
-If you are introducing a test without any patch to improve the performance, try slowing down the code you are trying to cover with a fake slowness like `setTimeout` for asynchronous code, or very slow `for` loop for synchronous code. This is to ensure your test would catch a significant regression.
-
-For our click performance test, we could do this from the inspector codebase:
-```
-window.addEventListener("click", function () {
-
-  // This for loop will fake a hang and should slow down the duration of our test
-  for (let i = 0; i < 100000000; i++) {}
-
-}, true); // pass `true` in order to execute before the test click listener
-```
-
-### Keep your test execution short.
-
-Running performance tests is expensive. We are currently running them 25 times for each changeset landed in Firefox.
-Aim to run tests in less than a second on try.