servo: Merge #19523 - Filter out failed test-perf runs (from asajeffrey:test-perf-filter-out-failed-tests); r=avadacatavra
author Alan Jeffrey <ajeffrey@mozilla.com>
Fri, 05 Jan 2018 09:15:48 -0600
changeset 397997 47a036638f47cc67baa95968c7232a7de1b9e83c
parent 397996 1584f3485995c6d0a0839f107b5bfa47990197f4
child 397998 28009c663a1d10cac6d86eac554b656a5b8d3e4d
push id 57602
push user servo-vcs-sync@mozilla.com
push date Fri, 05 Jan 2018 16:37:44 +0000
treeherder autoland@47a036638f47
reviewers avadacatavra
bugs 19523
milestone 59.0a1
servo: Merge #19523 - Filter out failed test-perf runs (from asajeffrey:test-perf-filter-out-failed-tests); r=avadacatavra

Google Data Studio is a lot happier if the data that's driving it is already filtered. This PR filters out any failed test runs from the CSV file generated by test-perf.

---

- [X] `./mach build -d` does not report any errors
- [X] `./mach test-tidy` does not report any errors
- [X] These changes do not require tests because this is test infrastructure

Source-Repo: https://github.com/servo/servo
Source-Revision: 753e2bc781ee3f9fd219b1f06ac332ec3077675c
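For context, here is a minimal, self-contained sketch of the filtering the patch below introduces. The helper name `write_successful_results` and the sample data are hypothetical; in the actual patch the filter lives inside `save_result_csv` in `etc/ci/performance/runner.py`, and a failed run is one whose `domComplete` value is `-1`, matching the check already used by `format_result_summary`.

```python
import csv

def write_successful_results(results, filename, fieldnames):
    # Keep only runs that completed; failed runs report domComplete == -1.
    successes = [r for r in results if r['domComplete'] != -1]
    with open(filename, 'w', encoding='utf-8') as csvfile:
        writer = csv.DictWriter(csvfile, fieldnames)
        writer.writeheader()
        writer.writerows(successes)

# Example (hypothetical data): only the first row reaches the CSV
# that Google Data Studio consumes.
results = [
    {'domComplete': 1234},   # successful run
    {'domComplete': -1},     # failed run, filtered out
]
write_successful_results(results, 'perf.csv', ['domComplete'])
```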
servo/etc/ci/performance/runner.py
--- a/servo/etc/ci/performance/runner.py
+++ b/servo/etc/ci/performance/runner.py
@@ -276,20 +276,22 @@ def save_result_csv(results, filename, m
         'requestStart',
         'responseEnd',
         'responseStart',
         'secureConnectionStart',
         'unloadEventEnd',
         'unloadEventStart',
     ]
 
+    successes = [r for r in results if r['domComplete'] != -1]
+
     with open(filename, 'w', encoding='utf-8') as csvfile:
         writer = csv.DictWriter(csvfile, fieldnames)
         writer.writeheader()
-        writer.writerows(results)
+        writer.writerows(successes)
 
 
 def format_result_summary(results):
     failures = list(filter(lambda x: x['domComplete'] == -1, results))
     result_log = """
 ========================================
 Total {total} tests; {suc} succeeded, {fail} failed.