Backed out changeset 86dd98e41d38 (bug 1178754)
author: Carsten "Tomcat" Book <cbook@mozilla.com>
Tue, 30 Jun 2015 15:57:09 +0200
changeset 250699 8c8d3ad9a6c568a89a8d250cc57bd0caf8c25005
parent 250698 1a50ffaab0b6b127ba42cefae3c75da7aae77571
child 250700 44de6b9eeb77813cc0254bf6e7e763ab0c5a3714
push id: 61628
push user: cbook@mozilla.com
push date: Tue, 30 Jun 2015 13:59:08 +0000
treeherder: mozilla-inbound@44de6b9eeb77
bugs: 1178754
milestone: 42.0a1
backs out: 86dd98e41d38743eb94b19a26a4f9566eb5117d8
Backed out changeset 86dd98e41d38 (bug 1178754)
testing/web-platform/harness/README.rst
testing/web-platform/harness/docs/usage.rst
testing/web-platform/harness/test/test.cfg.example
testing/web-platform/harness/wptrunner/browsers/servodriver.py
testing/web-platform/harness/wptrunner/executors/executorservo.py
testing/web-platform/harness/wptrunner/metadata.py
testing/web-platform/harness/wptrunner/testloader.py
testing/web-platform/harness/wptrunner/update/sync.py
--- a/testing/web-platform/harness/README.rst
+++ b/testing/web-platform/harness/README.rst
@@ -20,37 +20,28 @@ After installation, the command ``wptrun
 the tests.
 
 The ``wptrunner`` command takes multiple options, of which the
 following are most significant:
 
 ``--product`` (defaults to `firefox`)
   The product to test against: `b2g`, `chrome`, `firefox`, or `servo`.
 
-``--binary`` (required if product is `firefox` or `servo`)
+``--binary`` (required)
   The path to a binary file for the product (browser) to test against.
 
-``--webdriver-binary`` (required if product is `chrome`)
-  The path to a `*driver` binary; e.g., a `chromedriver` binary.
-
-``--certutil-binary`` (required if product is `firefox` [#]_)
-  The path to a `certutil` binary (for tests that must be run over https).
-
 ``--metadata`` (required)
   The path to a directory containing test metadata. [#]_
 
 ``--tests`` (required)
   The path to a directory containing a web-platform-tests checkout.
 
 ``--prefs-root`` (required only when testing a Firefox binary)
   The path to a directory containing Firefox test-harness preferences. [#]_
 
-.. [#] The ``--certutil-binary`` option is required when the product is
-   ``firefox`` unless ``--ssl-type=none`` is specified.
-
 .. [#] The ``--metadata`` path is to a directory that contains:
 
   * a ``MANIFEST.json`` file (the web-platform-tests documentation has
     instructions on generating this file); and
   * (optionally) any expectation files (see below)
 
 .. [#] Example ``--prefs-root`` value: ``~/mozilla-central/testing/profiles``.
 
@@ -60,39 +51,36 @@ list them.
 -------------------------------
 Example: How to start wptrunner
 -------------------------------
 
 To test a Firefox Nightly build in an OS X environment, you might start
 wptrunner using something similar to the following example::
 
   wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
-    --binary=~/mozilla-central/obj-x86_64-apple-darwin14.3.0/dist/Nightly.app/Contents/MacOS/firefox \
-    --certutil-binary=~/mozilla-central/obj-x86_64-apple-darwin14.3.0/security/nss/cmd/certutil/certutil \
-    --prefs-root=~/mozilla-central/testing/profiles
+  --binary=~/mozilla-central/obj-x86_64-apple-darwin14.0.0/dist/Nightly.app/Contents/MacOS/firefox \
+  --prefs-root=~/mozilla-central/testing/profiles
 
 And to test a Chromium build in an OS X environment, you might start
 wptrunner using something similar to the following example::
 
   wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
-    --binary=~/chromium/src/out/Release/Chromium.app/Contents/MacOS/Chromium \
-    --webdriver-binary=/usr/local/bin/chromedriver --product=chrome
+  --binary=~/chromium/src/out/Release/Chromium.app/Contents/MacOS/Chromium \
+  --product=chrome
 
 -------------------------------------
 Example: How to run a subset of tests
 -------------------------------------
 
 To restrict a test run just to tests in a particular web-platform-tests
-subdirectory, specify the directory name in the positional arguments after
-the options; for example, run just the tests in the `dom` subdirectory::
+subdirectory, use ``--include`` with the directory name; for example::
 
   wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
-    --binary=/path/to/firefox --certutil-binary=/path/to/certutil \
-    --prefs-root=/path/to/testing/profiles \
-    dom
+  --binary=/path/to/firefox --prefs-root=/path/to/testing/profiles \
+  --include=dom
 
 Output
 ~~~~~~
 
 By default wptrunner just dumps its entire output as raw JSON messages
 to stdout. This is convenient for piping into other tools, but not ideal
 for humans reading the output.
 
@@ -102,19 +90,18 @@ either the path for a file to write the 
 "`-`" (a hyphen) to write the `mach`-formatted output to stdout.
 
 When using ``--log-mach``, output of the full raw JSON log is still
 available via the ``--log-raw`` option. So to output the full raw JSON
 log to a file and a human-readable summary to stdout, you might start
 wptrunner using something similar to the following example::
 
   wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
-    --binary=/path/to/firefox --certutil-binary=/path/to/certutil \
-    --prefs-root=/path/to/testing/profiles \
-    --log-raw=output.log --log-mach=-
+  --binary=/path/to/firefox --prefs-root=/path/to/testing/profiles \
+  --log-raw=output.log --log-mach=-
 
 Expectation Data
 ~~~~~~~~~~~~~~~~
 
 wptrunner is designed to be used in an environment where it is not
 just necessary to know which tests passed, but also to compare the results
 between runs. For this reason it is possible to store the results of a
 previous run in a set of ini-like "expectation files". This format is
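For reference, an expectation file in the ini-like format mentioned above looks roughly like the following; the test and subtest names here are hypothetical, and the conditional value syntax is the one wptrunner's expectation manifests use::

  [example-test.html]
    [a subtest that is known to fail]
      expected: FAIL
    [a subtest that times out on one platform]
      expected:
        if os == "linux": TIMEOUT
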
--- a/testing/web-platform/harness/docs/usage.rst
+++ b/testing/web-platform/harness/docs/usage.rst
@@ -51,37 +51,28 @@ Running the Tests
 -----------------
 
 A test run is started using the ``wptrunner`` command.  The command
 takes multiple options, of which the following are most significant:
 
 ``--product`` (defaults to `firefox`)
   The product to test against: `b2g`, `chrome`, `firefox`, or `servo`.
 
-``--binary`` (required if product is `firefox` or `servo`)
+``--binary`` (required)
   The path to a binary file for the product (browser) to test against.
 
-``--webdriver-binary`` (required if product is `chrome`)
-  The path to a `*driver` binary; e.g., a `chromedriver` binary.
-
-``--certutil-binary`` (required if product is `firefox` [#]_)
-  The path to a `certutil` binary (for tests that must be run over https).
-
 ``--metadata`` (required only when not `using default paths`_)
   The path to a directory containing test metadata. [#]_
 
 ``--tests`` (required only when not `using default paths`_)
   The path to a directory containing a web-platform-tests checkout.
 
 ``--prefs-root`` (required only when testing a Firefox binary)
   The path to a directory containing Firefox test-harness preferences. [#]_
 
-.. [#] The ``--certutil-binary`` option is required when the product is
-   ``firefox`` unless ``--ssl-type=none`` is specified.
-
 .. [#] The ``--metadata`` path is to a directory that contains:
 
   * a ``MANIFEST.json`` file (the web-platform-tests documentation has
     instructions on generating this file)
   * (optionally) any expectation files (see :ref:`wptupdate-label`)
 
 .. [#] Example ``--prefs-root`` value: ``~/mozilla-central/testing/profiles``.
 
@@ -93,40 +84,36 @@ The following examples show how to start
 ------------------
 Starting wptrunner
 ------------------
 
 To test a Firefox Nightly build in an OS X environment, you might start
 wptrunner using something similar to the following example::
 
   wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
-    --binary=~/mozilla-central/obj-x86_64-apple-darwin14.3.0/dist/Nightly.app/Contents/MacOS/firefox \
-    --certutil-binary=~/mozilla-central/obj-x86_64-apple-darwin14.3.0/security/nss/cmd/certutil/certutil \
+    --binary=~/mozilla-central/obj-x86_64-apple-darwin14.0.0/dist/Nightly.app/Contents/MacOS/firefox \
     --prefs-root=~/mozilla-central/testing/profiles
 
-
 And to test a Chromium build in an OS X environment, you might start
 wptrunner using something similar to the following example::
 
   wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
     --binary=~/chromium/src/out/Release/Chromium.app/Contents/MacOS/Chromium \
-    --webdriver-binary=/usr/local/bin/chromedriver --product=chrome
+    --product=chrome
 
 --------------------
 Running test subsets
 --------------------
 
 To restrict a test run just to tests in a particular web-platform-tests
-subdirectory, specify the directory name in the positional arguments after
-the options; for example, run just the tests in the `dom` subdirectory::
+subdirectory, use ``--include`` with the directory name; for example::
 
   wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
-    --binary=/path/to/firefox --certutil-binary=/path/to/certutil \
-    --prefs-root=/path/to/testing/profiles \
-    dom
+    --binary=/path/to/firefox --prefs-root=/path/to/testing/profiles \
+    --include=dom
 
 -------------------
 Running in parallel
 -------------------
 
 To speed up the testing process, use the ``--processes`` option to have
 wptrunner run multiple browser instances in parallel. For example, to
 have wptrunner attempt to run tests with six browser instances
--- a/testing/web-platform/harness/test/test.cfg.example
+++ b/testing/web-platform/harness/test/test.cfg.example
@@ -3,18 +3,14 @@ tests=/path/to/web-platform-tests/
 metadata=/path/to/web-platform-tests/
 ssl-type=none
 
 # [firefox]
 # binary=/path/to/firefox
 # prefs-root=/path/to/gecko-src/testing/profiles/
 
 # [servo]
-# binary=/path/to/servo-src/target/release/servo
-# exclude=testharness # Because it needs a special testharness.js
-
-# [servodriver]
-# binary=/path/to/servo-src/target/release/servo
+# binary=/path/to/servo-src/components/servo/target/servo
 # exclude=testharness # Because it needs a special testharness.js
 
 # [chrome]
 # binary=/path/to/chrome
-# webdriver-binary=/path/to/chromedriver
+# webdriver-binary=/path/to/chromedriver
\ No newline at end of file
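
The test.cfg example above is plain ini. As a rough standalone sketch (this is not wptrunner's own config loading, and it assumes the keys sit under explicit, uncommented section headers), such a file can be read with the Python 2 standard library::

  import ConfigParser  # Python 2 module name, matching the harness's vintage

  parser = ConfigParser.SafeConfigParser()
  parser.read("test.cfg")
  for section in parser.sections():
      for key, value in parser.items(section):
          print("%s.%s = %s" % (section, key, value))
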
--- a/testing/web-platform/harness/wptrunner/browsers/servodriver.py
+++ b/testing/web-platform/harness/wptrunner/browsers/servodriver.py
@@ -37,17 +37,17 @@ def check_args(**kwargs):
     require_arg(kwargs, "binary")
 
 
 def browser_kwargs(**kwargs):
     return {"binary": kwargs["binary"],
             "debug_info": kwargs["debug_info"]}
 
 
-def executor_kwargs(test_type, server_config, cache_manager, run_info_data, **kwargs):
+def executor_kwargs(test_type, server_config, cache_manager, **kwargs):
     rv = base_executor_kwargs(test_type, server_config,
                               cache_manager, **kwargs)
     return rv
 
 
 def env_options():
     return {"host": "web-platform.test",
             "bind_hostname": "true",
--- a/testing/web-platform/harness/wptrunner/executors/executorservo.py
+++ b/testing/web-platform/harness/wptrunner/executors/executorservo.py
@@ -57,19 +57,18 @@ class ServoTestharnessExecutor(ProcessTe
         except OSError:
             pass
         ProcessTestExecutor.teardown(self)
 
     def do_test(self, test):
         self.result_data = None
         self.result_flag = threading.Event()
 
-        debug_args, command = browser_command(self.binary,
-            ["--cpu", "--hard-fail", "-u", "Servo/wptrunner", "-z", self.test_url(test)],
-            self.debug_info)
+        debug_args, command = browser_command(self.binary, ["--cpu", "--hard-fail", "-z", self.test_url(test)],
+                                              self.debug_info)
 
         self.command = command
 
         if self.pause_after_test:
             self.command.remove("-z")
 
         self.command = debug_args + self.command
 
@@ -95,28 +94,25 @@ class ServoTestharnessExecutor(ProcessTe
             if not self.interactive and not self.pause_after_test:
                 wait_timeout = timeout + 5
                 self.result_flag.wait(wait_timeout)
             else:
                 wait_timeout = None
                 self.proc.wait()
 
             proc_is_running = True
-
-            if self.result_flag.is_set():
-                if self.result_data is not None:
-                    self.result_data["test"] = test.url
-                    result = self.convert_result(test, self.result_data)
-                else:
-                    self.proc.wait()
+            if self.result_flag.is_set() and self.result_data is not None:
+                self.result_data["test"] = test.url
+                result = self.convert_result(test, self.result_data)
+            else:
+                if self.proc.poll() is not None:
                     result = (test.result_cls("CRASH", None), [])
                     proc_is_running = False
-            else:
-                result = (test.result_cls("TIMEOUT", None), [])
-
+                else:
+                    result = (test.result_cls("TIMEOUT", None), [])
 
             if proc_is_running:
                 if self.pause_after_test:
                     self.logger.info("Pausing until the browser exits")
                     self.proc.wait()
                 else:
                     self.proc.kill()
         except KeyboardInterrupt:
@@ -185,18 +181,18 @@ class ServoRefTestExecutor(ProcessTestEx
         os.rmdir(self.tempdir)
         ProcessTestExecutor.teardown(self)
 
     def screenshot(self, test):
         full_url = self.test_url(test)
 
         with TempFilename(self.tempdir) as output_path:
             self.command = [self.binary, "--cpu", "--hard-fail", "--exit",
-                            "-u", "Servo/wptrunner", "-Z", "disable-text-aa",
-                            "--output=%s" % output_path, full_url]
+                            "-Z", "disable-text-aa", "--output=%s" % output_path,
+                            full_url]
 
             env = os.environ.copy()
             env["HOST_FILE"] = self.hosts_path
 
             self.proc = ProcessHandler(self.command,
                                        processOutputLine=[self.on_output],
                                        env=env)
 
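The restored branch above distinguishes three outcomes. As a simplified restatement of that decision (hypothetical names, not the executor's real attributes)::

  def classify(result_received, result_data, process_exited):
      # Mirrors the restored logic: a reported result wins; otherwise
      # a dead browser means CRASH and a live one means TIMEOUT.
      if result_received and result_data is not None:
          return "RESULT"  # the convert_result() path
      if process_exited:
          return "CRASH"
      return "TIMEOUT"

  assert classify(True, {"status": "OK"}, False) == "RESULT"
  assert classify(False, None, True) == "CRASH"
  assert classify(False, None, False) == "TIMEOUT"
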
--- a/testing/web-platform/harness/wptrunner/metadata.py
+++ b/testing/web-platform/harness/wptrunner/metadata.py
@@ -148,42 +148,27 @@ def update_from_logs(manifests, *log_fil
         for tree in manifest_expected.itervalues():
             for test in tree.iterchildren():
                 for subtest in test.iterchildren():
                     subtest.coalesce_expected()
                 test.coalesce_expected()
 
     return expected_map
 
-def directory_manifests(metadata_path):
-    rv = []
-    for dirpath, dirname, filenames in os.walk(metadata_path):
-        if "__dir__.ini" in filenames:
-            rel_path = os.path.relpath(dirpath, metadata_path)
-            rv.append(os.path.join(rel_path, "__dir__.ini"))
-    return rv
 
 def write_changes(metadata_path, expected_map):
     # First write the new manifest files to a temporary directory
     temp_path = tempfile.mkdtemp(dir=os.path.split(metadata_path)[0])
     write_new_expected(temp_path, expected_map)
 
-    # Keep all __dir__.ini files (these are not in expected_map because they
-    # aren't associated with a specific test)
-    keep_files = directory_manifests(metadata_path)
-
     # Copy all files in the root to the temporary location since
     # these cannot be ini files
-    keep_files.extend(item for item in os.listdir(metadata_path) if
-                      not os.path.isdir(os.path.join(metadata_path, item)))
-
+    keep_files = [item for item in os.listdir(metadata_path) if
+                  not os.path.isdir(os.path.join(metadata_path, item))]
     for item in keep_files:
-        dest_dir = os.path.dirname(os.path.join(temp_path, item))
-        if not os.path.exists(dest_dir):
-            os.makedirs(dest_dir)
         shutil.copyfile(os.path.join(metadata_path, item),
                         os.path.join(temp_path, item))
 
     # Then move the old manifest files to a new location
     temp_path_2 = metadata_path + str(uuid.uuid4())
     os.rename(metadata_path, temp_path_2)
     # Move the new files to the destination location and remove the old files
     os.rename(temp_path, metadata_path)
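
write_changes replaces the metadata directory by writing the new files into a temporary sibling directory and then swapping it in with two renames. A minimal standalone sketch of that pattern (a hypothetical helper, not wptrunner code)::

  import os
  import shutil
  import tempfile
  import uuid

  def replace_dir(path, populate):
      # Write the new contents into a sibling temp directory...
      temp_path = tempfile.mkdtemp(dir=os.path.dirname(path))
      populate(temp_path)
      # ...park the old directory under a unique name...
      old_path = path + str(uuid.uuid4())
      os.rename(path, old_path)
      # ...swap the new contents into place, then drop the old ones.
      os.rename(temp_path, path)
      shutil.rmtree(old_path)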
--- a/testing/web-platform/harness/wptrunner/testloader.py
+++ b/testing/web-platform/harness/wptrunner/testloader.py
@@ -491,17 +491,17 @@ class TestLoader(object):
         inherit_metadata = self.load_dir_metadata(test_manifest, metadata_path, test_path)
         test_metadata = manifestexpected.get_manifest(
             metadata_path, test_path, test_manifest.url_base, self.run_info)
         return inherit_metadata, test_metadata
 
     def iter_tests(self):
         manifest_items = []
 
-        for manifest in sorted(self.manifests.keys()):
+        for manifest in self.manifests.keys():
             manifest_iter = iterfilter(self.manifest_filters,
                                        manifest.itertypes(*self.test_types))
             manifest_items.extend(manifest_iter)
 
         if self.chunker is not None:
             manifest_items = self.chunker(manifest_items)
 
         for test_path, tests in manifest_items:
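
Dropping sorted() here means manifests are visited in plain dict order, which on Python 2 is arbitrary and can make the test ordering less predictable across runs; a toy illustration (not wptrunner objects)::

  manifests = {"html": 1, "dom": 2, "fetch": 3}
  print(list(manifests.keys()))    # arbitrary, interpreter-dependent order
  print(sorted(manifests.keys()))  # always ['dom', 'fetch', 'html']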
--- a/testing/web-platform/harness/wptrunner/update/sync.py
+++ b/testing/web-platform/harness/wptrunner/update/sync.py
@@ -119,35 +119,34 @@ class GetSyncTargetCommit(Step):
 
         state.sync_tree.checkout(state.sync_commit.sha1, state.local_branch, force=True)
         self.logger.debug("New base commit is %s" % state.sync_commit.sha1)
 
 
 class LoadManifest(Step):
     """Load the test manifest"""
 
-    provides = ["test_manifest", "manifest_path"]
+    provides = ["test_manifest"]
 
     def create(self, state):
-        from manifest import manifest
-        state.manifest_path = os.path.join(state.metadata_path, "MANIFEST.json")
-        # Conservatively always rebuild the manifest when doing a sync
-        old_manifest = manifest.load(state.tests_path, state.manifest_path)
-        state.test_manifest = manifest.Manifest(old_manifest.rev, "/")
+        state.test_manifest = testloader.ManifestLoader(state.tests_path).load_manifest(
+            state.tests_path, state.metadata_path,
+        )
 
 
 class UpdateManifest(Step):
     """Update the manifest to match the tests in the sync tree checkout"""
 
     provides = ["initial_rev"]
     def create(self, state):
         from manifest import manifest, update
-        state.initial_rev = state.test_manifest.rev
-        update.update(state.sync["path"], "/", state.test_manifest)
-        manifest.write(state.test_manifest, state.manifest_path)
+        test_manifest = state.test_manifest
+        state.initial_rev = test_manifest.rev
+        update.update(state.sync["path"], "/", test_manifest)
+        manifest.write(test_manifest, os.path.join(state.metadata_path, "MANIFEST.json"))
 
 
 class CopyWorkTree(Step):
     """Copy the sync tree over to the destination in the local tree"""
 
     def create(self, state):
         copy_wpt_tree(state.sync_tree,
                       state.tests_path)
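
All of the sync steps above share one protocol: a Step subclass names the state keys it provides and fills them in create(state). A minimal sketch of that shape (simplified, not the actual update-machinery base class)::

  class State(object):
      # Toy attribute bag standing in for the shared sync state.
      pass

  class Step(object):
      provides = []

      def run(self, state):
          self.create(state)
          for name in self.provides:  # check the step kept its promise
              assert hasattr(state, name), name

  class LoadThing(Step):
      provides = ["thing"]

      def create(self, state):
          state.thing = 42

  state = State()
  LoadThing().run(state)
  print(state.thing)  # 42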