Bug 1700774 - [taskgraph] Move test transforms to a 'transforms/test' directory, r=gbrown
author Andrew Halberstadt <ahal@mozilla.com>
Mon, 29 Nov 2021 15:52:19 +0000
changeset 600382 197932d5aecb1acade386773cf660c28ff0818e5
parent 600381 54664793b10f51a867ab1de9dde8c21bbf81d43c
child 600383 0d0151af2eceb0db881e9dc9c509a50acc75c599
push id 39021
push user ncsoregi@mozilla.com
push date Mon, 29 Nov 2021 21:53:24 +0000
treeherder mozilla-central@408b89a2e02c
Bug 1700774 - [taskgraph] Move test transforms to a 'transforms/test' directory, r=gbrown

This includes:

- transforms/tests.py -> transforms/test/__init__.py
- transforms/raptor.py -> transforms/test/raptor.py

This is a precursor to splitting the file up into multiple smaller files under the new 'test' transform directory.

Differential Revision: https://phabricator.services.mozilla.com/D132068
--- a/taskcluster/ci/test/kind.yml
+++ b/taskcluster/ci/test/kind.yml
@@ -7,24 +7,24 @@ loader: gecko_taskgraph.loader.test:load
     - build
     - build-signing
     - repackage
     - fetch
     - toolchain
-    - gecko_taskgraph.transforms.tests:transforms
+    - gecko_taskgraph.transforms.test:transforms
     - gecko_taskgraph.transforms.job:transforms
     - gecko_taskgraph.transforms.task:transforms
 # Each stanza in a file pointed to by 'jobs-from' describes a particular test
 # suite or sub-suite. These are processed through the transformations described
 # above to produce a bunch of tasks. See the schema in
-# `taskcluster/gecko_taskgraph/transforms/tests.py` for a description of the fields
+# `taskcluster/gecko_taskgraph/transforms/test/__init__.py` for a description of the fields
 # used in these files.
 # Adding a new test type or running tests on a new platform? Be sure to review
 # https://wiki.mozilla.org/Sheriffing/Job_Visibility_Policy
     - awsy.yml
     - browsertime-desktop.yml
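The comment above describes the shape of the stanzas in these 'jobs-from' files. A hypothetical entry might look roughly like this (the suite name and every value below are invented for illustration, not taken from awsy.yml or any real file):

```yaml
# Hypothetical test-suite stanza (all values invented):
example-suite:
    description: "Example test suite"
    suites: [example]
    treeherder-symbol: EX
    max-run-time: 3600
```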
--- a/taskcluster/docs/attributes.rst
+++ b/taskcluster/docs/attributes.rst
@@ -139,17 +139,17 @@ unittest_try_name
 This is the name used to refer to a unit test via try syntax.  It
 may not match ``unittest_suite``.
 The configuration variant the test suite is running with. If set, this usually
 means the tests are running with a special pref enabled. These are defined in
 This is the name used to refer to a talos job via try syntax.
--- a/taskcluster/docs/how-tos.rst
+++ b/taskcluster/docs/how-tos.rst
@@ -81,17 +81,17 @@ platform.  For example:
 .. code-block:: yaml
             linux64/debug: 10
             default: 8
 The full set of available properties is in
-``taskcluster/gecko_taskgraph/transforms/tests.py``.  Some other
+``taskcluster/gecko_taskgraph/transforms/test/__init__.py``.  Some other
 commonly-modified properties are ``max-run-time`` (useful if tests are being
 killed for exceeding maxRunTime) and ``treeherder-symbol``.
 .. note::
     Android tests are also chunked at the mozharness level, so you will need to
     modify the relevant mozharness config, as well.
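Putting the ``by-test-platform`` values shown above into context, a chunked property stanza could be sketched like this (the suite name and ``max-run-time`` value are hypothetical):

```yaml
# Hypothetical stanza showing per-platform chunking (names invented):
example-suite:
    chunks:
        by-test-platform:
            linux64/debug: 10
            default: 8
    max-run-time: 3600
```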
@@ -99,17 +99,17 @@ Adding a Test Suite
 To add a new test suite, you will need to know the proper mozharness invocation
 for that suite, and which kind it fits into (consult :doc:`kinds`).
 Add a new stanza to ``taskcluster/ci/<kind>/tests.yml``, copying from the other
 stanzas in that file.  The meanings should be clear, but authoritative
 documentation is in
-``taskcluster/gecko_taskgraph/transforms/tests.py`` should you need
+``taskcluster/gecko_taskgraph/transforms/test/__init__.py`` should you need
 it.  The stanza name is the name by which the test will be referenced in try
 Add your new test to a test set in ``test-sets.yml`` in the same directory.  If
 the test should only run on a limited set of platforms, you may need to define
 a new test set and reference that from the appropriate platforms in
 ``test-platforms.yml``.  If you do so, include some helpful comments in
 ``test-sets.yml`` for the next person.
--- a/taskcluster/docs/kinds/test.rst
+++ b/taskcluster/docs/kinds/test.rst
@@ -43,17 +43,17 @@ Variants
 Sometimes we want to run the same tests under a different Firefox context,
 usually this means with a pref set. The concept of ``variants`` was invented to
 handle this use case. A variant is a stanza of configuration that can be merged
 into each test definition. Variants are defined in the `variants.yml`_ file.
 See this file for an up to date list of active variants and the pref(s) they
 Each variant must conform to the
 * **description** (required) - A description explaining what the variant is for
 * **suffix** (required) - A suffix to apply to the task label and treeherder symbol
 * **contact** - Person to contact with questions around cost / capacity planning or
   relative priority.
 * **when** - A `json-e`_ expression that must evaluate to ``true`` for the variant
   to be applied. The ``task`` definition is passed in as context
 * **replace** - A dictionary that will overwrite keys in the task definition
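Assembling the keys listed above, a variant stanza might be sketched as follows (this is a hypothetical example: the variant name, pref, expression, and values are all invented, and the exact json-e encoding of ``when`` may differ from what variants.yml actually uses):

```yaml
# Hypothetical variant stanza (all values invented):
example-variant:
    description: "Run tests with an example pref enabled"
    suffix: "ex"
    contact: example-owner
    when:
        $eval: '"android" not in task["test-platform"]'
    replace:
        tier: 2
```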
@@ -118,20 +118,20 @@ 2. Build - Bits of information that desc
 3. Runtime - Bits of information that describe the configured state of Firefox.
    This includes things like prefs and environment variables. Note that tasks
    should only set runtime configuration via the variants system (see
 Test ``settings`` are available in the ``task.extra.test-setting`` object in
 all test tasks. They are defined by the
-:py:func:`~gecko_taskgraph.transforms.tests.set_test_setting` transform
+:py:func:`~gecko_taskgraph.transforms.test.set_test_setting` transform
 The full schema is defined in the
 Setting Hash
 In addition to the three top-level objects, there is also a ``_hash`` key which
 contains a hash of the rest of the setting object. This is a convenient way for
 consumers to group or compare tasks that run under the same setting.
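A minimal sketch of how such a ``_hash`` over the rest of the setting object could be computed — canonical JSON serialization hashed and truncated. The actual algorithm and truncation length used by the transform may differ:

```python
import hashlib
import json

def setting_hash(setting):
    """Stable hash over a settings object, computed from its canonical
    (key-sorted) JSON serialization. Illustrative only."""
    payload = json.dumps(setting, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()[:12]

# Key order does not affect the result, so equivalent settings compare equal:
setting = {"platform": {"os": "linux"}, "build": {"type": "opt"}, "runtime": {}}
print(setting_hash(setting))
```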
--- a/taskcluster/docs/transforms.rst
+++ b/taskcluster/docs/transforms.rst
@@ -101,18 +101,19 @@ Test Descriptions
 Test descriptions specify how to run a unittest or talos run.  They aim to
 describe this abstractly, although in many cases the unique nature of
 invocation on different platforms leaves a lot of specific behavior in the test
 description, divided by ``by-test-platform``.
 Test descriptions are validated to conform to the schema in
-``taskcluster/gecko_taskgraph/transforms/tests.py``.  This schema is extensively
-documented and is a the primary reference for anyone modifying tests.
+``taskcluster/gecko_taskgraph/transforms/test/__init__.py``.  This schema is
+extensively documented and is the primary reference for anyone modifying
 The output of ``tests.py`` is a task description.  Test dependencies are
 produced in the form of a dictionary mapping dependency name to task label.
 Job Descriptions
 A job description says what to run in the task.  It is a combination of a
--- a/taskcluster/gecko_taskgraph/test/python.ini
+++ b/taskcluster/gecko_taskgraph/test/python.ini
@@ -12,17 +12,17 @@ subsuite = taskgraph
rename from taskcluster/gecko_taskgraph/test/test_transforms_tests.py
rename to taskcluster/gecko_taskgraph/test/test_transforms_test.py
--- a/taskcluster/gecko_taskgraph/test/test_transforms_tests.py
+++ b/taskcluster/gecko_taskgraph/test/test_transforms_test.py
@@ -6,17 +6,17 @@ Tests for the 'tests.py' transforms
 import hashlib
 import json
 from functools import partial
 import mozunit
 import pytest
-from gecko_taskgraph.transforms import tests as test_transforms
+from gecko_taskgraph.transforms import test as test_transforms
 def make_test_task():
     """Create a test task definition with required default values."""
     def inner(**extra):
         task = {
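The hunk above shows only the opening of the factory helper. In the same spirit, a standalone sketch of a task-factory with overridable defaults might look like this (the default keys and values here are invented, not the real required fields):

```python
def make_test_task(**extra):
    """Build a test-task dict with default values, letting callers
    override any field via keyword arguments. Defaults are invented."""
    task = {
        "suite": "mochitest",
        "test-platform": "linux64/opt",
        "mozharness": {"extra-options": []},
    }
    task.update(extra)
    return task

# Overrides replace defaults; untouched keys keep their default values:
raptor_task = make_test_task(suite="raptor")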
--- a/taskcluster/gecko_taskgraph/transforms/job/mozharness_test.py
+++ b/taskcluster/gecko_taskgraph/transforms/job/mozharness_test.py
@@ -11,17 +11,17 @@ from voluptuous import Required, Optiona
 from gecko_taskgraph.util.taskcluster import get_artifact_url
 from gecko_taskgraph.transforms.job import (
 from gecko_taskgraph.util.schema import Schema
 from gecko_taskgraph.util.taskcluster import get_artifact_path
-from gecko_taskgraph.transforms.tests import test_description_schema, normpath
+from gecko_taskgraph.transforms.test import test_description_schema, normpath
 from gecko_taskgraph.transforms.job.common import support_vcs_checkout
rename from taskcluster/gecko_taskgraph/transforms/tests.py
rename to taskcluster/gecko_taskgraph/transforms/test/__init__.py
--- a/taskcluster/gecko_taskgraph/transforms/tests.py
+++ b/taskcluster/gecko_taskgraph/transforms/test/__init__.py
@@ -649,17 +649,17 @@ def resolve_keys(config, tasks):
         yield task
 def setup_raptor(config, tasks):
     """Add options that are specific to raptor jobs (identified by suite=raptor)"""
-    from gecko_taskgraph.transforms.raptor import transforms as raptor_transforms
+    from gecko_taskgraph.transforms.test.raptor import transforms as raptor_transforms
     for task in tasks:
         if task["suite"] != "raptor":
             yield task
         yield from raptor_transforms(config, [task])
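The hunk above elides some lines, but the visible code illustrates a delegation pattern: a transform generator passes non-matching tasks through and routes matching ones into a sub-transform sequence. A generic, self-contained sketch of that pattern (names simplified; not the real ``setup_raptor``):

```python
def delegate(tasks, predicate, sub_transform):
    """Yield non-matching tasks unchanged; route matching tasks
    one at a time through sub_transform. Illustrative sketch."""
    for task in tasks:
        if not predicate(task):
            yield task
            continue  # pass-through tasks skip the sub-transform
        yield from sub_transform([task])

tasks = [{"suite": "raptor"}, {"suite": "mochitest"}]
tag = lambda ts: ({**t, "tagged": True} for t in ts)
out = list(delegate(tasks, lambda t: t["suite"] == "raptor", tag))
```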
rename from taskcluster/gecko_taskgraph/transforms/raptor.py
rename to taskcluster/gecko_taskgraph/transforms/test/raptor.py
--- a/taskcluster/gecko_taskgraph/transforms/raptor.py
+++ b/taskcluster/gecko_taskgraph/transforms/test/raptor.py
@@ -7,17 +7,17 @@ from copy import deepcopy
 from voluptuous import (
 from gecko_taskgraph.transforms.base import TransformSequence
-from gecko_taskgraph.transforms.tests import test_description_schema
+from gecko_taskgraph.transforms.test import test_description_schema
 from gecko_taskgraph.util.schema import optionally_keyed_by, resolve_keyed_by, Schema
 from gecko_taskgraph.util.treeherder import split_symbol, join_symbol
 transforms = TransformSequence()
 raptor_description_schema = Schema(
         # Raptor specific configs.
--- a/tools/code-coverage/docs/index.rst
+++ b/tools/code-coverage/docs/index.rst
@@ -148,17 +148,17 @@ Debugging Failing Tests on the Try Serve
 When code coverage is run through a push to try, all the data that is
 created is ingested by ActiveData and processed into a different data
 format for analysis. Anytime a code coverage run generates \*.gcda and
 \*.gcno files, ActiveData starts working. Now, sometimes, a test will
 permanently fail when it is running on a build that is instrumented with
 GCOV. To debug these issues without overloading ActiveData with garbage
 coverage data, open the file
-`taskcluster/gecko_taskgraph/transforms/tests.py <https://searchfox.org/mozilla-central/source/taskcluster/gecko_taskgraph/transforms/tests.py#516>`__
+`taskcluster/gecko_taskgraph/transforms/test/__init__.py <https://searchfox.org/mozilla-central/source/taskcluster/gecko_taskgraph/transforms/test/__init__.py#516>`__
 and add the following line,
 .. code:: python
    test['mozharness'].setdefault('extra-options', []).append('--disable-ccov-upload')
 right after this line of code:
--- a/tools/lint/rejected-words.yml
+++ b/tools/lint/rejected-words.yml
@@ -303,17 +303,17 @@ avoid-blacklist-and-whitelist:
         - security/sandbox/linux/Sandbox.h
         - services/automation/ServicesAutomation.jsm
         - services/fxaccounts/FxAccountsCommon.js
         - services/fxaccounts/FxAccounts.jsm
         - services/sync/modules/engines/addons.js
         - taskcluster/ci/docker-image/kind.yml
         - taskcluster/gecko_taskgraph/actions/create_interactive.py
         - taskcluster/gecko_taskgraph/target_tasks.py
-        - taskcluster/gecko_taskgraph/transforms/tests.py
+        - taskcluster/gecko_taskgraph/transforms/test/__init__.py
         - taskcluster/gecko_taskgraph/try_option_syntax.py
         - taskcluster/gecko_taskgraph/util/schema.py
         - taskcluster/test/test_mach_try_auto.py
         - testing/condprofile/condprof/client.py
         - testing/condprofile/condprof/tests/profile/prefs.js
         - testing/condprofile/condprof/tests/test_client.py
         - testing/firefox-ui/tests/functional/safebrowsing/test_initial_download.py
         - testing/marionette/client/marionette_driver/wait.py