Bug 1471767 - taskcluster documentation fixes, r=dustin
author Nick Thomas <nthomas@mozilla.com>
Wed, 27 Jun 2018 21:48:10 +1200
changeset 424340 e3d9b7024efa256b0835b0acdd807d75121bdbdf
parent 424339 a57560dbc9acd4315bb929b65463fe6ba1ce53a0
child 424341 4b14023b1a14b91260f62ff2ebbd76056bf91bd7
push id 104778
push user shindli@mozilla.com
push date Thu, 28 Jun 2018 23:25:48 +0000
Assorted fixes from trawling the sphinx logs - malformed formatting, broken references, leftovers from renaming action-task to action-callback and removing yaml-templates, docstring fixes to make sphinx happier, and typos. MozReview-Commit-ID: 6jUOljdLoE2
--- a/taskcluster/docs/caches.rst
+++ b/taskcluster/docs/caches.rst
@@ -82,17 +82,17 @@ Workspace Caches
    These caches (of various names typically ending with ``workspace``)
    contain state to be shared between task invocations. Use cases are
    dependent on the task.
    Tooltool invocations should use this cache. Tooltool will store files here
    indexed by their hash.
    This cache name pattern is reserved for use with ``run-task`` and must only
 be used by ``run-task``.
 ``tooltool-cache`` (deprecated)
    Legacy location for tooltool files. Use the per-level one instead.
--- a/taskcluster/docs/docker-images.rst
+++ b/taskcluster/docs/docker-images.rst
@@ -22,61 +22,66 @@ Dockerfile in each folder.
 Images can either be intended for pushing to a docker registry, or meant for
 local testing or for being built as an artifact when pushed to vcs.
 Task Images (build-on-push)
-Images can be uploaded as a task artifact, [indexed](#task-image-index-namespace) under
+Images can be uploaded as a task artifact, :ref:`indexed <task-image-index-namespace>` under
 a given namespace, and used in other tasks by referencing the task ID.
 Importantly, these images do not require building and pushing to a docker registry;
 they are built per push (if necessary) and uploaded as task artifacts.
-The decision task that is run per push will [determine](#context-directory-hashing)
+The decision task that is run per push will :ref:`determine <context-directory-hashing>`
 if the image needs to be built based on the hash of the context directory and if the image
 exists under the namespace for a given branch.
 As an additional convenience, and as a precaution against loading duplicate images per branch, if an image
 has been indexed with a given context hash for mozilla-central, any tasks requiring that image
 will use that indexed task.  This is to ensure there are not multiple images built/used
 that were built from the same context. In summary, if the image has been built for mozilla-central,
 pushes to any branch will use that already built image.
 To use within an in-tree task definition, the format is:
-  type: 'task-image'
-  path: 'public/image.tar.zst'
-  taskId: '{{#task_id_for_image}}builder{{/task_id_for_image}}'
+.. code-block:: yaml
+    image:
+        type: 'task-image'
+        path: 'public/image.tar.zst'
+        taskId: '<task_id_for_image_builder>'
+.. _context-directory-hashing:
 Context Directory Hashing
 Decision tasks will calculate the sha256 hash of the contents of the image
 directory and will determine if the image already exists for a given branch and hash
 or if a new image must be built and indexed.
 Note: this is the contents of *only* the context directory, not the
 image contents.
 The decision task will:
 1. Recursively collect the paths of all files within the context directory
 2. Sort the filenames alphabetically to ensure the hash is consistently calculated
-3. Generate a sha256 hash of the contents of each file.
-4. All file hashes will then be combined with their path and used to update the hash
-of the context directory.
+3. Generate a sha256 hash of the contents of each file
+4. All file hashes will then be combined with their path and used to update the
+   hash of the context directory
 This ensures that the hash is consistently calculated and path changes will result
 in different hashes being generated.
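The four steps above can be sketched in Python. The exact way the decision task combines each file hash with its path is an assumption here, but the shape of the algorithm follows the list:

```python
import hashlib
import os


def hash_context_directory(context_dir):
    """Sketch of context-directory hashing: collect all files, sort
    their paths, then fold each file's sha256 together with its
    relative path into a single directory hash."""
    # 1. Recursively collect the paths of all files in the context directory
    paths = []
    for root, _dirs, files in os.walk(context_dir):
        for name in files:
            paths.append(os.path.join(root, name))
    # 2. Sort so the hash is calculated consistently
    paths.sort()
    ctx_hash = hashlib.sha256()
    for path in paths:
        # 3. Generate a sha256 hash of the contents of each file
        with open(path, 'rb') as fh:
            file_hash = hashlib.sha256(fh.read()).hexdigest()
        # 4. Combine each file hash with its relative path to update the
        #    directory hash (the combination format is illustrative)
        relative = os.path.relpath(path, context_dir)
        ctx_hash.update('{} {}\n'.format(file_hash, relative).encode('utf-8'))
    return ctx_hash.hexdigest()
```

Because the path is folded in alongside the content hash, renaming a file changes the directory hash even when its contents do not.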
+.. _task-image-index-namespace:
 Task Image Index Namespace
 Images that are built on push and uploaded as an artifact of a task will be indexed under the
 following namespaces.
 * gecko.cache.level-{level}.docker.v2.{name}.hash.{digest}
 * gecko.cache.level-{level}.docker.v2.{name}.latest
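For illustration, a tiny helper (not taskgraph code) that composes these two namespace patterns:

```python
def docker_index_namespace(level, name, digest=None):
    """Build the gecko.cache index namespace for a docker image, per
    the patterns above; digest=None gives the 'latest' route."""
    base = 'gecko.cache.level-{}.docker.v2.{}'.format(level, name)
    if digest is None:
        return base + '.latest'
    return base + '.hash.' + digest
```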
@@ -100,17 +105,17 @@ Example:
 .. code-block:: none
     image: taskcluster/decision:0.1.10@sha256:c5451ee6c655b3d97d4baa3b0e29a5115f23e0991d4f7f36d2a8f793076d6854
 Each image has a repo digest, an image hash, and a version. The repo digest is
 stored in the ``HASH`` file in the image directory and used to refer to the
 image as above.  The version is in ``VERSION``.  The image hash is used in
-`chain-of-trust verification <http://scriptworker.readthedocs.io/en/latest/chain_of_trust.html>`
+`chain-of-trust verification <https://scriptworker.readthedocs.io/en/latest/chain_of_trust.html>`_
 in `scriptworker <https://github.com/mozilla-releng/scriptworker>`_.
 The version file only serves to provide convenient names, such that old
 versions are easy to discover in the registry (and to ensure old versions aren't
 deleted by garbage-collection).
 Each image directory also has a ``REGISTRY``, defaulting to the ``REGISTRY`` in
 the ``taskcluster/docker`` directory, and specifying the image registry to
@@ -163,17 +168,17 @@ Landing docker registry images takes a l
 Once a new version of the image has been built and tested locally, push it to
 the docker registry and make note of the resulting repo digest.  Put this value
 in the ``HASH`` file, and update any references to the image in the code or
 task definitions.
 The change is now safe to use in Try pushes.  However, if the image is used in
 building releases then it is *not* safe to land to an integration branch until
-the whitelists in `scriptworker
+the whitelists in `scriptworker's constants.py
 have also been updated. These whitelists use the image hash, not the repo digest.
 Special Dockerfile Syntax
 Dockerfile syntax has been extended to allow *any* file from the
--- a/taskcluster/docs/how-tos.rst
+++ b/taskcluster/docs/how-tos.rst
@@ -52,17 +52,17 @@ 4. When you are satisfied with the chang
 Hacking Actions
 If you are working on an action task and wish to test it out locally, use the
 ``./mach taskgraph test-action-callback`` command:
    .. code-block:: none
-        ./mach taskgraph test-action-task \
+        ./mach taskgraph test-action-callback \
             --task-id I4gu9KDmSZWu3KHx6ba6tw --task-group-id sMO4ybV9Qb2tmcI1sDHClQ \
             --input input.yml hello_world_action
 This invocation will run the hello world callback with the given inputs and
 print any created tasks to stdout, rather than actually creating them.
 Common Changes
--- a/taskcluster/docs/kinds.rst
+++ b/taskcluster/docs/kinds.rst
@@ -154,21 +154,17 @@ Tasks of the ``docker-image`` kind build
 Docker tasks run.
 The tasks to generate each docker image have predictable labels:
 Docker images are built from subdirectories of ``taskcluster/docker``, using
 ``docker build``.  There is currently no capability for one Docker image to
 depend on another in-tree docker image, without uploading the latter to a
-Docker repository
-The task definition used to create the image-building tasks is given in
-``image.yml`` in the kind directory, and is interpreted as a :doc:`YAML
-Template <yaml-templates>`.
+Docker repository.
 Balrog tasks are responsible for submitting metadata to our update server (Balrog).
 They are typically downstream of a beetmover job that moves signed MARs somewhere
 (eg: beetmover and beetmover-l10n for releases, beetmover-repackage for nightlies).
--- a/taskcluster/docs/loading.rst
+++ b/taskcluster/docs/loading.rst
@@ -11,17 +11,17 @@ a Python function like::
 The ``kind`` is the name of the kind; the configuration for that kind
 named this class.
 The ``path`` is the path to the configuration directory for the kind. This
 can be used to load extra data, templates, etc.
 The ``parameters`` give details on which to base the task generation. See
-:ref:`parameters` for details.
+:doc:`parameters` for details.
 At the time this method is called, all kinds on which this kind depends
 (that is, specified in the ``kind-dependencies`` key in ``config``)
 have already loaded their tasks, and those tasks are available in
 the list ``loaded_tasks``.
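The loader protocol described here can be sketched as a minimal example; the ``jobs`` config key and the item layout are illustrative assumptions, since each kind defines its own input format for its transforms:

```python
def loader(kind, path, config, parameters, loaded_tasks):
    """Minimal sketch of a kind loader: yield one input item per job
    defined in the kind's configuration.  Real loaders may also consult
    `parameters` and the already-generated `loaded_tasks`."""
    for name, job in config.get('jobs', {}).items():
        item = dict(job)
        item['name'] = name
        item['kind'] = kind
        yield item
```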
 The return value is a list of inputs to the transforms listed in the kind's
 ``transforms`` property. The specific format for the input depends on the first
 transform in that list.
--- a/taskcluster/docs/mach.rst
+++ b/taskcluster/docs/mach.rst
@@ -17,17 +17,17 @@ graph-generation process and output the 
 ``mach taskgraph target-graph``
    Get the target task graph
 ``mach taskgraph optimized``
    Get the optimized task graph
 ``mach taskgraph morphed``
-   Get the morhped task graph
+   Get the morphed task graph
 See :doc:`how-tos` for further practical tips on debugging task-graph mechanics.
 Each of these commands takes an optional ``--parameters`` argument giving a file
--- a/taskcluster/docs/optimization-schedules.rst
+++ b/taskcluster/docs/optimization-schedules.rst
@@ -57,33 +57,33 @@ Specification
 Components are defined as either inclusive or exclusive in :py:mod:`mozbuild.schedules`.
 File Annotation
 Files are annotated with their affected components in ``moz.build`` files with stanzas like ::
     with Files('**/*.py'):
         SCHEDULES.inclusive += ['py-lint']
 for inclusive components and ::
     with Files('*gradle*'):
         SCHEDULES.exclusive = ['android']
 for exclusive components.
 Note the use of ``+=`` for inclusive components (as this is adding to the existing set of affected components) but ``=`` for exclusive components (as this is resetting the affected set to something smaller).
 For cases where an inclusive component is affected exclusively (such as the python-lint configuration in the example above), that component can be assigned to ``SCHEDULES.exclusive``::
     with Files('**/pep8rc'):
         SCHEDULES.exclusive = ['py-lint']
-If multiple stanzas set ``SCHEDULES.exclusive``, the last one will take precedence.  Thus the following will set ``SCHEDULES.exclusive`` to ``hpux`` for all files except those under ``docs/``.
+If multiple stanzas set ``SCHEDULES.exclusive``, the last one will take precedence.  Thus the following
+will set ``SCHEDULES.exclusive`` to ``hpux`` for all files except those under ``docs/``. ::
     with Files('**'):
         SCHEDULES.exclusive = ['hpux']
     with Files('**/docs'):
         SCHEDULES.exclusive = ['docs']
 Task Annotation
--- a/taskcluster/docs/taskcluster-config.rst
+++ b/taskcluster/docs/taskcluster-config.rst
@@ -27,9 +27,9 @@ Part of the landing process is for someo
 You can test your patches with something like this, assuming ``.`` is a checkout of the `ci-configuration`_ repository containing your changes:
 .. code-block:: shell
   ci-admin diff --ci-configuration-directory .
 .. _ci-configuration: https://hg.mozilla.org/build/ci-configuration/file
-.. _ci-configuration: https://hg.mozilla.org/build/ci-admin/file
+.. _ci-admin: https://hg.mozilla.org/build/ci-admin/file
--- a/taskcluster/docs/taskgraph.rst
+++ b/taskcluster/docs/taskgraph.rst
@@ -92,17 +92,17 @@ Graph generation, as run via ``mach task
    result is the "full task graph".
 #. Filter the target tasks (based on a series of filters, such as try syntax,
    tree-specific specifications, etc). The result is the "target task set".
 #. Based on the full task graph, calculate the transitive closure of the target
    task set.  That is, the target tasks and all requirements of those tasks.
    The result is the "target task graph".
 #. Optimize the target task graph using task-specific optimization methods.
    The result is the "optimized task graph" with fewer nodes than the target
-   task graph.  See :ref:`optimization`.
+   task graph.  See :doc:`optimization`.
 #. Morph the graph. Morphs are like syntactic sugar: they keep the same meaning,
    but express it in a lower-level way. These generally work around limitations
    in the TaskCluster platform, such as number of dependencies or routes in
    a task.
 #. Create tasks for all tasks in the morphed task graph.
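The transitive-closure step above can be sketched as a simple worklist traversal over the full task graph, represented here as a plain label-to-dependencies mapping (an assumption for illustration; the real graph type is richer):

```python
def transitive_closure(target_tasks, full_task_graph):
    """Given the target task labels and the full task graph (mapping
    each label to the labels it depends on), return the target tasks
    plus all tasks they transitively require."""
    closure = set()
    queue = list(target_tasks)
    while queue:
        label = queue.pop()
        if label in closure:
            continue
        closure.add(label)
        # follow dependency edges in the full task graph
        queue.extend(full_task_graph.get(label, ()))
    return closure
```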
 Transitive Closure
@@ -132,17 +132,17 @@ Action Tasks
 Action Tasks are tasks which help you to schedule new jobs via Treeherder's
 "Add New Jobs" feature. The Decision Task creates a YAML file named
 ``action.yml`` which can be used to schedule Action Tasks after suitably replacing
 ``{{decision_task_id}}`` and ``{{task_labels}}``, which correspond to the decision
 task ID of the push and a comma-separated list of task labels which need to be scheduled.
-This task invokes ``mach taskgraph action-task`` which builds up a task graph of
+This task invokes ``mach taskgraph action-callback`` which builds up a task graph of
 the requested tasks. This graph is optimized against the tasks already running in
 the same push, which were created by the decision task.
 So for instance, if you had already requested a build task in the ``try`` command,
 and you wish to add a test which depends on this build, the original build task
 is re-used.
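The placeholder substitution described above amounts to plain string replacement; a minimal sketch (the helper name is hypothetical):

```python
def fill_action_yml(template, decision_task_id, task_labels):
    """Replace the two action.yml placeholders with real values;
    task_labels is joined into the comma-separated list the template
    expects."""
    return (template
            .replace('{{decision_task_id}}', decision_task_id)
            .replace('{{task_labels}}', ','.join(task_labels)))
```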
@@ -178,26 +178,26 @@ using simple parameterized values, as fo
 .. _taskgraph-graph-config:
 Graph Configuration
 There are several configuration settings that pertain to the entire
 taskgraph. These are specified in :file:`config.yml` at the root of the
-taskgraph configuration (typically :file:`taskcluster/ci`). The available
+taskgraph configuration (typically :file:`taskcluster/ci/`). The available
 settings are documented inline in ``taskcluster/taskgraph/config.py``.
 .. _taskgraph-trust-domain:
 Trust Domain
 When publishing and signing releases, tasks must verify that their definition and
 all upstream tasks come from a decision task based on a trusted tree (see
-`chain-of-trust verification <http://scriptworker.readthedocs.io/en/latest/chain_of_trust.html>`_).
+`chain-of-trust verification <https://scriptworker.readthedocs.io/en/latest/chain_of_trust.html>`_).
 Firefox and Thunderbird share the taskgraph code but have separate taskgraph
 configurations and, in particular, distinct decision tasks.
 Although they use identical docker images and toolchains, in order to track the
 provenance of those artifacts when verifying the chain of trust, they use
 different index paths to cache those artifacts. The ``trust-domain`` graph
 configuration controls the base path for indexing these cached artifacts.
--- a/taskcluster/docs/transforms.rst
+++ b/taskcluster/docs/transforms.rst
@@ -18,17 +18,17 @@ items up into smaller items (for example
 transforms rewrite the items entirely, with the final result being a task definition.
 Transform Functions
 Each transformation looks like this:
-.. code-block::
+.. code-block:: python
     def transform_an_item(config, items):
         """This transform ..."""  # always a docstring!
         for item in items:
             # ..
             yield item
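To see how such transforms are chained, here is a runnable sketch; the ``TransformSequence`` class below is a simplified stand-in for ``taskgraph.transforms.base.TransformSequence``, not the real implementation:

```python
class TransformSequence(object):
    """Simplified stand-in: call each registered transform in order,
    threading the item stream through all of them."""

    def __init__(self):
        self._transforms = []

    def add(self, func):
        self._transforms.append(func)
        return func

    def __call__(self, config, items):
        for func in self._transforms:
            items = func(config, items)
        return iter(items)


transforms = TransformSequence()


@transforms.add
def add_description(config, items):
    """Example transform: fill in a default description."""
    for item in items:
        item.setdefault('description', 'no description')
        yield item
```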
--- a/taskcluster/taskgraph/transforms/job/__init__.py
+++ b/taskcluster/taskgraph/transforms/job/__init__.py
@@ -178,18 +178,18 @@ def make_task_description(config, jobs):
 registry = {}
 def run_job_using(worker_implementation, run_using, schema=None, defaults={}):
     """Register the decorated function as able to set up a task description for
     jobs with the given worker implementation and `run.using` property.  If
     `schema` is given, the job's run field will be verified to match it.
-    The decorated function should have the signature `using_foo(config, job,
-    taskdesc) and should modify the task description in-place.  The skeleton of
+    The decorated function should have the signature `using_foo(config, job, taskdesc)`
+    and should modify the task description in-place.  The skeleton of
     the task description is already set up, but without a payload."""
     def wrap(func):
         for_run_using = registry.setdefault(run_using, {})
         if worker_implementation in for_run_using:
             raise Exception("run_job_using({!r}, {!r}) already exists: {!r}".format(
                 run_using, worker_implementation, for_run_using[run_using]))
         for_run_using[worker_implementation] = (func, schema, defaults)
         return func
     return wrap
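As a usage sketch, a hypothetical handler for ``run.using: echo`` jobs might register itself like this. The registry and decorator are re-declared in simplified form so the example is self-contained; the worker implementation and run name are illustrative, not real job kinds:

```python
registry = {}


def run_job_using(worker_implementation, run_using, schema=None, defaults={}):
    """Simplified copy of the decorator above, for a runnable example."""
    def wrap(func):
        for_run_using = registry.setdefault(run_using, {})
        if worker_implementation in for_run_using:
            raise Exception("already registered")
        for_run_using[worker_implementation] = (func, schema, defaults)
        return func
    return wrap


@run_job_using('docker-worker', 'echo')
def echo_task(config, job, taskdesc):
    """Hypothetical handler for run.using: echo jobs; modifies the
    task description in place, as required."""
    taskdesc['worker'] = {
        'implementation': 'docker-worker',
        'command': ['echo', job['run'].get('message', '')],
    }
```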
--- a/taskcluster/taskgraph/transforms/upload_generated_sources.py
+++ b/taskcluster/taskgraph/transforms/upload_generated_sources.py
@@ -1,15 +1,14 @@
 # This Source Code Form is subject to the terms of the Mozilla Public
 # License, v. 2.0. If a copy of the MPL was not distributed with this
 # file, You can obtain one at http://mozilla.org/MPL/2.0/.
 Transform the upload-generated-files task description template,
-  taskcluster/ci/upload-generated-sources/kind.yml
-into an actual task description.
+taskcluster/ci/upload-generated-sources/kind.yml, into an actual task description.
 from __future__ import absolute_import, print_function, unicode_literals
 from taskgraph.transforms.base import TransformSequence
 from taskgraph.util.taskcluster import get_artifact_url
--- a/taskcluster/taskgraph/transforms/upload_symbols.py
+++ b/taskcluster/taskgraph/transforms/upload_symbols.py
@@ -1,15 +1,14 @@
 # This Source Code Form is subject to the terms of the Mozilla Public
 # License, v. 2.0. If a copy of the MPL was not distributed with this
 # file, You can obtain one at http://mozilla.org/MPL/2.0/.
 Transform the upload-symbols task description template,
-  taskcluster/ci/upload-symbols/job-template.yml
-into an actual task description.
+taskcluster/ci/upload-symbols/job-template.yml into an actual task description.
 from __future__ import absolute_import, print_function, unicode_literals
 from taskgraph.transforms.base import TransformSequence
 transforms = TransformSequence()
--- a/taskcluster/taskgraph/util/schema.py
+++ b/taskcluster/taskgraph/util/schema.py
@@ -70,17 +70,17 @@ def resolve_keyed_by(item, field, item_n
             job:
                 test-platform: linux128
                 chunks:
                     by-test-platform:
                         macosx-10.11/debug: 13
                         win.*: 6
                         default: 12
-    a call to `resolve_keyed_by(item, 'job.chunks', item['thing-name'])
+    a call to `resolve_keyed_by(item, 'job.chunks', item['thing-name'])`
     would mutate item in-place to::
             chunks: 12
     The `item_name` parameter is used to generate useful error messages.
     If extra_values are supplied, they represent additional values available