Bug 1714788 - Fixed more Sphinx warnings in 'mach doc' r=sylvestre DONTBUILD
author: surajeet310 <surajeet310@gmail.com>
date: Thu, 10 Jun 2021 19:33:53 +0000
changeset 582727 783ea69c9ac2a4094d522a1f4bd6915862703341
parent 582726 10c4df33bc803ff408ba1f40e03ee1e7448b5eb0
child 582728 7cede79b33b2a7fb92b10b9580f721e858bc8475
push id: 38531
push user: mlaza@mozilla.com
push date: Fri, 11 Jun 2021 09:42:05 +0000
treeherder: mozilla-central@47cad7171df3
Differential Revision: https://phabricator.services.mozilla.com/D117419
--- a/build/docs/supported-configurations.rst
+++ b/build/docs/supported-configurations.rst
@@ -82,17 +82,17 @@ Supported Build Targets
 Tier-1 Targets
 The term **"Tier-1 platform"** refers to those platforms - CPU
 architectures and operating systems - that are the primary focus of
 Firefox development efforts. Tier-1 platforms are fully supported by
 Mozilla's `continuous integration processes <https://treeherder.mozilla.org/>`__ and the
-:ref:`Try Server`. Any proposed change to Firefox on these
+:ref:`Pushing to Try`. Any proposed change to Firefox on these
 platforms that results in build failures, test failures, performance
 regressions or other major problems **will be reverted immediately**.
 The **Tier-1 Firefox platforms** and their supported compilers are:
 -  Android on Linux x86, x86-64, ARMv7 and ARMv8-A (clang)
 -  Linux/x86 and x86-64 (gcc and clang)
--- a/docs/code-quality/coding-style/coding_style_cpp.rst
+++ b/docs/code-quality/coding-style/coding_style_cpp.rst
@@ -695,17 +695,17 @@ Strings
    i.e. ``u""_ns`` or ``""_ns``, instead of ``Empty[C]String()`` or
    ``const nsAuto[C]String empty;``. Use ``Empty[C]String()`` only if you
    specifically need a ``const ns[C]String&``, e.g. with the ternary operator
    or when you need to return/bind to a reference or take the address of the
    empty string.
 -  For 16-bit literal strings, use ``u"..."_ns`` or, if necessary,
    ``NS_LITERAL_STRING_FROM_CSTRING(...)`` instead of ``nsAutoString()``
    or other ways that would do a run-time conversion.
-   See :ref:`Avoid runtime conversion of string literals` below.
+   See :ref:`Avoid runtime conversion of string literals <Avoid runtime conversion of string literals>` below.
 -  To compare a string with a literal, use ``.EqualsLiteral("...")``.
 -  Use ``str.IsEmpty()`` instead of ``str.Length() == 0``.
 -  Use ``str.Truncate()`` instead of ``str.SetLength(0)``,
    ``str.Assign(""_ns)`` or ``str.AssignLiteral("")``.
 -  Don't use functions from ``ctype.h`` (``isdigit()``, ``isalpha()``,
    etc.) or from ``strings.h`` (``strcasecmp()``, ``strncasecmp()``).
    These are locale-sensitive, which makes them inappropriate for
    processing protocol text. At the same time, they are too limited to
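The locale pitfall called out above can be illustrated with plain, locale-independent ASCII helpers in standard C++. This is only a sketch; Gecko ships its own equivalents (similar helpers live in mfbt), and the names below are illustrative, not Gecko's:

```cpp
#include <cstddef>

// Locale-independent ASCII classification: unlike isdigit()/isalpha()
// from <ctype.h>, these answer the same way regardless of the C locale,
// which is what protocol text processing requires.
constexpr bool IsAsciiDigit(char32_t c) { return c >= '0' && c <= '9'; }
constexpr bool IsAsciiAlpha(char32_t c) {
  return (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z');
}

constexpr char32_t ToLowerAscii(char32_t c) {
  return (c >= 'A' && c <= 'Z') ? c + ('a' - 'A') : c;
}

// Case-insensitive ASCII comparison, a locale-safe stand-in for the
// locale-sensitive strcasecmp() from <strings.h>.
inline bool EqualsIgnoreAsciiCase(const char* a, const char* b) {
  while (*a && *b) {
    if (ToLowerAscii(*a) != ToLowerAscii(*b)) return false;
    ++a;
    ++b;
  }
  return *a == *b;  // equal only if both strings ended together
}
```

For example, ``EqualsIgnoreAsciiCase("Content-Type", "content-type")`` holds in every locale, which is exactly the property protocol parsing needs.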
@@ -797,16 +797,17 @@ Free the string manually:
      return resultString.ToNewCString();
      char* warning = GetStringValue();
+.. _Avoid runtime conversion of string literals:
 Avoid runtime conversion of string literals
 It is very common to need to assign the value of a literal string, such
 as ``"Some String"``, into a unicode buffer. Instead of using ``nsString``'s
 ``AssignLiteral`` and ``AppendLiteral``, use a user-defined literal like `u"foo"_ns`
 instead. On most platforms, this will force the compiler to compile in a
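As a rough standalone illustration of why a user-defined literal avoids the run-time conversion: the suffix operator can run at compile time, so the length and UTF-16 data are baked into the binary. This is not Gecko's actual ``_ns`` implementation (which produces Gecko string types); it is a sketch using ``std::u16string_view``:

```cpp
#include <cstddef>
#include <string_view>

// Illustrative analog of a u"..."_ns literal: a constexpr user-defined
// literal wrapping a UTF-16 string literal in a view, so no run-time
// conversion or allocation happens when "assigning" literal text.
constexpr std::u16string_view operator""_ns(const char16_t* s,
                                            std::size_t n) {
  return std::u16string_view(s, n);
}

// The literal's length is known at compile time:
constexpr auto kGreeting = u"Some String"_ns;
static_assert(kGreeting.size() == 11);
```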
--- a/docs/code-quality/index.rst
+++ b/docs/code-quality/index.rst
@@ -26,17 +26,17 @@ In this document, we try to list these a
    * - Custom clang checker
      - `Source <https://searchfox.org/mozilla-central/source/build/clang-plugin>`_
    * - Clang-Tidy
      - Yes
      - `bug 712350 <https://bugzilla.mozilla.org/show_bug.cgi?id=712350>`__
-     - :ref:`Static analysis <Mach static analysis>`
+     - :ref:`Static analysis <Static Analysis>`
      - https://clang.llvm.org/extra/clang-tidy/checks/list.html
    * - Clang analyzer
      - `bug 712350 <https://bugzilla.mozilla.org/show_bug.cgi?id=712350>`__
      - https://clang-analyzer.llvm.org/
    * - Coverity
--- a/docs/contributing/committing_rules_and_responsibilities.rst
+++ b/docs/contributing/committing_rules_and_responsibilities.rst
@@ -125,17 +125,17 @@ fixes.
 If the tree is marked as "closed", or if you have questions about any
 oranges or reds, you should contact the sheriff before checking in.
 Failures and backouts
 Patches which cause unit test failures (on :ref:`tier 1
-platforms <Supported build targets>`) will be backed out.
+platforms <Supported Build Hosts and Targets>`) will be backed out.
 Regressions on tier-2 platforms and in performance are not cause for a
 direct backout, but you will be expected to help fix them quickly.
 *Note: Performance regressions require future data points to ensure a
 sustained regression and can take anywhere from 3 hours to 30 hours
 depending on the volume of the tree and build frequency. All regression
 alerts do get briefly investigated and bugs are filed if necessary.*
--- a/docs/contributing/contribution_quickref.rst
+++ b/docs/contributing/contribution_quickref.rst
@@ -192,17 +192,17 @@ more information.
 .. note::
     This requires `level 1 commit access <https://www.mozilla.org/about/governance/policies/commit/access-policy/>`__.
     You can ask your reviewer to submit the patch for you if you don't have that
     level of access.
-:ref:`More information <Try Server>`
+:ref:`More information <Pushing to Try>`
 To submit a patch
 To submit a patch for review, we use a tool called `moz-phab <https://pypi.org/project/MozPhab/>`__.
 To install it, run:
--- a/docs/contributing/debugging/debugging_a_minidump.rst
+++ b/docs/contributing/debugging/debugging_a_minidump.rst
@@ -110,17 +110,17 @@ source <https://chromium.googlesource.co
 contains a tool called
 `minidump-2-core <https://chromium.googlesource.com/breakpad/breakpad/+/master/src/tools/linux/md2core/>`__,
 which converts Linux minidumps into core files. If you check out and
 build Breakpad, the binary will be at
 ``src/tools/linux/md2core/minidump-2-core``. Running the binary with the
 path to a Linux minidump will generate a core file on stdout which can
 then be loaded in gdb as usual. You will need to manually download the
 matching Firefox binaries, but then you can use the :ref:`GDB Python
-script <Downloading_symbols_on_Linux_Mac_OS_X>` to download symbols.
+script <Downloading symbols on Linux / Mac OS X>` to download symbols.
 The ``minidump-2-core`` source does not currently handle processing
 minidumps from a different CPU architecture than the system it was
 built for. If you want to use it on an ARM dump, for example, you may
 need to build the tool for ARM and run it under QEMU.
 Using other tools to inspect minidump data
--- a/docs/contributing/debugging/debugging_on_macos.rst
+++ b/docs/contributing/debugging/debugging_on_macos.rst
@@ -48,17 +48,17 @@ entitlement disallowed. **Rather than di
 implications), it is recommended to debug with try builds or local
 builds. The differences are explained below.**
 try Server Builds
 In most cases, developers needing to debug a build as close as possible
 to the production environment should use a :ref:`try
-build <Try Server>`. These
+build <Pushing to Try>`. These
 builds enable Hardened Runtime and only differ from production builds in
 that they are not Notarized, which should not otherwise affect
 functionality (other than the ability to easily launch the browser on
 macOS 10.15+ -- see quarantine note below). At this time, developers can
 obtain a Hardened Runtime build with the
 ``com.apple.security.get-task-allow`` entitlement allowed by submitting
 a try build and downloading the dmg generated by the "Rpk" shippable
 build job. A debugger can be attached to Firefox processes of these
--- a/docs/contributing/debugging/stacktrace_report.rst
+++ b/docs/contributing/debugging/stacktrace_report.rst
@@ -89,28 +89,29 @@ this location depending on your operatin
 * Windows : ``%APPDATA%\Mozilla\Firefox\Crash Reports\submitted\``
 * macOS : ``~/Library/Application Support/Firefox/Crash Reports/submitted/``
 * Linux : ``~/.mozilla/firefox/Crash Reports/submitted/``
 Each file in this folder contains one submitted crash report ID. You can
 check the modified or creation time for each file to discern which crash
 reports are relevant to your bug report.
+.. _Alternative ways to get a stacktrace:
 Alternative ways to get a stacktrace
 If the Mozilla crash reporter doesn't come up or isn't available, you
 will need to obtain a stacktrace manually:
-See the article :ref:`Create a stacktrace with Windbg` for information
+See the article :ref:`Create a stacktrace with Windbg <How to get a stacktrace with WinDbg>` for information
 on how to do this.
 For a full process dump, see :ref:`How to get a process dump with Windows
 Task Manager`.
--- a/docs/contributing/editor.rst
+++ b/docs/contributing/editor.rst
@@ -18,16 +18,18 @@ Visual Studio Code
    :maxdepth: 1
 Go to :doc:`Visual Studio Code <vscode>` dedicated page.
+.. _VIM:
 There's C++ and Rust auto-completion support for VIM via
 `YouCompleteMe <https://github.com/ycm-core/YouCompleteMe/>`__. As long as that
@@ -104,17 +106,17 @@ packages.
   * `Emacs as C++ IDE <https://syamajala.github.io/c-ide.html>`__
 rtags (LLVM/Clang-based Code Indexing)
 Instructions for the installation of rtags are available at the
 `rtags github repo <https://github.com/Andersbakken/rtags>`__.
-rtags requires a :ref:`compilation database <CompileDB back-end / compileflags>`.
+rtags requires a :ref:`compilation database <CompileDB back-end-compileflags>`.
 In order for rtags to index correctly, included files need to be copied and
 unified compilation files need to be created. Either run a full build of the
 tree, or if you only want indexes to be generated for the moment, run the
 following commands (assuming you're in the gecko repo root):
 .. code::
     cd gecko_build_directory
@@ -143,17 +145,17 @@ is running in the background, the databa
 irony (LLVM/Clang-based Code Completion)
 Instructions on the installation of irony-mode are available at the
 `irony-mode github repo <https://github.com/Sarcasm/irony-mode>`__.
-irony-mode requires a :ref:`compilation database <CompileDB back-end / compileflags>`.
+irony-mode requires a :ref:`compilation database <CompileDB back-end-compileflags>`.
 Note that irony-mode, by default, uses elisp to parse the
 :code:`compile_commands.json` file. As gecko is a very large codebase, this
 file can easily be multiple megabytes, which can make irony-mode take multiple
 seconds to load on a gecko file.
 It is recommended to use `this fork of irony-mode <https://github.com/Hylen/irony-mode/tree/compilation-database-guessing-4-pull-request>`__,
 which requires the boost System and Filesystem libraries.
@@ -214,16 +216,18 @@ Visual Studio
 You can run a Visual Studio project by running:
 .. code::
     ./mach ide visualstudio
+.. _CompileDB back-end-compileflags:
 CompileDB back-end / compileflags
 You can generate a :code:`compile_commands.json` in your object directory by running:
 .. code::
--- a/docs/setup/contributing_code.rst
+++ b/docs/setup/contributing_code.rst
@@ -67,17 +67,17 @@ Fixing your bug
 We leave this in your hands. Here are some further resources to help:
 -  Check out
    `https://developer.mozilla.org/docs/Developer_Guide <https://developer.mozilla.org/docs/Developer_Guide>`_
    and its parent document,
--  Our :ref:`reviewer checklist <Reviewer_Checklist>` is very
+-  Our :ref:`reviewer checklist <Reviewer Checklist>` is very
    useful, if you have a patch near completion, and seek a favorable
 -  Utilize our build tool :ref:`mach`, its linting,
    static analysis, and other code checking features
 Getting your code reviewed
--- a/dom/docs/navigation/nav_replace.rst
+++ b/dom/docs/navigation/nav_replace.rst
@@ -1,16 +1,16 @@
 Objects Replaced by Navigations
 There are 3 major types of navigations, each of which can cause different
 objects to be replaced. The general rules look something like this:
 .. csv-table:: objects replaced or preserved across navigations
-   :header: "Class/Id", ":ref:`in-process navigations`", ":ref:`cross-process navigations`", ":ref:`cross-group navigations`"
+   :header: "Class/Id", ":ref:`in-process navigations <in-process navigations>`", ":ref:`cross-process navigations <cross-process navigations>`", ":ref:`cross-group navigations <cross-group navigations>`"
    "BrowserId [#bid]_", |preserve|, |preserve|, |preserve|
    "BrowsingContextWebProgress", |preserve|, |preserve|, |preserve|
    "BrowsingContextGroup", |preserve|, |preserve|, |replace|
    "BrowsingContext", |preserve|, |preserve|, |replace|
    "nsFrameLoader", |preserve|, |replace|, |replace|
    "RemoteBrowser", |preserve|, |replace|, |replace|
    "Browser{Parent,Child}", |preserve|, |replace|, |replace|
@@ -38,39 +38,45 @@ objects to be replaced. The general rule
    document, the same ``nsGlobalWindowInner`` may be used. This initial
    ``about:blank`` document is the one created when synchronously accessing a
    newly-created pop-up window from ``window.open``, or a newly-created
    document in an ``<iframe>``.
 Types of Navigations
+.. _in-process navigations:
 in-process navigations
 An in-process navigation is the traditional type of navigation, and the most
 common type of navigation when :ref:`Fission` is not enabled.
 These navigations are used when no process switching or BrowsingContext
 replacement is required, which includes most navigations with Fission
 disabled, and most same site-origin navigations when Fission is enabled.
+.. _cross-process navigations:
 cross-process navigations
 A cross-process navigation is used when a navigation requires a process
 switch to occur, and no BrowsingContext replacement is required. This is a
 common type of load when :ref:`Fission` is enabled, though it is also used
 for navigations to and from special URLs like ``file://`` URIs when
 Fission is disabled.
 These process changes are triggered by ``DocumentLoadListener`` when it
 determines that a process switch is required. See that class's documentation
 for more details.
+.. _cross-group navigations:
 cross-group navigations
 A cross-group navigation is used when the navigation's `response requires
 a browsing context group switch
 These types of switches may or may not cause the process to change, but will
--- a/netwerk/docs/cache2/doc.rst
+++ b/netwerk/docs/cache2/doc.rst
@@ -14,30 +14,32 @@ included.  This document only contains w
 be clear directly from the `IDL files <https://searchfox.org/mozilla-central/search?q=&path=cache2%2FnsICache&case=false&regexp=false>`_ comments.
 -  The cache API is **completely thread-safe** and **non-blocking**.
 -  There is **no IPC support**.  It's only accessible on the default
    chrome process.
 -  When there is no profile, the new HTTP cache works, but everything is
    stored only in memory, not obeying any particular limits.
+.. _nsICacheStorageService:
 -  The HTTP cache entry-point. Accessible as a service only, fully
    thread-safe, scriptable.
 -  `nsICacheStorageService.idl (searchfox) <https://searchfox.org/mozilla-central/source/netwerk/cache2/nsICacheStorageService.idl>`_
 -   \ ``"@mozilla.org/netwerk/cache-storage-service;1"``
--  Provides methods accessing "storage" objects – see `nsICacheStorage` below – giving further access to cache entries – see :ref:`nsICacheEntry` more below – per specific URL.
+-  Provides methods accessing "storage" objects – see `nsICacheStorage` below – giving further access to cache entries – see :ref:`nsICacheEntry <nsICacheEntry>` more below – per specific URL.
 -  Currently we have 3 types of storage; all the access methods return
-   an :ref:`nsICacheStorage` object:
+   an :ref:`nsICacheStorage <nsICacheStorage>` object:
    -  **memory-only** (``memoryCacheStorage``): stores data only in a
       memory cache, data in this storage are never put to disk
    -  **disk** (``diskCacheStorage``): stores data on disk, but for
       existing entries also looks into the memory-only storage; when
       instructed via a special argument also primarily looks into
       application caches
@@ -55,24 +57,26 @@ nsICacheStorageService
    cache content or purge any intermediate memory structures:
    -  ``clear``– after it returns, all entries are no longer accessible
       through the cache APIs; the method is fast to execute and
       non-blocking in any way; the actual erase happens in background
    -  ``purgeFromMemory``– removes (schedules to remove) any
       intermediate cache data held in memory for faster access (more
-      about the :ref:`Intermediate_Memory_Caching` below)
+      about the :ref:`Intermediate_Memory_Caching <Intermediate_Memory_Caching>` below)
+.. _nsILoadContextInfo:
 -  Distinguishes the scope of the storage being opened.
--  Mandatory argument to ``*Storage`` methods of :ref:`nsICacheStorageService`.
+-  Mandatory argument to ``*Storage`` methods of :ref:`nsICacheStorageService <nsICacheStorageService>`.
 -  `nsILoadContextInfo.idl (searchfox) <https://searchfox.org/mozilla-central/source/netwerk/base/nsILoadContextInfo.idl>`_
 -  It is a helper interface wrapping the following four arguments into a single one:
    -  **private-browsing** boolean flag
    -  **anonymous load** boolean flag
@@ -91,24 +95,25 @@ nsILoadContextInfo
    ``nsILoadContextInfo``\ arguments are identical, containing the same
    cache entries.
 -  Two storage objects created with ``nsILoadContextInfo``\ arguments
    that differ in any way are strictly and completely distinct, and
    cache entries in them do not overlap even when they have the same
    URIs.
+.. _nsICacheStorage:
 -  `nsICacheStorage.idl (searchfox) <https://searchfox.org/mozilla-central/source/netwerk/cache2/nsICacheStorage.idl>`_
 -  Obtained from call to one of the ``*Storage`` methods on
-   :ref:`nsICacheStorageService`.
+   :ref:`nsICacheStorageService <nsICacheStorageService>`.
 -  Represents a distinct storage area (or scope) to put and get cache
    entries mapped by URLs into and from it.
 -  *Similarity with the old cache*\ : this interface may, with some
    limitations, be considered a mirror of ``nsICacheSession``, but it is
    less generic and less open to abuse.
@@ -130,16 +135,18 @@ nsICacheEntryOpenCallback
       When the
       cache entry object is already present in memory or open as
      "force-new" (a.k.a. "open-truncate"), this callback is invoked
      sooner than the ``asyncOpenURI``\ method returns (i.e.
       immediately); there is currently no way to opt out of this feature
       (see `bug
       938186 <https://bugzilla.mozilla.org/show_bug.cgi?id=938186>`__).
+.. _nsICacheEntry:
 -  `nsICacheEntry.idl (searchfox) <https://searchfox.org/mozilla-central/source/netwerk/cache2/nsICacheEntry.idl>`_
 -  Obtained asynchronously or pseudo-asynchronously by a call to
@@ -181,16 +188,18 @@ Lifetime of a new entry
 -  When the *writer* still keeps the cache entry and keeps open the
    output stream on it, other consumers may open input streams
    on the entry. The data will be available as the *writer* writes data
    to the cache entry's output stream immediately, even before the
    output stream is closed. This is called :ref:`concurrent
    read/write <Concurrent_read_and_write>`.
+.. _Concurrent_read_and_write:
 Concurrent read and write
 The cache supports reading a cache entry's data while it is still being
 written by the first consumer - the *writer*.
 This can only be engaged for resumable responses that (`bug
 960902 <https://bugzilla.mozilla.org/show_bug.cgi?id=960902#c17>`__)
 don't need revalidation. The reason is that when the writer is interrupted
@@ -264,28 +273,28 @@ Lifetime of an existing entry that doesn
 Adding a new storage
 Should there be a need to add a new distinct storage for which the
 current scoping model would not be sufficient - use one of the two
 following ways:
 #. *[preferred]* Add a new ``<Your>Storage`` method on
-   :ref:`nsICacheStorageService` and if needed give it any arguments to
+   :ref:`nsICacheStorageService <nsICacheStorageService>` and if needed give it any arguments to
    specify the storage scope even more.  The implementation should only need
    to enhance the context key generation and parsing code and enhance
-   current - or create new when needed - :ref:`nsICacheStorage`
+   current - or create new when needed - :ref:`nsICacheStorage <nsICacheStorage>`
    implementations to carry any additional information down to the cache
 #. *[*\ **not**\ *preferred]* Add a new argument to
-   :ref:`nsILoadContextInfo`; **be careful
+   :ref:`nsILoadContextInfo <nsILoadContextInfo>`; **be careful
    here**, since some arguments on the context may not be known during
    the load time, which may lead to inter-context data leaking or
    implementation problems. Adding more distinction to
-   :ref:`nsILoadContextInfo` also affects all existing storages which may
+   :ref:`nsILoadContextInfo <nsILoadContextInfo>` also affects all existing storages which may
    not always be desirable.
 See context keying details for more information.
 The cache API is fully thread-safe.
@@ -320,17 +329,17 @@ IO thread, all operations pending on the
 the OPEN_PRIORITY level. The eviction preparation operation - i.e.
 clearing of the internal IO state - is then put to the end of the
 OPEN_PRIORITY level.  All this happens atomically.
 Storage and entries scopes
 A *scope key* string used to map the storage scope is based on the
-arguments of :ref:`nsILoadContextInfo`. The form is following (currently
+arguments of :ref:`nsILoadContextInfo <nsILoadContextInfo>`. The form is following (currently
 pending in `bug
 968593 <https://bugzilla.mozilla.org/show_bug.cgi?id=968593>`__):
 .. code:: bz_comment_text
 -  Regular expression: ``(.([-,]+)?,)*``
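As an illustration of how such a *scope key* might be composed from ``nsILoadContextInfo``-style attributes, here is a hedged sketch. The attribute letters and layout are assumptions (the real format was still pending in bug 968593); the point is that every distinguishing attribute is folded into one comma-separated key, so differing contexts can never share a storage scope:

```cpp
#include <string>

// Illustrative scope-key builder: each distinguishing attribute of the
// load context appends one comma-terminated token. Identical contexts
// produce identical keys; any difference yields a different key.
std::string MakeScopeKey(bool privateBrowsing, bool anonymous,
                         const std::string& originSuffix) {
  std::string key;
  if (privateBrowsing) key += "p,";   // private-browsing flag
  if (anonymous) key += "a,";         // anonymous-load flag
  if (!originSuffix.empty()) key += originSuffix + ",";
  return key;
}
```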
@@ -350,32 +359,32 @@ The cache entries are mapped by concanta
 passed to ``nsICacheStorage.asyncOpenURI``.  When an entry is being
 looked up, the global hashtable is first searched using the *scope
 key*, yielding an entries hashtable. That entries hashtable is then
 searched using the <enhance-id:><uri> string. The elements in this
 hashtable are ``CacheEntry`` objects; see below.
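The two-level lookup described above can be sketched as follows. The type and function names are illustrative, not the actual ``CacheStorageService`` internals, and the real code is guarded by a global lock that this sketch omits:

```cpp
#include <memory>
#include <string>
#include <unordered_map>

// Sketch of the two-level mapping: a global table keyed by the *scope
// key* holds one entries table per scope, itself keyed by the
// concatenation "<enhance-id:><uri>".
struct SketchCacheEntry {
  std::string uri;
};

using EntriesTable =
    std::unordered_map<std::string, std::shared_ptr<SketchCacheEntry>>;
using GlobalTable = std::unordered_map<std::string, EntriesTable>;

std::shared_ptr<SketchCacheEntry> Lookup(GlobalTable& global,
                                         const std::string& scopeKey,
                                         const std::string& enhanceId,
                                         const std::string& uri) {
  // First hop: find the per-scope entries table via the scope key.
  auto scopeIt = global.find(scopeKey);
  if (scopeIt == global.end()) return nullptr;
  // Second hop: find the entry by the "<enhance-id:><uri>" key.
  auto entryIt = scopeIt->second.find(enhanceId + ":" + uri);
  if (entryIt == scopeIt->second.end()) return nullptr;
  return entryIt->second;  // the tables keep a strong reference
}
```

Note how a lookup with a different scope key misses entirely even for the same URI, matching the strict separation between storages described earlier.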
 The hash tables keep a strong reference to ``CacheEntry`` objects. The
 only way to remove ``CacheEntry`` objects from memory is by exhausting a
-memory limit for :ref:`Intermediate_Memory_Caching`, what triggers a background
+memory limit for :ref:`Intermediate_Memory_Caching <Intermediate_Memory_Caching>`, what triggers a background
 process of purging expired and then least used entries from memory.
 Another way is to directly call the
 ``nsICacheStorageService.purge``\ method. That method is also called
 automatically on the ``"memory-pressure"`` indication.
 Access to the hashtables is protected by a global lock. We also - in a
 thread-safe manner - count the number of consumers keeping a reference
 on each entry. The open callback doesn't actually give the consumer
 the ``CacheEntry`` object directly, but a small wrapper class that
 manages the 'consumer reference counter' on its cache entry. Both of
 these mechanisms ensure thread-safe access and make it impossible to
 have more than a single instance of a ``CacheEntry`` for a single
 <scope+enhanceID+URL> key.
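The consumer-counting wrapper described above can be sketched roughly like this. It is illustrative only: the real wrapper is internal to the cache backend and thread-safe, which this sketch does not attempt to be. The idea is that consumers never hold the entry directly; a small handle bumps the per-entry consumer count on construction and drops it on destruction, so the service can tell when the last consumer goes away:

```cpp
#include <memory>

// Illustrative stand-in for a cache entry with a consumer counter.
struct SketchEntry {
  int consumerCount = 0;
};

// Handle given to consumers instead of the entry itself; its lifetime
// maintains the 'consumer reference counter' on the wrapped entry.
class SketchEntryHandle {
 public:
  explicit SketchEntryHandle(std::shared_ptr<SketchEntry> entry)
      : mEntry(std::move(entry)) {
    ++mEntry->consumerCount;
  }
  ~SketchEntryHandle() { --mEntry->consumerCount; }

  // Non-copyable so each live handle accounts for exactly one consumer.
  SketchEntryHandle(const SketchEntryHandle&) = delete;
  SketchEntryHandle& operator=(const SketchEntryHandle&) = delete;

  SketchEntry* get() const { return mEntry.get(); }

 private:
  std::shared_ptr<SketchEntry> mEntry;
};
```

When the count returns to zero, the service knows the entry is a candidate for purging from the memory pool, as described in the lifetime sections.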
-``CacheStorage``, implementing the :ref:`nsICacheStorage` interface, is
+``CacheStorage``, implementing the :ref:`nsICacheStorage <nsICacheStorage>` interface, is
 forwarding all calls to internal methods of ``CacheStorageService``
 passing itself as an argument.  ``CacheStorageService`` then generates
 the *scope key* using the ``nsILoadContextInfo`` of the storage. Note:
 CacheStorage keeps a thread-safe copy of ``nsILoadContextInfo`` passed
 to a ``*Storage`` method on ``nsICacheStorageService``.
 Invoking open callbacks
@@ -514,16 +523,18 @@ opening flags.  ``nsICacheStorage.asyncO
 **All consumers release the reference:**
 -  the entry may now be purged (removed) from memory when found expired
    or least used on overrun of the :ref:`memory
    pool <Intermediate_Memory_Caching>` limit
 -  when this is a disk cache entry, its cached data chunks are released
    from memory and only meta data is kept
+.. _Intermediate_Memory_Caching:
 Intermediate memory caching
 Intermediate memory caching of frequently used metadata (a.k.a. disk cache memory pool).
 For the disk cache entries we keep some of the most recent and most used
 cache entries' meta data in memory for immediate zero-thread-loop
 opening. The default size of this meta data memory pool is only 250kB
--- a/python/mach/docs/usage.rst
+++ b/python/mach/docs/usage.rst
@@ -141,16 +141,18 @@ The settings file follows the ``ini`` fo
     eslint = lint -l eslint
     telemetry = true
     default = fuzzy
+.. _Adding_mach_to_your_shell:
 Adding ``mach`` to your ``PATH``
 If you don't like having to type ``./mach``, you can add your source directory
 to your ``PATH``. DO NOT copy the script to a directory already in your
--- a/services/settings/docs/index.rst
+++ b/services/settings/docs/index.rst
@@ -160,17 +160,17 @@ The provided helper will:
     Pass the ``useCache`` option to use an IndexedDB-based cache, and unlock the following features:
     The ``fallbackToCache`` option allows callers to fall back to the cached file and record, if the requested record's attachment fails to download.
     This enables callers to always have a valid pair of attachment and record,
     provided that the attachment has been retrieved at least once.
     The ``fallbackToDump`` option activates a fallback to a dump that has been
     packaged with the client, when other ways to load the attachment have failed.
-    See :ref:`_services/packaging-attachments` for more information.
+    See :ref:`services/packaging-attachments <services/packaging-attachments>` for more information.
 .. note::
     A ``downloadAsBytes()`` method returning an ``ArrayBuffer`` is also available, if writing the attachment into the user profile is not necessary.
 .. _services/initial-data:
--- a/taskcluster/docs/task-graph.rst
+++ b/taskcluster/docs/task-graph.rst
@@ -24,14 +24,14 @@ work?
 -  The outputs from each task, log files, Firefox installers, and so on,
    appear attached to each task when it completes. These are viewable in
    the `Task
    Inspector <https://tools.taskcluster.net/task-inspector/>`__.
 All of this is controlled from within the Gecko source code, through a
 process called *task-graph generation*.  This means it's easy to add a
 new job or tweak the parameters of a job in a :ref:`try
-push <Try Server>`, eventually landing
+push <Pushing to Try>`, eventually landing
 that change on an integration branch.
 The details of task-graph generation are documented :ref:`in the source
 code itself <TaskCluster Task-Graph Generation>`,
 including some :ref:`quick recipes for common changes <How Tos>`.
--- a/taskcluster/docs/try.rst
+++ b/taskcluster/docs/try.rst
@@ -60,17 +60,17 @@ at the root of the source dir. The JSON 
 Very simply, this will run any task labels that get passed in, as well as their
 dependencies. While it is possible to manually commit this file and push to
-try, it is mainly meant to be a generation target for various :ref:`try server <Try Server>`
+try, it is mainly meant to be a generation target for various :ref:`try server <Pushing to Try>`
 choosers.  For example:
 .. parsed-literal::
     $ ./mach try fuzzy
 A list of all possible task labels can be obtained by running:
--- a/toolkit/crashreporter/docs/Using_the_Mozilla_symbol_server.rst
+++ b/toolkit/crashreporter/docs/Using_the_Mozilla_symbol_server.rst
@@ -100,16 +100,18 @@ be similar to:
     SYMCHK: msvcp140.dll         FAILED  - msvcp140.amd64.pdb mismatched or not found
     SYMCHK: ucrtbase.dll         FAILED  - ucrtbase.pdb mismatched or not found
     SYMCHK: vcruntime140.dll     FAILED  - vcruntime140.amd64.pdb mismatched or not found
     SYMCHK: helper.exe           FAILED  - Built without debugging information.
     SYMCHK: FAILED files = 27
     SYMCHK: PASSED + IGNORED files = 60
+.. _Downloading symbols on Linux / Mac OS X:
 Downloading symbols on Linux / Mac OS X
 If you are on Linux and running GDB 7.9 or newer, you can use `this GDB
 Python script <https://gist.github.com/luser/193572147c401c8a965c>`__ to
 automatically fetch symbols. You will need to source this script before
 loading symbols (the part where it spends a few seconds loading each .so
 when you attach gdb). If you want to reload symbols, you can try: