Bug 1492128: Vendor taskcluster==4.0.1; r=firefox-build-system-reviewers,gps
author: Tom Prince <mozilla@hocat.ca>
Tue, 30 Oct 2018 17:50:49 +0000
changeset 499984 b314c0a4d03f7870ba7207df4edbff6629d2e406
parent 499983 1319783f48092e0dbc47cefe8424a98ce6a8cd3f
child 499985 2d15a1d91cb2ccd9cccba3ccbb5f4bfe13df5cd1
push id: 10290
push user: ffxbld-merge
push date: Mon, 03 Dec 2018 16:23:23 +0000
treeherder: mozilla-beta@700bed2445e6
reviewers: firefox-build-system-reviewers, gps
bugs: 1492128
milestone: 65.0a1
Bug 1492128: Vendor taskcluster==4.0.1; r=firefox-build-system-reviewers,gps

We can't use taskcluster 5.0.0 yet, because taskcluster-proxy does not support new-style URLs.

Differential Revision: https://phabricator.services.mozilla.com/D10146
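(Editor's note: the "new-style URLs" referenced above are the single-root-URL endpoints generated by the taskcluster-urls helper, which is also vendored in this change. A rough sketch of the difference, assuming the taskcluster_urls Python API; the deployment root and task ID below are made up:)

    import taskcluster_urls

    # Old-style: each service has its own hostname under taskcluster.net,
    # which taskcluster-urls special-cases for the legacy root URL:
    #   https://queue.taskcluster.net/v1/task/abc123
    old_style = taskcluster_urls.api(
        'https://taskcluster.net', 'queue', 'v1', 'task/abc123')

    # New-style: one deployment root, with services routed under /api/:
    #   https://tc.example.com/api/queue/v1/task/abc123
    new_style = taskcluster_urls.api(
        'https://tc.example.com', 'queue', 'v1', 'task/abc123')

taskcluster-proxy only understands the old per-service hosts, which is why this change pins the pre-5.0 client.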
build/virtualenv_packages.txt
third_party/python/mohawk/PKG-INFO
third_party/python/mohawk/README.rst
third_party/python/mohawk/mohawk/__init__.py
third_party/python/mohawk/mohawk/base.py
third_party/python/mohawk/mohawk/bewit.py
third_party/python/mohawk/mohawk/exc.py
third_party/python/mohawk/mohawk/receiver.py
third_party/python/mohawk/mohawk/sender.py
third_party/python/mohawk/mohawk/tests.py
third_party/python/mohawk/mohawk/util.py
third_party/python/mohawk/setup.cfg
third_party/python/mohawk/setup.py
third_party/python/moz.build
third_party/python/requirements.in
third_party/python/requirements.txt
third_party/python/slugid/.gitignore
third_party/python/slugid/.travis.yml
third_party/python/slugid/LICENSE
third_party/python/slugid/requirements.txt
third_party/python/slugid/test.py
third_party/python/slugid/tox.ini
third_party/python/taskcluster/PKG-INFO
third_party/python/taskcluster/README.md
third_party/python/taskcluster/setup.cfg
third_party/python/taskcluster/setup.py
third_party/python/taskcluster/taskcluster/__init__.py
third_party/python/taskcluster/taskcluster/_client_importer.py
third_party/python/taskcluster/taskcluster/aio/__init__.py
third_party/python/taskcluster/taskcluster/aio/_client_importer.py
third_party/python/taskcluster/taskcluster/aio/asyncclient.py
third_party/python/taskcluster/taskcluster/aio/asyncutils.py
third_party/python/taskcluster/taskcluster/aio/auth.py
third_party/python/taskcluster/taskcluster/aio/authevents.py
third_party/python/taskcluster/taskcluster/aio/awsprovisioner.py
third_party/python/taskcluster/taskcluster/aio/awsprovisionerevents.py
third_party/python/taskcluster/taskcluster/aio/ec2manager.py
third_party/python/taskcluster/taskcluster/aio/github.py
third_party/python/taskcluster/taskcluster/aio/githubevents.py
third_party/python/taskcluster/taskcluster/aio/hooks.py
third_party/python/taskcluster/taskcluster/aio/index.py
third_party/python/taskcluster/taskcluster/aio/login.py
third_party/python/taskcluster/taskcluster/aio/notify.py
third_party/python/taskcluster/taskcluster/aio/pulse.py
third_party/python/taskcluster/taskcluster/aio/purgecache.py
third_party/python/taskcluster/taskcluster/aio/purgecacheevents.py
third_party/python/taskcluster/taskcluster/aio/queue.py
third_party/python/taskcluster/taskcluster/aio/queueevents.py
third_party/python/taskcluster/taskcluster/aio/secrets.py
third_party/python/taskcluster/taskcluster/aio/treeherderevents.py
third_party/python/taskcluster/taskcluster/auth.py
third_party/python/taskcluster/taskcluster/authevents.py
third_party/python/taskcluster/taskcluster/awsprovisioner.py
third_party/python/taskcluster/taskcluster/awsprovisionerevents.py
third_party/python/taskcluster/taskcluster/client.py
third_party/python/taskcluster/taskcluster/ec2manager.py
third_party/python/taskcluster/taskcluster/exceptions.py
third_party/python/taskcluster/taskcluster/github.py
third_party/python/taskcluster/taskcluster/githubevents.py
third_party/python/taskcluster/taskcluster/hooks.py
third_party/python/taskcluster/taskcluster/index.py
third_party/python/taskcluster/taskcluster/login.py
third_party/python/taskcluster/taskcluster/notify.py
third_party/python/taskcluster/taskcluster/pulse.py
third_party/python/taskcluster/taskcluster/purgecache.py
third_party/python/taskcluster/taskcluster/purgecacheevents.py
third_party/python/taskcluster/taskcluster/queue.py
third_party/python/taskcluster/taskcluster/queueevents.py
third_party/python/taskcluster/taskcluster/secrets.py
third_party/python/taskcluster/taskcluster/treeherderevents.py
third_party/python/taskcluster/taskcluster/utils.py
third_party/python/taskcluster/test/test_async.py
third_party/python/taskcluster/test/test_client.py
third_party/python/taskcluster/test/test_utils.py
--- a/build/virtualenv_packages.txt
+++ b/build/virtualenv_packages.txt
@@ -13,16 +13,17 @@ mozilla.pth:third_party/python/Click
 mozilla.pth:third_party/python/compare-locales
 mozilla.pth:third_party/python/configobj
 mozilla.pth:third_party/python/cram
 mozilla.pth:third_party/python/dlmanager
 mozilla.pth:third_party/python/enum34
 mozilla.pth:third_party/python/fluent
 mozilla.pth:third_party/python/funcsigs
 mozilla.pth:third_party/python/futures
+mozilla.pth:third_party/python/mohawk
 mozilla.pth:third_party/python/more-itertools
 mozilla.pth:third_party/python/mozilla-version
 mozilla.pth:third_party/python/pathlib2
 mozilla.pth:third_party/python/gyp/pylib
 mozilla.pth:third_party/python/python-hglib
 mozilla.pth:third_party/python/pluggy
 mozilla.pth:third_party/python/jsmin
 !windows:optional:setup.py:third_party/python/psutil:build_ext:--inplace
@@ -31,16 +32,18 @@ windows:mozilla.pth:third_party/python/p
 mozilla.pth:third_party/python/pylru
 mozilla.pth:third_party/python/which
 mozilla.pth:third_party/python/pystache
 mozilla.pth:third_party/python/pyyaml/lib
 mozilla.pth:third_party/python/requests
 mozilla.pth:third_party/python/requests-unixsocket
 mozilla.pth:third_party/python/scandir
 mozilla.pth:third_party/python/slugid
+mozilla.pth:third_party/python/taskcluster
+mozilla.pth:third_party/python/taskcluster-urls
 mozilla.pth:third_party/python/py
 mozilla.pth:third_party/python/pytest/src
 mozilla.pth:third_party/python/pytoml
 mozilla.pth:third_party/python/redo
 mozilla.pth:third_party/python/six
 mozilla.pth:third_party/python/voluptuous
 mozilla.pth:third_party/python/json-e
 mozilla.pth:build
new file mode 100644
--- /dev/null
+++ b/third_party/python/mohawk/PKG-INFO
@@ -0,0 +1,19 @@
+Metadata-Version: 1.1
+Name: mohawk
+Version: 0.3.4
+Summary: Library for Hawk HTTP authorization
+Home-page: https://github.com/kumar303/mohawk
+Author: Kumar McMillan, Austin King
+Author-email: kumar.mcmillan@gmail.com
+License: MPL 2.0 (Mozilla Public License)
+Description: UNKNOWN
+Platform: UNKNOWN
+Classifier: Intended Audience :: Developers
+Classifier: Natural Language :: English
+Classifier: Operating System :: OS Independent
+Classifier: Programming Language :: Python :: 2
+Classifier: Programming Language :: Python :: 3
+Classifier: Programming Language :: Python :: 2.6
+Classifier: Programming Language :: Python :: 2.7
+Classifier: Programming Language :: Python :: 3.3
+Classifier: Topic :: Internet :: WWW/HTTP
new file mode 100644
--- /dev/null
+++ b/third_party/python/mohawk/README.rst
@@ -0,0 +1,25 @@
+======
+Mohawk
+======
+.. image:: https://img.shields.io/pypi/v/mohawk.svg
+    :target: https://pypi.python.org/pypi/mohawk
+    :alt: Latest PyPI release
+
+.. image:: https://img.shields.io/pypi/dm/mohawk.svg
+    :target: https://pypi.python.org/pypi/mohawk
+    :alt: PyPI monthly download stats
+
+.. image:: https://travis-ci.org/kumar303/mohawk.svg?branch=master
+    :target: https://travis-ci.org/kumar303/mohawk
+    :alt: Travis master branch status
+
+.. image:: https://readthedocs.org/projects/mohawk/badge/?version=latest
+    :target: https://mohawk.readthedocs.io/en/latest/?badge=latest
+    :alt: Documentation status
+
+Mohawk is an alternate Python implementation of the
+`Hawk HTTP authorization scheme`_.
+
+Full documentation: https://mohawk.readthedocs.io/
+
+.. _`Hawk HTTP authorization scheme`: https://github.com/hueniverse/hawk
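(Editor's note: for orientation, a minimal sketch of the library's request-signing entry point, using the Sender API defined in mohawk/sender.py below; the credentials and URL are made up:)

    from mohawk import Sender

    credentials = {'id': 'my-id', 'key': 'some secret', 'algorithm': 'sha256'}
    sender = Sender(credentials,
                    'https://example.com/resource/1?b=1',
                    'POST',
                    content='one=1',
                    content_type='application/x-www-form-urlencoded')
    # Send this value as the Authorization header of the request:
    print(sender.request_header)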
new file mode 100644
--- /dev/null
+++ b/third_party/python/mohawk/mohawk/__init__.py
@@ -0,0 +1,2 @@
+from .sender import *
+from .receiver import *
new file mode 100644
--- /dev/null
+++ b/third_party/python/mohawk/mohawk/base.py
@@ -0,0 +1,230 @@
+import logging
+import math
+import pprint
+
+import six
+from six.moves.urllib.parse import urlparse
+
+from .exc import (AlreadyProcessed,
+                  MacMismatch,
+                  MisComputedContentHash,
+                  TokenExpired)
+from .util import (calculate_mac,
+                   calculate_payload_hash,
+                   calculate_ts_mac,
+                   prepare_header_val,
+                   random_string,
+                   strings_match,
+                   utc_now)
+
+default_ts_skew_in_seconds = 60
+log = logging.getLogger(__name__)
+
+
+class HawkAuthority:
+
+    def _authorize(self, mac_type, parsed_header, resource,
+                   their_timestamp=None,
+                   timestamp_skew_in_seconds=default_ts_skew_in_seconds,
+                   localtime_offset_in_seconds=0,
+                   accept_untrusted_content=False):
+
+        now = utc_now(offset_in_seconds=localtime_offset_in_seconds)
+
+        their_hash = parsed_header.get('hash', '')
+        their_mac = parsed_header.get('mac', '')
+        mac = calculate_mac(mac_type, resource, their_hash)
+        if not strings_match(mac, their_mac):
+            raise MacMismatch('MACs do not match; ours: {ours}; '
+                              'theirs: {theirs}'
+                              .format(ours=mac, theirs=their_mac))
+
+        if 'hash' not in parsed_header and accept_untrusted_content:
+            # The request did not hash its content.
+            log.debug('NOT calculating/verifying payload hash '
+                      '(no hash in header)')
+            check_hash = False
+            content_hash = None
+        else:
+            check_hash = True
+            content_hash = resource.gen_content_hash()
+
+        if check_hash and not their_hash:
+            log.info('request unexpectedly did not hash its content')
+
+        if check_hash:
+            if not strings_match(content_hash, their_hash):
+                # The hash declared in the header is incorrect.
+                # Content could have been tampered with.
+                log.debug('mismatched content: {content}'
+                          .format(content=repr(resource.content)))
+                log.debug('mismatched content-type: {typ}'
+                          .format(typ=repr(resource.content_type)))
+                raise MisComputedContentHash(
+                    'Our hash {ours} ({algo}) did not '
+                    'match theirs {theirs}'
+                    .format(ours=content_hash,
+                            theirs=their_hash,
+                            algo=resource.credentials['algorithm']))
+
+        if resource.seen_nonce:
+            if resource.seen_nonce(resource.credentials['id'],
+                                   parsed_header['nonce'],
+                                   parsed_header['ts']):
+                raise AlreadyProcessed('Nonce {nonce} with timestamp {ts} '
+                                       'has already been processed for {id}'
+                                       .format(nonce=parsed_header['nonce'],
+                                               ts=parsed_header['ts'],
+                                               id=resource.credentials['id']))
+        else:
+            log.warn('seen_nonce was None; not checking nonce. '
+                     'You may be vulnerable to replay attacks')
+
+        their_ts = int(their_timestamp or parsed_header['ts'])
+
+        if math.fabs(their_ts - now) > timestamp_skew_in_seconds:
+            message = ('token with UTC timestamp {ts} has expired; '
+                       'it was compared to {now}'
+                       .format(ts=their_ts, now=now))
+            tsm = calculate_ts_mac(now, resource.credentials)
+            if isinstance(tsm, six.binary_type):
+                tsm = tsm.decode('ascii')
+            www_authenticate = ('Hawk ts="{ts}", tsm="{tsm}", error="{error}"'
+                                .format(ts=now, tsm=tsm, error=message))
+            raise TokenExpired(message,
+                               localtime_in_seconds=now,
+                               www_authenticate=www_authenticate)
+
+        log.debug('authorized OK')
+
+    def _make_header(self, resource, mac, additional_keys=None):
+        keys = additional_keys
+        if not keys:
+            # These are the default header keys that you'd send with a
+            # request header. Response headers are odd because they
+            # exclude a bunch of keys.
+            keys = ('id', 'ts', 'nonce', 'ext', 'app', 'dlg')
+
+        header = u'Hawk mac="{mac}"'.format(mac=prepare_header_val(mac))
+
+        if resource.content_hash:
+            header = u'{header}, hash="{hash}"'.format(
+                header=header,
+                hash=prepare_header_val(resource.content_hash))
+
+        if 'id' in keys:
+            header = u'{header}, id="{id}"'.format(
+                header=header,
+                id=prepare_header_val(resource.credentials['id']))
+
+        if 'ts' in keys:
+            header = u'{header}, ts="{ts}"'.format(
+                header=header, ts=prepare_header_val(resource.timestamp))
+
+        if 'nonce' in keys:
+            header = u'{header}, nonce="{nonce}"'.format(
+                header=header, nonce=prepare_header_val(resource.nonce))
+
+        # These are optional so we need to check if they have values first.
+
+        if 'ext' in keys and resource.ext:
+            header = u'{header}, ext="{ext}"'.format(
+                header=header, ext=prepare_header_val(resource.ext))
+
+        if 'app' in keys and resource.app:
+            header = u'{header}, app="{app}"'.format(
+                header=header, app=prepare_header_val(resource.app))
+
+        if 'dlg' in keys and resource.dlg:
+            header = u'{header}, dlg="{dlg}"'.format(
+                header=header, dlg=prepare_header_val(resource.dlg))
+
+        log.debug('Hawk header for URL={url} method={method}: {header}'
+                  .format(url=resource.url, method=resource.method,
+                          header=header))
+        return header
+
+
+class Resource:
+    """
+    Normalized request/response resource.
+    """
+
+    def __init__(self, **kw):
+        self.credentials = kw.pop('credentials')
+        self.method = kw.pop('method').upper()
+        self.content = kw.pop('content', None)
+        self.content_type = kw.pop('content_type', None)
+        self.always_hash_content = kw.pop('always_hash_content', True)
+        self.ext = kw.pop('ext', None)
+        self.app = kw.pop('app', None)
+        self.dlg = kw.pop('dlg', None)
+
+        self.timestamp = str(kw.pop('timestamp', None) or utc_now())
+
+        self.nonce = kw.pop('nonce', None)
+        if self.nonce is None:
+            self.nonce = random_string(6)
+
+        # This is a lookup function for checking nonces.
+        self.seen_nonce = kw.pop('seen_nonce', None)
+
+        self.url = kw.pop('url')
+        if not self.url:
+            raise ValueError('url was empty')
+        url_parts = self.parse_url(self.url)
+        log.debug('parsed URL parts: \n{parts}'
+                  .format(parts=pprint.pformat(url_parts)))
+
+        self.name = url_parts['resource'] or ''
+        self.host = url_parts['hostname'] or ''
+        self.port = str(url_parts['port'])
+
+        if kw.keys():
+            raise TypeError('Unknown keyword argument(s): {0}'
+                            .format(kw.keys()))
+
+    @property
+    def content_hash(self):
+        if not hasattr(self, '_content_hash'):
+            raise AttributeError(
+                'Cannot access content_hash because it has not been generated')
+        return self._content_hash
+
+    def gen_content_hash(self):
+        if self.content is None or self.content_type is None:
+            if self.always_hash_content:
+                # Be really strict about allowing developers to skip content
+                # hashing. If they get this far they may be unintentionally
+                # skipping it.
+                raise ValueError(
+                    'payload content and/or content_type cannot be '
+                    'empty without an explicit allowance')
+            log.debug('NOT hashing content')
+            self._content_hash = None
+        else:
+            self._content_hash = calculate_payload_hash(
+                self.content, self.credentials['algorithm'],
+                self.content_type)
+        return self.content_hash
+
+    def parse_url(self, url):
+        url_parts = urlparse(url)
+        url_dict = {
+            'scheme': url_parts.scheme,
+            'hostname': url_parts.hostname,
+            'port': url_parts.port,
+            'path': url_parts.path,
+            'resource': url_parts.path,
+            'query': url_parts.query,
+        }
+        if len(url_dict['query']) > 0:
+            url_dict['resource'] = '%s?%s' % (url_dict['resource'],
+                                              url_dict['query'])
+
+        if url_parts.port is None:
+            if url_parts.scheme == 'http':
+                url_dict['port'] = 80
+            elif url_parts.scheme == 'https':
+                url_dict['port'] = 443
+        return url_dict
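(Editor's note: Resource is the normalized view of a request that gets signed. A small sketch of what the constructor above derives, with made-up credentials:)

    from mohawk.base import Resource

    res = Resource(credentials={'id': 'i', 'key': 'k', 'algorithm': 'sha256'},
                   method='get',
                   url='https://example.com/foo?bar=1',
                   content='', content_type='')
    assert res.method == 'GET'       # method is upper-cased
    assert res.port == '443'         # port inferred from the https scheme
    assert res.name == '/foo?bar=1'  # 'resource' is path plus query string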
new file mode 100644
--- /dev/null
+++ b/third_party/python/mohawk/mohawk/bewit.py
@@ -0,0 +1,167 @@
+from base64 import urlsafe_b64encode, b64decode
+from collections import namedtuple
+import logging
+import re
+
+import six
+
+from .base import Resource
+from .util import (calculate_mac,
+                   utc_now)
+from .exc import (CredentialsLookupError,
+                  InvalidBewit,
+                  MacMismatch,
+                  TokenExpired)
+
+log = logging.getLogger(__name__)
+
+
+def get_bewit(resource):
+    """
+    Returns a bewit identifier for the resource as a string.
+
+    :param resource:
+        Resource to generate a bewit for
+    :type resource: `mohawk.base.Resource`
+    """
+    if resource.method != 'GET':
+        raise ValueError('bewits can only be generated for GET requests')
+    if resource.nonce != '':
+        raise ValueError('bewits must use an empty nonce')
+    mac = calculate_mac(
+        'bewit',
+        resource,
+        None,
+    )
+
+    if isinstance(mac, six.binary_type):
+        mac = mac.decode('ascii')
+
+    if resource.ext is None:
+        ext = ''
+    else:
+        ext = resource.ext
+
+    # Strip out \ from the client id
+    # since that can break parsing the response
+    # NB that the canonical implementation does not do this as of
+    # Oct 28, 2015, so this could break compat.
+    # We can leave \ in ext since validators can limit how many \ they split
+    # on (although again, the canonical implementation does not do this)
+    client_id = six.text_type(resource.credentials['id'])
+    if "\\" in client_id:
+        log.warn("Stripping backslash character(s) '\\' from client_id")
+        client_id = client_id.replace("\\", "")
+
+    # b64encode works only with bytes in python3, but all of our parameters are
+    # in unicode, so we need to encode them. The cleanest way to do this that
+    # works in both python 2 and 3 is to use string formatting to get a
+    # unicode string, and then explicitly encode it to bytes.
+    inner_bewit = u"{id}\\{exp}\\{mac}\\{ext}".format(
+        id=client_id,
+        exp=resource.timestamp,
+        mac=mac,
+        ext=ext,
+    )
+    inner_bewit_bytes = inner_bewit.encode('ascii')
+    bewit_bytes = urlsafe_b64encode(inner_bewit_bytes)
+    # Now decode the resulting bytes back to a unicode string
+    return bewit_bytes.decode('ascii')
+
+
+bewittuple = namedtuple('bewittuple', 'id expiration mac ext')
+
+
+def parse_bewit(bewit):
+    """
+    Returns a `bewittuple` representing the parts of an encoded bewit string.
+    This has the following named attributes:
+        (id, expiration, mac, ext)
+
+    :param bewit:
+        A base64 encoded bewit string
+    :type bewit: str
+    """
+    decoded_bewit = b64decode(bewit).decode('ascii')
+    bewit_parts = decoded_bewit.split("\\", 3)
+    if len(bewit_parts) != 4:
+        raise InvalidBewit('Expected 4 parts to bewit: %s' % decoded_bewit)
+    return bewittuple(*bewit_parts)
+
+
+def strip_bewit(url):
+    """
+    Strips the bewit parameter out of a url.
+
+    Returns (encoded_bewit, stripped_url)
+
+    Raises InvalidBewit if no bewit found.
+
+    :param url:
+        The url containing a bewit parameter
+    :type url: str
+    """
+    m = re.search('[?&]bewit=([^&]+)', url)
+    if not m:
+        raise InvalidBewit('no bewit data found')
+    bewit = m.group(1)
+    stripped_url = url[:m.start()] + url[m.end():]
+    return bewit, stripped_url
+
+
+def check_bewit(url, credential_lookup, now=None):
+    """
+    Validates the given bewit.
+
+    Returns True if the resource has a valid bewit parameter attached,
+    or raises a subclass of HawkFail otherwise.
+
+    :param credential_lookup:
+        Callable to look up the credentials dict by sender ID.
+        The credentials dict must have the keys:
+        ``id``, ``key``, and ``algorithm``.
+        See :ref:`receiving-request` for an example.
+    :type credential_lookup: callable
+
+    :param now=None:
+        Unix epoch time for the current time to determine if bewit has expired.
+        If None, then the current time as given by utc_now() is used.
+    :type now=None: integer
+    """
+    raw_bewit, stripped_url = strip_bewit(url)
+    bewit = parse_bewit(raw_bewit)
+    try:
+        credentials = credential_lookup(bewit.id)
+    except LookupError:
+        raise CredentialsLookupError('Could not find credentials for ID {0}'
+                                     .format(bewit.id))
+
+    res = Resource(url=stripped_url,
+                   method='GET',
+                   credentials=credentials,
+                   timestamp=bewit.expiration,
+                   nonce='',
+                   ext=bewit.ext,
+                   )
+    mac = calculate_mac('bewit', res, None)
+    mac = mac.decode('ascii')
+
+    if mac != bewit.mac:
+        raise MacMismatch('bewit with mac {bewit_mac} did not match expected mac {expected_mac}'
+                          .format(bewit_mac=bewit.mac,
+                                  expected_mac=mac))
+
+    # Check that the timestamp isn't expired
+    if now is None:
+        # TODO: Add offset/skew
+        now = utc_now()
+    if int(bewit.expiration) < now:
+        # TODO: Refactor TokenExpired to handle this better
+        raise TokenExpired('bewit with UTC timestamp {ts} has expired; '
+                           'it was compared to {now}'
+                           .format(ts=bewit.expiration, now=now),
+                           localtime_in_seconds=now,
+                           www_authenticate=''
+                           )
+
+    return True
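(Editor's note: the intended round trip for the helpers above, as a sketch; the URL, expiry timestamp, and credentials are made up:)

    from mohawk.base import Resource
    from mohawk.bewit import check_bewit, get_bewit

    credentials = {'id': 'my-id', 'key': 'some secret', 'algorithm': 'sha256'}
    # A bewit is only valid for GET with an empty nonce; 'timestamp' here is
    # the absolute expiry time rather than the time of signing.
    res = Resource(credentials=credentials, method='GET', nonce='',
                   url='https://example.com/file.txt', timestamp=1544000000,
                   always_hash_content=False)
    url = 'https://example.com/file.txt?bewit=' + get_bewit(res)
    # Raises a HawkFail subclass on a bad or expired bewit:
    assert check_bewit(url, credential_lookup=lambda id: credentials,
                       now=1543999999)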
new file mode 100644
--- /dev/null
+++ b/third_party/python/mohawk/mohawk/exc.py
@@ -0,0 +1,98 @@
+"""
+If you want to catch any exception that might be raised,
+catch :class:`mohawk.exc.HawkFail`.
+"""
+
+
+class HawkFail(Exception):
+    """
+    All Mohawk exceptions derive from this base.
+    """
+
+
+class MissingAuthorization(HawkFail):
+    """
+    No authorization header was sent by the client.
+    """
+
+
+class InvalidCredentials(HawkFail):
+    """
+    The specified Hawk credentials are invalid.
+
+    For example, the dict could be formatted incorrectly.
+    """
+
+
+class CredentialsLookupError(HawkFail):
+    """
+    A :class:`mohawk.Receiver` could not look up the
+    credentials for an incoming request.
+    """
+
+
+class BadHeaderValue(HawkFail):
+    """
+    There was an error with an attribute or value when parsing
+    or creating a Hawk header.
+    """
+
+
+class MacMismatch(HawkFail):
+    """
+    The locally calculated MAC did not match the MAC that was sent.
+    """
+
+
+class MisComputedContentHash(HawkFail):
+    """
+    The signature of the content did not match the actual content.
+    """
+
+
+class TokenExpired(HawkFail):
+    """
+    The timestamp on a message received has expired.
+
+    You may also receive this message if your server clock is out of sync.
+    Consider synchronizing it with something like `TLSdate`_.
+
+    If you are unable to synchronize your clock universally,
+    the `Hawk`_ spec mentions how you can `adjust`_
+    your sender's time to match that of the receiver in the case
+    of unexpected expiration.
+
+    The ``www_authenticate`` attribute of this exception is a header
+    that can be returned to the client. If the value is not None, it
+    will include a timestamp HMAC'd with the sender's credentials.
+    This will allow the client
+    to verify the value and safely apply an offset.
+
+    .. _`Hawk`: https://github.com/hueniverse/hawk
+    .. _`adjust`: https://github.com/hueniverse/hawk#future-time-manipulation
+    .. _`TLSdate`: http://linux-audit.com/tlsdate-the-secure-alternative-for-ntpd-ntpdate-and-rdate/
+    """
+    #: Current local time in seconds that was used to compare timestamps.
+    localtime_in_seconds = None
+    #: A header containing an HMAC'd server timestamp that the sender can verify.
+    www_authenticate = None
+
+    def __init__(self, *args, **kw):
+        self.localtime_in_seconds = kw.pop('localtime_in_seconds')
+        self.www_authenticate = kw.pop('www_authenticate')
+        super(TokenExpired, self).__init__(*args, **kw)
+
+
+class AlreadyProcessed(HawkFail):
+    """
+    The message has already been processed and cannot be re-processed.
+
+    See :ref:`nonce` for details.
+    """
+
+
+class InvalidBewit(HawkFail):
+    """
+    The bewit is invalid; e.g. it doesn't contain the right number of
+    parameters.
+    """
new file mode 100644
--- /dev/null
+++ b/third_party/python/mohawk/mohawk/receiver.py
@@ -0,0 +1,170 @@
+import logging
+import sys
+
+from .base import default_ts_skew_in_seconds, HawkAuthority, Resource
+from .exc import CredentialsLookupError, MissingAuthorization
+from .util import (calculate_mac,
+                   parse_authorization_header,
+                   validate_credentials)
+
+__all__ = ['Receiver']
+log = logging.getLogger(__name__)
+
+
+class Receiver(HawkAuthority):
+    """
+    A Hawk authority that will receive and respond to requests.
+
+    :param credentials_map:
+        Callable to look up the credentials dict by sender ID.
+        The credentials dict must have the keys:
+        ``id``, ``key``, and ``algorithm``.
+        See :ref:`receiving-request` for an example.
+    :type credentials_map: callable
+
+    :param request_header:
+        A `Hawk`_ ``Authorization`` header
+        such as one created by :class:`mohawk.Sender`.
+    :type request_header: str
+
+    :param url: Absolute URL of the request.
+    :type url: str
+
+    :param method: Method of the request, e.g. POST or GET.
+    :type method: str
+
+    :param content=None: Byte string of request body.
+    :type content=None: str
+
+    :param content_type=None: content-type header value for request.
+    :type content_type=None: str
+
+    :param accept_untrusted_content=False:
+        When True, allow requests that do not hash their content or
+        allow None type ``content`` and ``content_type``
+        arguments. Read :ref:`skipping-content-checks`
+        to learn more.
+    :type accept_untrusted_content=False: bool
+
+    :param localtime_offset_in_seconds=0:
+        Seconds to add to local time in case it's out of sync.
+    :type localtime_offset_in_seconds=0: float
+
+    :param timestamp_skew_in_seconds=60:
+        Max seconds until a message expires. Upon expiry,
+        :class:`mohawk.exc.TokenExpired` is raised.
+    :type timestamp_skew_in_seconds=60: float
+
+    .. _`Hawk`: https://github.com/hueniverse/hawk
+    """
+    #: Value suitable for a ``Server-Authorization`` header.
+    response_header = None
+
+    def __init__(self,
+                 credentials_map,
+                 request_header,
+                 url,
+                 method,
+                 content=None,
+                 content_type=None,
+                 seen_nonce=None,
+                 localtime_offset_in_seconds=0,
+                 accept_untrusted_content=False,
+                 timestamp_skew_in_seconds=default_ts_skew_in_seconds,
+                 **auth_kw):
+
+        self.response_header = None  # make into property that can raise exc?
+        self.credentials_map = credentials_map
+        self.seen_nonce = seen_nonce
+
+        log.debug('accepting request {header}'.format(header=request_header))
+
+        if not request_header:
+            raise MissingAuthorization()
+
+        parsed_header = parse_authorization_header(request_header)
+
+        try:
+            credentials = self.credentials_map(parsed_header['id'])
+        except LookupError:
+            etype, val, tb = sys.exc_info()
+            log.debug('Catching {etype}: {val}'.format(etype=etype, val=val))
+            raise CredentialsLookupError(
+                'Could not find credentials for ID {0}'
+                .format(parsed_header['id']))
+        validate_credentials(credentials)
+
+        resource = Resource(url=url,
+                            method=method,
+                            ext=parsed_header.get('ext', None),
+                            app=parsed_header.get('app', None),
+                            dlg=parsed_header.get('dlg', None),
+                            credentials=credentials,
+                            nonce=parsed_header['nonce'],
+                            seen_nonce=self.seen_nonce,
+                            content=content,
+                            timestamp=parsed_header['ts'],
+                            content_type=content_type)
+
+        self._authorize(
+            'header', parsed_header, resource,
+            timestamp_skew_in_seconds=timestamp_skew_in_seconds,
+            localtime_offset_in_seconds=localtime_offset_in_seconds,
+            accept_untrusted_content=accept_untrusted_content,
+            **auth_kw)
+
+        # Now that we verified an incoming request, we can re-use some of its
+        # properties to build our response header.
+
+        self.parsed_header = parsed_header
+        self.resource = resource
+
+    def respond(self,
+                content=None,
+                content_type=None,
+                always_hash_content=True,
+                ext=None):
+        """
+        Respond to the request.
+
+        This generates the :attr:`mohawk.Receiver.response_header`
+        attribute.
+
+        :param content=None: Byte string of response body that will be sent.
+        :type content=None: str
+
+        :param content_type=None: content-type header value for response.
+        :type content_type=None: str
+
+        :param always_hash_content=True:
+            When True, ``content`` and ``content_type`` cannot be None.
+            Read :ref:`skipping-content-checks` to learn more.
+        :type always_hash_content=True: bool
+
+        :param ext=None:
+            An external `Hawk`_ string. If not None, this value will be
+            signed so that the sender can trust it.
+        :type ext=None: str
+
+        .. _`Hawk`: https://github.com/hueniverse/hawk
+        """
+
+        log.debug('generating response header')
+
+        resource = Resource(url=self.resource.url,
+                            credentials=self.resource.credentials,
+                            ext=ext,
+                            app=self.parsed_header.get('app', None),
+                            dlg=self.parsed_header.get('dlg', None),
+                            method=self.resource.method,
+                            content=content,
+                            content_type=content_type,
+                            always_hash_content=always_hash_content,
+                            nonce=self.parsed_header['nonce'],
+                            timestamp=self.parsed_header['ts'])
+
+        mac = calculate_mac('response', resource, resource.gen_content_hash())
+
+        self.response_header = self._make_header(resource, mac,
+                                                 additional_keys=['ext'])
+        return self.response_header
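(Editor's note: the server-side flow implied by the class above, sketched with a placeholder lookup_credentials callable and a generic request object:)

    receiver = Receiver(lookup_credentials,
                        request.headers['Authorization'],
                        request.url, request.method,
                        content=request.body,
                        content_type=request.headers['Content-Type'])

    # If construction succeeded, the request is authentic; now sign the reply:
    body = '{"ok": true}'
    header = receiver.respond(content=body, content_type='application/json')
    # Return `header` as the Server-Authorization response header.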
new file mode 100644
--- /dev/null
+++ b/third_party/python/mohawk/mohawk/sender.py
@@ -0,0 +1,178 @@
+import logging
+
+from .base import default_ts_skew_in_seconds, HawkAuthority, Resource
+from .util import (calculate_mac,
+                   parse_authorization_header,
+                   validate_credentials)
+
+__all__ = ['Sender']
+log = logging.getLogger(__name__)
+
+
+class Sender(HawkAuthority):
+    """
+    A Hawk authority that will emit requests and verify responses.
+
+    :param credentials: Dict of credentials with keys ``id``, ``key``,
+                        and ``algorithm``. See :ref:`usage` for an example.
+    :type credentials: dict
+
+    :param url: Absolute URL of the request.
+    :type url: str
+
+    :param method: Method of the request, e.g. POST or GET.
+    :type method: str
+
+    :param content=None: Byte string of request body.
+    :type content=None: str
+
+    :param content_type=None: content-type header value for request.
+    :type content_type=None: str
+
+    :param always_hash_content=True:
+        When True, ``content`` and ``content_type`` cannot be None.
+        Read :ref:`skipping-content-checks` to learn more.
+    :type always_hash_content=True: bool
+
+    :param nonce=None:
+        A string that when coupled with the timestamp will
+        uniquely identify this request to prevent replays.
+        If None, a nonce will be generated for you.
+    :type nonce=None: str
+
+    :param ext=None:
+        An external `Hawk`_ string. If not None, this value will be signed
+        so that the receiver can trust it.
+    :type ext=None: str
+
+    :param app=None:
+        A `Hawk`_ application string. If not None, this value will be signed
+        so that the receiver can trust it.
+    :type app=None: str
+
+    :param dlg=None:
+        A `Hawk`_ delegation string. If not None, this value will be signed
+        so that the receiver can trust it.
+    :type dlg=None: str
+
+    :param seen_nonce=None:
+        A callable that returns True if a nonce has been seen.
+        See :ref:`nonce` for details.
+    :type seen_nonce=None: callable
+
+    .. _`Hawk`: https://github.com/hueniverse/hawk
+    """
+    #: Value suitable for an ``Authorization`` header.
+    request_header = None
+
+    def __init__(self, credentials,
+                 url,
+                 method,
+                 content=None,
+                 content_type=None,
+                 always_hash_content=True,
+                 nonce=None,
+                 ext=None,
+                 app=None,
+                 dlg=None,
+                 seen_nonce=None,
+                 # For easier testing:
+                 _timestamp=None):
+
+        self.reconfigure(credentials)
+        self.request_header = None
+        self.seen_nonce = seen_nonce
+
+        log.debug('generating request header')
+        self.req_resource = Resource(url=url,
+                                     credentials=self.credentials,
+                                     ext=ext,
+                                     app=app,
+                                     dlg=dlg,
+                                     nonce=nonce,
+                                     method=method,
+                                     content=content,
+                                     always_hash_content=always_hash_content,
+                                     timestamp=_timestamp,
+                                     content_type=content_type)
+
+        mac = calculate_mac('header', self.req_resource,
+                            self.req_resource.gen_content_hash())
+        self.request_header = self._make_header(self.req_resource, mac)
+
+    def accept_response(self,
+                        response_header,
+                        content=None,
+                        content_type=None,
+                        accept_untrusted_content=False,
+                        localtime_offset_in_seconds=0,
+                        timestamp_skew_in_seconds=default_ts_skew_in_seconds,
+                        **auth_kw):
+        """
+        Accept a response to this request.
+
+        :param response_header:
+            A `Hawk`_ ``Server-Authorization`` header
+            such as one created by :class:`mohawk.Receiver`.
+        :type response_header: str
+
+        :param content=None: Byte string of the response body received.
+        :type content=None: str
+
+        :param content_type=None:
+            Content-Type header value of the response received.
+        :type content_type=None: str
+
+        :param accept_untrusted_content=False:
+            When True, allow responses that do not hash their content or
+            allow None type ``content`` and ``content_type``
+            arguments. Read :ref:`skipping-content-checks`
+            to learn more.
+        :type accept_untrusted_content=False: bool
+
+        :param localtime_offset_in_seconds=0:
+            Seconds to add to local time in case it's out of sync.
+        :type localtime_offset_in_seconds=0: float
+
+        :param timestamp_skew_in_seconds=60:
+            Max seconds until a message expires. Upon expiry,
+            :class:`mohawk.exc.TokenExpired` is raised.
+        :type timestamp_skew_in_seconds=60: float
+
+        .. _`Hawk`: https://github.com/hueniverse/hawk
+        """
+        log.debug('accepting response {header}'
+                  .format(header=response_header))
+
+        parsed_header = parse_authorization_header(response_header)
+
+        resource = Resource(ext=parsed_header.get('ext', None),
+                            content=content,
+                            content_type=content_type,
+                            # The following response attributes are
+                            # in reference to the original request,
+                            # not to the response header:
+                            timestamp=self.req_resource.timestamp,
+                            nonce=self.req_resource.nonce,
+                            url=self.req_resource.url,
+                            method=self.req_resource.method,
+                            app=self.req_resource.app,
+                            dlg=self.req_resource.dlg,
+                            credentials=self.credentials,
+                            seen_nonce=self.seen_nonce)
+
+        self._authorize(
+            'response', parsed_header, resource,
+            # Per the Node lib, a responder MACs the *sender's* timestamp.
+            # It does not create its own timestamp.
+            # I suppose a slow response could time out here. Maybe only check
+            # mac failures, not timeouts?
+            their_timestamp=resource.timestamp,
+            timestamp_skew_in_seconds=timestamp_skew_in_seconds,
+            localtime_offset_in_seconds=localtime_offset_in_seconds,
+            accept_untrusted_content=accept_untrusted_content,
+            **auth_kw)
+
+    def reconfigure(self, credentials):
+        validate_credentials(credentials)
+        self.credentials = credentials
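(Editor's note: both halves together, a runnable sketch mirroring the round trips exercised in mohawk/tests.py below:)

    from mohawk import Receiver, Sender

    credentials = {'id': 'my-id', 'key': 'some secret', 'algorithm': 'sha256'}

    sender = Sender(credentials, 'https://example.com/api', 'POST',
                    content='hello', content_type='text/plain')

    # Server: verify the request, then sign a response.
    receiver = Receiver(lambda id: credentials, sender.request_header,
                        'https://example.com/api', 'POST',
                        content='hello', content_type='text/plain')
    response_header = receiver.respond(content='ok', content_type='text/plain')

    # Client: verify the Server-Authorization header; raises on tampering.
    sender.accept_response(response_header, content='ok',
                           content_type='text/plain')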
new file mode 100644
--- /dev/null
+++ b/third_party/python/mohawk/mohawk/tests.py
@@ -0,0 +1,823 @@
+import sys
+from unittest import TestCase
+from base64 import b64decode, urlsafe_b64encode
+
+import mock
+from nose.tools import eq_, raises
+import six
+
+from . import Receiver, Sender
+from .base import Resource
+from .exc import (AlreadyProcessed,
+                  BadHeaderValue,
+                  CredentialsLookupError,
+                  InvalidCredentials,
+                  MacMismatch,
+                  MisComputedContentHash,
+                  MissingAuthorization,
+                  TokenExpired,
+                  InvalidBewit)
+from .util import (parse_authorization_header,
+                   utc_now,
+                   calculate_ts_mac,
+                   validate_credentials)
+from .bewit import (get_bewit,
+                    check_bewit,
+                    strip_bewit,
+                    parse_bewit)
+
+
+class Base(TestCase):
+
+    def setUp(self):
+        self.credentials = {
+            'id': 'my-hawk-id',
+            'key': 'my hAwK sekret',
+            'algorithm': 'sha256',
+        }
+
+        # This callable might be replaced by tests.
+        def seen_nonce(id, nonce, ts):
+            return False
+        self.seen_nonce = seen_nonce
+
+    def credentials_map(self, id):
+        # Pretend this is doing something more interesting, like looking up
+        # credentials by ID in a database.
+        if self.credentials['id'] != id:
+            raise LookupError('No credentials for Hawk ID {id}'
+                              .format(id=id))
+        return self.credentials
+
+
+class TestConfig(Base):
+
+    @raises(InvalidCredentials)
+    def test_no_id(self):
+        c = self.credentials.copy()
+        del c['id']
+        validate_credentials(c)
+
+    @raises(InvalidCredentials)
+    def test_no_key(self):
+        c = self.credentials.copy()
+        del c['key']
+        validate_credentials(c)
+
+    @raises(InvalidCredentials)
+    def test_no_algo(self):
+        c = self.credentials.copy()
+        del c['algorithm']
+        validate_credentials(c)
+
+    @raises(InvalidCredentials)
+    def test_no_credentials(self):
+        validate_credentials(None)
+
+    def test_non_dict_credentials(self):
+        class WeirdThing(object):
+            def __getitem__(self, key):
+                return 'whatever'
+        validate_credentials(WeirdThing())
+
+
+class TestSender(Base):
+
+    def setUp(self):
+        super(TestSender, self).setUp()
+        self.url = 'http://site.com/foo?bar=1'
+
+    def Sender(self, method='GET', **kw):
+        credentials = kw.pop('credentials', self.credentials)
+        kw.setdefault('content', '')
+        kw.setdefault('content_type', '')
+        sender = Sender(credentials, self.url, method, **kw)
+        return sender
+
+    def receive(self, request_header, url=None, method='GET', **kw):
+        credentials_map = kw.pop('credentials_map', self.credentials_map)
+        kw.setdefault('content', '')
+        kw.setdefault('content_type', '')
+        kw.setdefault('seen_nonce', self.seen_nonce)
+        return Receiver(credentials_map, request_header,
+                        url or self.url, method, **kw)
+
+    def test_get_ok(self):
+        method = 'GET'
+        sn = self.Sender(method=method)
+        self.receive(sn.request_header, method=method)
+
+    def test_post_ok(self):
+        method = 'POST'
+        sn = self.Sender(method=method)
+        self.receive(sn.request_header, method=method)
+
+    def test_post_content_ok(self):
+        method = 'POST'
+        content = 'foo=bar&baz=2'
+        sn = self.Sender(method=method, content=content)
+        self.receive(sn.request_header, method=method, content=content)
+
+    def test_post_content_type_ok(self):
+        method = 'POST'
+        content = '{"bar": "foobs"}'
+        content_type = 'application/json'
+        sn = self.Sender(method=method, content=content,
+                         content_type=content_type)
+        self.receive(sn.request_header, method=method, content=content,
+                     content_type=content_type)
+
+    def test_post_content_type_with_trailing_charset(self):
+        method = 'POST'
+        content = '{"bar": "foobs"}'
+        content_type = 'application/json; charset=utf8'
+        sn = self.Sender(method=method, content=content,
+                         content_type=content_type)
+        self.receive(sn.request_header, method=method, content=content,
+                     content_type='application/json; charset=other')
+
+    @raises(ValueError)
+    def test_missing_payload_details(self):
+        self.Sender(method='POST', content=None, content_type=None)
+
+    def test_skip_payload_hashing(self):
+        method = 'POST'
+        content = '{"bar": "foobs"}'
+        content_type = 'application/json'
+        sn = self.Sender(method=method, content=None, content_type=None,
+                         always_hash_content=False)
+        self.receive(sn.request_header, method=method, content=content,
+                     content_type=content_type,
+                     accept_untrusted_content=True)
+
+    @raises(ValueError)
+    def test_cannot_skip_content_only(self):
+        self.Sender(method='POST', content=None,
+                    content_type='application/json')
+
+    @raises(ValueError)
+    def test_cannot_skip_content_type_only(self):
+        self.Sender(method='POST', content='{"foo": "bar"}',
+                    content_type=None)
+
+    @raises(MacMismatch)
+    def test_tamper_with_host(self):
+        sn = self.Sender()
+        self.receive(sn.request_header, url='http://TAMPERED-WITH.com')
+
+    @raises(MacMismatch)
+    def test_tamper_with_method(self):
+        sn = self.Sender(method='GET')
+        self.receive(sn.request_header, method='POST')
+
+    @raises(MacMismatch)
+    def test_tamper_with_path(self):
+        sn = self.Sender()
+        self.receive(sn.request_header,
+                     url='http://site.com/TAMPERED?bar=1')
+
+    @raises(MacMismatch)
+    def test_tamper_with_query(self):
+        sn = self.Sender()
+        self.receive(sn.request_header,
+                     url='http://site.com/foo?bar=TAMPERED')
+
+    @raises(MacMismatch)
+    def test_tamper_with_scheme(self):
+        sn = self.Sender()
+        self.receive(sn.request_header, url='https://site.com/foo?bar=1')
+
+    @raises(MacMismatch)
+    def test_tamper_with_port(self):
+        sn = self.Sender()
+        self.receive(sn.request_header,
+                     url='http://site.com:8000/foo?bar=1')
+
+    @raises(MisComputedContentHash)
+    def test_tamper_with_content(self):
+        sn = self.Sender()
+        self.receive(sn.request_header, content='stuff=nope')
+
+    def test_non_ascii_content(self):
+        content = u'Ivan Kristi\u0107'
+        sn = self.Sender(content=content)
+        self.receive(sn.request_header, content=content)
+
+    @raises(MacMismatch)
+    def test_tamper_with_content_type(self):
+        sn = self.Sender(method='POST')
+        self.receive(sn.request_header, content_type='application/json')
+
+    @raises(AlreadyProcessed)
+    def test_nonce_fail(self):
+
+        def seen_nonce(id, nonce, ts):
+            return True
+
+        sn = self.Sender()
+
+        self.receive(sn.request_header, seen_nonce=seen_nonce)
+
+    def test_nonce_ok(self):
+
+        def seen_nonce(id, nonce, ts):
+            return False
+
+        sn = self.Sender(seen_nonce=seen_nonce)
+        self.receive(sn.request_header)
+
+    @raises(TokenExpired)
+    def test_expired_ts(self):
+        now = utc_now() - 120
+        sn = self.Sender(_timestamp=now)
+        self.receive(sn.request_header)
+
+    def test_expired_exception_reports_localtime(self):
+        now = utc_now()
+        ts = now - 120
+        sn = self.Sender(_timestamp=ts)  # force expiry
+
+        exc = None
+        with mock.patch('mohawk.base.utc_now') as fake_now:
+            fake_now.return_value = now
+            try:
+                self.receive(sn.request_header)
+            except:
+                etype, exc, tb = sys.exc_info()
+
+        eq_(type(exc), TokenExpired)
+        eq_(exc.localtime_in_seconds, now)
+
+    def test_localtime_offset(self):
+        now = utc_now() - 120
+        sn = self.Sender(_timestamp=now)
+        # Without an offset this will raise an expired exception.
+        self.receive(sn.request_header, localtime_offset_in_seconds=-120)
+
+    def test_localtime_skew(self):
+        now = utc_now() - 120
+        sn = self.Sender(_timestamp=now)
+        # Without an offset this will raise an expired exception.
+        self.receive(sn.request_header, timestamp_skew_in_seconds=120)
+
+    @raises(MacMismatch)
+    def test_hash_tampering(self):
+        sn = self.Sender()
+        header = sn.request_header.replace('hash="', 'hash="nope')
+        self.receive(header)
+
+    @raises(MacMismatch)
+    def test_bad_secret(self):
+        cfg = {
+            'id': 'my-hawk-id',
+            'key': 'INCORRECT; YOU FAIL',
+            'algorithm': 'sha256',
+        }
+        sn = self.Sender(credentials=cfg)
+        self.receive(sn.request_header)
+
+    @raises(MacMismatch)
+    def test_unexpected_algorithm(self):
+        cr = self.credentials.copy()
+        cr['algorithm'] = 'sha512'
+        sn = self.Sender(credentials=cr)
+
+        # Validate with mismatched credentials (sha256).
+        self.receive(sn.request_header)
+
+    @raises(InvalidCredentials)
+    def test_invalid_credentials(self):
+        cfg = self.credentials.copy()
+        # Create invalid credentials.
+        del cfg['algorithm']
+
+        self.Sender(credentials=cfg)
+
+    @raises(CredentialsLookupError)
+    def test_unknown_id(self):
+        cr = self.credentials.copy()
+        cr['id'] = 'someone-else'
+        sn = self.Sender(credentials=cr)
+
+        self.receive(sn.request_header)
+
+    @raises(MacMismatch)
+    def test_bad_ext(self):
+        sn = self.Sender(ext='my external data')
+
+        header = sn.request_header.replace('my external data', 'TAMPERED')
+        self.receive(header)
+
+    def test_ext_with_quotes(self):
+        sn = self.Sender(ext='quotes=""')
+        self.receive(sn.request_header)
+        parsed = parse_authorization_header(sn.request_header)
+        eq_(parsed['ext'], 'quotes=""')
+
+    def test_ext_with_new_line(self):
+        sn = self.Sender(ext="new line \n in the middle")
+        self.receive(sn.request_header)
+        parsed = parse_authorization_header(sn.request_header)
+        eq_(parsed['ext'], "new line \n in the middle")
+
+    def test_ext_with_equality_sign(self):
+        sn = self.Sender(ext="foo=bar&foo2=bar2;foo3=bar3")
+        self.receive(sn.request_header)
+        parsed = parse_authorization_header(sn.request_header)
+        eq_(parsed['ext'], "foo=bar&foo2=bar2;foo3=bar3")
+
+    @raises(BadHeaderValue)
+    def test_ext_with_illegal_chars(self):
+        self.Sender(ext="something like \t is illegal")
+
+    @raises(BadHeaderValue)
+    def test_ext_with_illegal_unicode(self):
+        self.Sender(ext=u'Ivan Kristi\u0107')
+
+    @raises(BadHeaderValue)
+    def test_ext_with_illegal_utf8(self):
+        # This isn't allowed because the escaped byte chars are out of
+        # range. It's a little odd but this is what the Node lib does
+        # implicitly with its regex.
+        self.Sender(ext=u'Ivan Kristi\u0107'.encode('utf8'))
+
+    def test_app_ok(self):
+        app = 'custom-app'
+        sn = self.Sender(app=app)
+        self.receive(sn.request_header)
+        parsed = parse_authorization_header(sn.request_header)
+        eq_(parsed['app'], app)
+
+    @raises(MacMismatch)
+    def test_tampered_app(self):
+        app = 'custom-app'
+        sn = self.Sender(app=app)
+        header = sn.request_header.replace(app, 'TAMPERED-WITH')
+        self.receive(header)
+
+    def test_dlg_ok(self):
+        dlg = 'custom-dlg'
+        sn = self.Sender(dlg=dlg)
+        self.receive(sn.request_header)
+        parsed = parse_authorization_header(sn.request_header)
+        eq_(parsed['dlg'], dlg)
+
+    @raises(MacMismatch)
+    def test_tampered_dlg(self):
+        dlg = 'custom-dlg'
+        sn = self.Sender(dlg=dlg, app='some-app')
+        header = sn.request_header.replace(dlg, 'TAMPERED-WITH')
+        self.receive(header)
+
+
+class TestReceiver(Base):
+
+    def setUp(self):
+        super(TestReceiver, self).setUp()
+        self.url = 'http://site.com/'
+        self.sender = None
+        self.receiver = None
+
+    def receive(self, method='GET', **kw):
+        url = kw.pop('url', self.url)
+        sender = kw.pop('sender', None)
+        sender_kw = kw.pop('sender_kw', {})
+        sender_kw.setdefault('content', '')
+        sender_kw.setdefault('content_type', '')
+        sender_url = kw.pop('sender_url', url)
+
+        credentials_map = kw.pop('credentials_map',
+                                 lambda id: self.credentials)
+
+        if sender:
+            self.sender = sender
+        else:
+            self.sender = Sender(self.credentials, sender_url, method,
+                                 **sender_kw)
+
+        kw.setdefault('content', '')
+        kw.setdefault('content_type', '')
+        self.receiver = Receiver(credentials_map,
+                                 self.sender.request_header, url, method,
+                                 **kw)
+
+    def respond(self, **kw):
+        accept_kw = kw.pop('accept_kw', {})
+        accept_kw.setdefault('content', '')
+        accept_kw.setdefault('content_type', '')
+        receiver = kw.pop('receiver', self.receiver)
+
+        kw.setdefault('content', '')
+        kw.setdefault('content_type', '')
+        receiver.respond(**kw)
+        self.sender.accept_response(receiver.response_header, **accept_kw)
+
+        return receiver.response_header
+
+    @raises(InvalidCredentials)
+    def test_invalid_credentials_lookup(self):
+        # Return invalid credentials.
+        self.receive(credentials_map=lambda *a: {})
+
+    def test_get_ok(self):
+        method = 'GET'
+        self.receive(method=method)
+        self.respond()
+
+    def test_post_ok(self):
+        method = 'POST'
+        self.receive(method=method)
+        self.respond()
+
+    @raises(MisComputedContentHash)
+    def test_respond_with_wrong_content(self):
+        self.receive()
+        self.respond(content='real content',
+                     accept_kw=dict(content='TAMPERED WITH'))
+
+    @raises(MisComputedContentHash)
+    def test_respond_with_wrong_content_type(self):
+        self.receive()
+        self.respond(content_type='text/html',
+                     accept_kw=dict(content_type='application/json'))
+
+    @raises(MissingAuthorization)
+    def test_missing_authorization(self):
+        Receiver(lambda id: self.credentials, None, '/', 'GET')
+
+    @raises(MacMismatch)
+    def test_respond_with_wrong_url(self):
+        self.receive(url='http://fakesite.com')
+        wrong_receiver = self.receiver
+
+        self.receive(url='http://realsite.com')
+
+        self.respond(receiver=wrong_receiver)
+
+    @raises(MacMismatch)
+    def test_respond_with_wrong_method(self):
+        self.receive(method='GET')
+        wrong_receiver = self.receiver
+
+        self.receive(method='POST')
+
+        self.respond(receiver=wrong_receiver)
+
+    @raises(MacMismatch)
+    def test_respond_with_wrong_nonce(self):
+        self.receive(sender_kw=dict(nonce='another-nonce'))
+        wrong_receiver = self.receiver
+
+        self.receive()
+
+        # The nonce must match the one sent in the original request.
+        self.respond(receiver=wrong_receiver)
+
+    def test_respond_with_unhashed_content(self):
+        self.receive()
+
+        self.respond(always_hash_content=False, content=None,
+                     content_type=None,
+                     accept_kw=dict(accept_untrusted_content=True))
+
+    @raises(TokenExpired)
+    def test_respond_with_expired_ts(self):
+        self.receive()
+        hdr = self.receiver.respond(content='', content_type='')
+
+        with mock.patch('mohawk.base.utc_now') as fn:
+            fn.return_value = 0  # force an expiry
+            try:
+                self.sender.accept_response(hdr, content='', content_type='')
+            except TokenExpired:
+                etype, exc, tb = sys.exc_info()
+                hdr = parse_authorization_header(exc.www_authenticate)
+                calculated = calculate_ts_mac(fn(), self.credentials)
+                if isinstance(calculated, six.binary_type):
+                    calculated = calculated.decode('ascii')
+                eq_(hdr['tsm'], calculated)
+                raise
+
+    def test_respond_with_bad_ts_skew_ok(self):
+        now = utc_now() - 120
+
+        self.receive()
+        hdr = self.receiver.respond(content='', content_type='')
+
+        with mock.patch('mohawk.base.utc_now') as fn:
+            fn.return_value = now
+
+            # Without the timestamp skew allowance this would raise
+            # TokenExpired.
+            self.sender.accept_response(hdr, content='', content_type='',
+                                        timestamp_skew_in_seconds=120)
+
+    def test_respond_with_ext(self):
+        self.receive()
+
+        ext = 'custom-ext'
+        self.respond(ext=ext)
+        header = parse_authorization_header(self.receiver.response_header)
+        eq_(header['ext'], ext)
+
+    @raises(MacMismatch)
+    def test_respond_with_wrong_app(self):
+        self.receive(sender_kw=dict(app='TAMPERED-WITH', dlg='delegation'))
+        self.receiver.respond(content='', content_type='')
+        wrong_receiver = self.receiver
+
+        self.receive(sender_kw=dict(app='real-app', dlg='delegation'))
+
+        self.sender.accept_response(wrong_receiver.response_header,
+                                    content='', content_type='')
+
+    @raises(MacMismatch)
+    def test_respond_with_wrong_dlg(self):
+        self.receive(sender_kw=dict(app='app', dlg='TAMPERED-WITH'))
+        self.receiver.respond(content='', content_type='')
+        wrong_receiver = self.receiver
+
+        self.receive(sender_kw=dict(app='app', dlg='real-dlg'))
+
+        self.sender.accept_response(wrong_receiver.response_header,
+                                    content='', content_type='')
+
+    @raises(MacMismatch)
+    def test_receive_wrong_method(self):
+        self.receive(method='GET')
+        wrong_sender = self.sender
+        self.receive(method='POST', sender=wrong_sender)
+
+    @raises(MacMismatch)
+    def test_receive_wrong_url(self):
+        self.receive(url='http://fakesite.com/')
+        wrong_sender = self.sender
+        self.receive(url='http://realsite.com/', sender=wrong_sender)
+
+    @raises(MisComputedContentHash)
+    def test_receive_wrong_content(self):
+        self.receive(sender_kw=dict(content='real request'),
+                     content='real request')
+        wrong_sender = self.sender
+        self.receive(content='TAMPERED WITH', sender=wrong_sender)
+
+    @raises(MisComputedContentHash)
+    def test_unexpected_unhashed_content(self):
+        self.receive(sender_kw=dict(content=None, content_type=None,
+                                    always_hash_content=False))
+
+    @raises(ValueError)
+    def test_cannot_receive_empty_content_only(self):
+        content_type = 'text/plain'
+        self.receive(sender_kw=dict(content='<content>',
+                                    content_type=content_type),
+                     content=None, content_type=content_type)
+
+    @raises(ValueError)
+    def test_cannot_receive_empty_content_type_only(self):
+        content = '<content>'
+        self.receive(sender_kw=dict(content=content,
+                                    content_type='text/plain'),
+                     content=content, content_type=None)
+
+    @raises(MisComputedContentHash)
+    def test_receive_wrong_content_type(self):
+        self.receive(sender_kw=dict(content_type='text/html'),
+                     content_type='text/html')
+        wrong_sender = self.sender
+
+        self.receive(content_type='application/json',
+                     sender=wrong_sender)
+
+
+class TestSendAndReceive(Base):
+
+    def test(self):
+        credentials = {
+            'id': 'some-id',
+            'key': 'some secret',
+            'algorithm': 'sha256'
+        }
+
+        url = 'https://my-site.com/'
+        method = 'POST'
+
+        # The client sends a request with a Hawk header.
+        content = 'foo=bar&baz=nooz'
+        content_type = 'application/x-www-form-urlencoded'
+
+        sender = Sender(credentials,
+                        url, method,
+                        content=content,
+                        content_type=content_type)
+
+        # The server receives a request and authorizes access.
+        receiver = Receiver(lambda id: credentials,
+                            sender.request_header,
+                            url, method,
+                            content=content,
+                            content_type=content_type)
+
+        # The server responds with a similar Hawk header.
+        content = 'we are friends'
+        content_type = 'text/plain'
+        receiver.respond(content=content,
+                         content_type=content_type)
+
+        # The client receives a response and authorizes access.
+        sender.accept_response(receiver.response_header,
+                               content=content,
+                               content_type=content_type)
+
+
+class TestBewit(Base):
+
+    # Test cases copied from
+    # https://github.com/hueniverse/hawk/blob/492632da51ecedd5f59ce96f081860ad24ce6532/test/uri.js
+
+    def setUp(self):
+        self.credentials = {
+            'id': '123456',
+            'key': '2983d45yun89q',
+            'algorithm': 'sha256',
+        }
+
+    def make_credential_lookup(self, credentials_map):
+        # Helper function to make a lookup function given a dictionary of
+        # credentials
+        def lookup(client_id):
+            # Raises a KeyError for a missing id, which is a subclass of
+            # LookupError.
+            return credentials_map[client_id]
+        return lookup
+
+    def test_bewit(self):
+        res = Resource(url='https://example.com/somewhere/over/the/rainbow',
+                       method='GET', credentials=self.credentials,
+                       timestamp=1356420407 + 300,
+                       nonce='',
+                       )
+        bewit = get_bewit(res)
+
+        expected = '123456\\1356420707\\IGYmLgIqLrCe8CxvKPs4JlWIA+UjWJJouwgARiVhCAg=\\'
+        eq_(b64decode(bewit).decode('ascii'), expected)
+
+    def test_bewit_with_binary_id(self):
+        # Check for exceptions in get_bewit call with binary id
+        binary_credentials = self.credentials.copy()
+        binary_credentials['id'] = binary_credentials['id'].encode('ascii')
+        res = Resource(url='https://example.com/somewhere/over/the/rainbow',
+                       method='GET', credentials=binary_credentials,
+                       timestamp=1356420407 + 300,
+                       nonce='',
+                       )
+        get_bewit(res)
+
+    def test_bewit_with_ext(self):
+        res = Resource(url='https://example.com/somewhere/over/the/rainbow',
+                       method='GET', credentials=self.credentials,
+                       timestamp=1356420407 + 300,
+                       nonce='',
+                       ext='xandyandz'
+                       )
+        bewit = get_bewit(res)
+
+        expected = '123456\\1356420707\\kscxwNR2tJpP1T1zDLNPbB5UiKIU9tOSJXTUdG7X9h8=\\xandyandz'
+        eq_(b64decode(bewit).decode('ascii'), expected)
+
+    def test_bewit_with_ext_and_backslashes(self):
+        credentials = self.credentials
+        credentials['id'] = '123\\456'
+        res = Resource(url='https://example.com/somewhere/over/the/rainbow',
+                       method='GET', credentials=self.credentials,
+                       timestamp=1356420407 + 300,
+                       nonce='',
+                       ext='xand\\yandz'
+                       )
+        bewit = get_bewit(res)
+
+        expected = '123456\\1356420707\\b82LLIxG5UDkaChLU953mC+SMrbniV1sb8KiZi9cSsc=\\xand\\yandz'
+        eq_(b64decode(bewit).decode('ascii'), expected)
+
+    def test_bewit_with_port(self):
+        res = Resource(url='https://example.com:8080/somewhere/over/the/rainbow',
+                       method='GET', credentials=self.credentials,
+                       timestamp=1356420407 + 300, nonce='', ext='xandyandz')
+        bewit = get_bewit(res)
+
+        expected = '123456\\1356420707\\hZbJ3P2cKEo4ky0C8jkZAkRyCZueg4WSNbxV7vq3xHU=\\xandyandz'
+        eq_(b64decode(bewit).decode('ascii'), expected)
+
+    @raises(ValueError)
+    def test_bewit_with_nonce(self):
+        res = Resource(url='https://example.com/somewhere/over/the/rainbow',
+                       method='GET', credentials=self.credentials,
+                       timestamp=1356420407 + 300,
+                       nonce='n1')
+        get_bewit(res)
+
+    @raises(ValueError)
+    def test_bewit_invalid_method(self):
+        res = Resource(url='https://example.com:8080/somewhere/over/the/rainbow',
+                       method='POST', credentials=self.credentials,
+                       timestamp=1356420407 + 300, nonce='')
+        get_bewit(res)
+
+    def test_strip_bewit(self):
+        bewit = b'123456\\1356420707\\IGYmLgIqLrCe8CxvKPs4JlWIA+UjWJJouwgARiVhCAg=\\'
+        bewit = urlsafe_b64encode(bewit).decode('ascii')
+        url = "https://example.com/somewhere/over/the/rainbow?bewit={bewit}".format(bewit=bewit)
+
+        raw_bewit, stripped_url = strip_bewit(url)
+        self.assertEquals(raw_bewit, bewit)
+        self.assertEquals(stripped_url, "https://example.com/somewhere/over/the/rainbow")
+
+    @raises(InvalidBewit)
+    def test_strip_url_without_bewit(self):
+        url = "https://example.com/somewhere/over/the/rainbow"
+        strip_bewit(url)
+
+    def test_parse_bewit(self):
+        bewit = b'123456\\1356420707\\IGYmLgIqLrCe8CxvKPs4JlWIA+UjWJJouwgARiVhCAg=\\'
+        bewit = urlsafe_b64encode(bewit).decode('ascii')
+        bewit = parse_bewit(bewit)
+        self.assertEquals(bewit.id, '123456')
+        self.assertEquals(bewit.expiration, '1356420707')
+        self.assertEquals(bewit.mac, 'IGYmLgIqLrCe8CxvKPs4JlWIA+UjWJJouwgARiVhCAg=')
+        self.assertEquals(bewit.ext, '')
+
+    def test_parse_bewit_with_ext(self):
+        bewit = b'123456\\1356420707\\IGYmLgIqLrCe8CxvKPs4JlWIA+UjWJJouwgARiVhCAg=\\xandyandz'
+        bewit = urlsafe_b64encode(bewit).decode('ascii')
+        bewit = parse_bewit(bewit)
+        self.assertEquals(bewit.id, '123456')
+        self.assertEquals(bewit.expiration, '1356420707')
+        self.assertEquals(bewit.mac, 'IGYmLgIqLrCe8CxvKPs4JlWIA+UjWJJouwgARiVhCAg=')
+        self.assertEquals(bewit.ext, 'xandyandz')
+
+    def test_parse_bewit_with_ext_and_backslashes(self):
+        bewit = b'123456\\1356420707\\IGYmLgIqLrCe8CxvKPs4JlWIA+UjWJJouwgARiVhCAg=\\xand\\yandz'
+        bewit = urlsafe_b64encode(bewit).decode('ascii')
+        bewit = parse_bewit(bewit)
+        self.assertEquals(bewit.id, '123456')
+        self.assertEquals(bewit.expiration, '1356420707')
+        self.assertEquals(bewit.mac, 'IGYmLgIqLrCe8CxvKPs4JlWIA+UjWJJouwgARiVhCAg=')
+        self.assertEquals(bewit.ext, 'xand\\yandz')
+
+    @raises(InvalidBewit)
+    def test_parse_invalid_bewit_with_only_one_part(self):
+        bewit = b'12345'
+        bewit = urlsafe_b64encode(bewit).decode('ascii')
+        bewit = parse_bewit(bewit)
+
+    @raises(InvalidBewit)
+    def test_parse_invalid_bewit_with_only_two_parts(self):
+        bewit = b'1\\2'
+        bewit = urlsafe_b64encode(bewit).decode('ascii')
+        bewit = parse_bewit(bewit)
+
+    def test_validate_bewit(self):
+        bewit = b'123456\\1356420707\\IGYmLgIqLrCe8CxvKPs4JlWIA+UjWJJouwgARiVhCAg=\\'
+        bewit = urlsafe_b64encode(bewit).decode('ascii')
+        url = "https://example.com/somewhere/over/the/rainbow?bewit={bewit}".format(bewit=bewit)
+        credential_lookup = self.make_credential_lookup({
+            self.credentials['id']: self.credentials,
+        })
+        self.assertTrue(check_bewit(url, credential_lookup=credential_lookup, now=1356420407 + 10))
+
+    def test_validate_bewit_with_ext(self):
+        bewit = b'123456\\1356420707\\kscxwNR2tJpP1T1zDLNPbB5UiKIU9tOSJXTUdG7X9h8=\\xandyandz'
+        bewit = urlsafe_b64encode(bewit).decode('ascii')
+        url = "https://example.com/somewhere/over/the/rainbow?bewit={bewit}".format(bewit=bewit)
+        credential_lookup = self.make_credential_lookup({
+            self.credentials['id']: self.credentials,
+        })
+        self.assertTrue(check_bewit(url, credential_lookup=credential_lookup, now=1356420407 + 10))
+
+    def test_validate_bewit_with_ext_and_backslashes(self):
+        bewit = b'123456\\1356420707\\b82LLIxG5UDkaChLU953mC+SMrbniV1sb8KiZi9cSsc=\\xand\\yandz'
+        bewit = urlsafe_b64encode(bewit).decode('ascii')
+        url = "https://example.com/somewhere/over/the/rainbow?bewit={bewit}".format(bewit=bewit)
+        credential_lookup = self.make_credential_lookup({
+            self.credentials['id']: self.credentials,
+        })
+        self.assertTrue(check_bewit(url, credential_lookup=credential_lookup, now=1356420407 + 10))
+
+    @raises(TokenExpired)
+    def test_validate_expired_bewit(self):
+        bewit = b'123456\\1356420707\\IGYmLgIqLrCe8CxvKPs4JlWIA+UjWJJouwgARiVhCAg=\\'
+        bewit = urlsafe_b64encode(bewit).decode('ascii')
+        url = "https://example.com/somewhere/over/the/rainbow?bewit={bewit}".format(bewit=bewit)
+        credential_lookup = self.make_credential_lookup({
+            self.credentials['id']: self.credentials,
+        })
+        check_bewit(url, credential_lookup=credential_lookup, now=1356420407 + 1000)
+
+    @raises(CredentialsLookupError)
+    def test_validate_bewit_with_unknown_credentials(self):
+        bewit = b'123456\\1356420707\\IGYmLgIqLrCe8CxvKPs4JlWIA+UjWJJouwgARiVhCAg=\\'
+        bewit = urlsafe_b64encode(bewit).decode('ascii')
+        url = "https://example.com/somewhere/over/the/rainbow?bewit={bewit}".format(bewit=bewit)
+        credential_lookup = self.make_credential_lookup({
+            'other_id': self.credentials,
+        })
+        check_bewit(url, credential_lookup=credential_lookup, now=1356420407 + 10)
new file mode 100644
--- /dev/null
+++ b/third_party/python/mohawk/mohawk/util.py
@@ -0,0 +1,267 @@
+from base64 import b64encode, urlsafe_b64encode
+import calendar
+import hashlib
+import hmac
+import logging
+import math
+import os
+import pprint
+import re
+import sys
+import time
+
+import six
+
+from .exc import (
+    BadHeaderValue,
+    HawkFail,
+    InvalidCredentials)
+
+
+HAWK_VER = 1
+log = logging.getLogger(__name__)
+allowable_header_keys = set(['id', 'ts', 'tsm', 'nonce', 'hash',
+                             'error', 'ext', 'mac', 'app', 'dlg'])
+
+
+def validate_credentials(creds):
+    if not hasattr(creds, '__getitem__'):
+        raise InvalidCredentials('credentials must be a dict-like object')
+    try:
+        creds['id']
+        creds['key']
+        creds['algorithm']
+    except KeyError:
+        etype, val, tb = sys.exc_info()
+        raise InvalidCredentials('{etype}: {val}'
+                                 .format(etype=etype, val=val))
+
+
+def random_string(length):
+    """Generates a random string for a given length."""
+    # this conservatively gets 8*length bits and then returns 6*length of
+    # them. Grabbing (6/8)*length bits could lose some entropy off the ends.
+    return urlsafe_b64encode(os.urandom(length))[:length]
+
+
+def calculate_payload_hash(payload, algorithm, content_type):
+    """Calculates a hash for a given payload."""
+    p_hash = hashlib.new(algorithm)
+
+    parts = []
+    parts.append('hawk.' + str(HAWK_VER) + '.payload\n')
+    parts.append(parse_content_type(content_type) + '\n')
+    parts.append(payload or '')
+    parts.append('\n')
+
+    for i, p in enumerate(parts):
+        # Make sure we are about to hash binary strings.
+        if not isinstance(p, six.binary_type):
+            p = p.encode('utf8')
+        p_hash.update(p)
+        parts[i] = p
+
+    log.debug('calculating payload hash from:\n{parts}'
+              .format(parts=pprint.pformat(parts)))
+
+    return b64encode(p_hash.digest())
+
+
+def calculate_mac(mac_type, resource, content_hash):
+    """Calculates a message authorization code (MAC)."""
+    normalized = normalize_string(mac_type, resource, content_hash)
+    log.debug(u'normalized resource for mac calc: {norm}'
+              .format(norm=normalized))
+    digestmod = getattr(hashlib, resource.credentials['algorithm'])
+
+    # Make sure we are about to hash binary strings.
+
+    if not isinstance(normalized, six.binary_type):
+        normalized = normalized.encode('utf8')
+    key = resource.credentials['key']
+    if not isinstance(key, six.binary_type):
+        key = key.encode('ascii')
+
+    result = hmac.new(key, normalized, digestmod)
+    return b64encode(result.digest())
+
+
+def calculate_ts_mac(ts, credentials):
+    """Calculates a message authorization code (MAC) for a timestamp."""
+    normalized = ('hawk.{hawk_ver}.ts\n{ts}\n'
+                  .format(hawk_ver=HAWK_VER, ts=ts))
+    log.debug(u'normalized resource for ts mac calc: {norm}'
+              .format(norm=normalized))
+    digestmod = getattr(hashlib, credentials['algorithm'])
+
+    if not isinstance(normalized, six.binary_type):
+        normalized = normalized.encode('utf8')
+    key = credentials['key']
+    if not isinstance(key, six.binary_type):
+        key = key.encode('ascii')
+
+    result = hmac.new(key, normalized, digestmod)
+    return b64encode(result.digest())
+
+
+def normalize_string(mac_type, resource, content_hash):
+    """Serializes mac_type and resource into a HAWK string."""
+
+    normalized = [
+        'hawk.' + str(HAWK_VER) + '.' + mac_type,
+        normalize_header_attr(resource.timestamp),
+        normalize_header_attr(resource.nonce),
+        normalize_header_attr(resource.method or ''),
+        normalize_header_attr(resource.name or ''),
+        normalize_header_attr(resource.host),
+        normalize_header_attr(resource.port),
+        normalize_header_attr(content_hash or '')
+    ]
+
+    # The blank lines are important. They follow what the Node Hawk lib does.
+
+    normalized.append(normalize_header_attr(resource.ext or ''))
+
+    if resource.app:
+        normalized.append(normalize_header_attr(resource.app))
+        normalized.append(normalize_header_attr(resource.dlg or ''))
+
+    # Add trailing new line.
+    normalized.append('')
+
+    normalized = '\n'.join(normalized)
+
+    return normalized
+
+
+def parse_content_type(content_type):
+    """Cleans up content_type."""
+    if content_type:
+        return content_type.split(';')[0].strip().lower()
+    else:
+        return ''
+
+
+def parse_authorization_header(auth_header):
+    """
+    Example Authorization header:
+
+        'Hawk id="dh37fgj492je", ts="1367076201", nonce="NPHgnG", ext="and
+        welcome!", mac="CeWHy4d9kbLGhDlkyw2Nh3PJ7SDOdZDa267KH4ZaNMY="'
+    """
+    attributes = {}
+
+    # Make sure we have a unicode object for consistency.
+    if isinstance(auth_header, six.binary_type):
+        auth_header = auth_header.decode('utf8')
+
+    parts = auth_header.split(',')
+    auth_scheme_parts = parts[0].split(' ')
+    if 'hawk' != auth_scheme_parts[0].lower():
+        raise HawkFail("Unknown scheme '{scheme}' when parsing header"
+                       .format(scheme=auth_scheme_parts[0].lower()))
+
+    # Replace 'Hawk key: value' with 'key: value'
+    # which matches the rest of parts
+    parts[0] = auth_scheme_parts[1]
+
+    for part in parts:
+        attr_parts = part.split('=')
+        key = attr_parts[0].strip()
+        if key not in allowable_header_keys:
+            raise HawkFail("Unknown Hawk key '{key}' when parsing header"
+                           .format(key=key))
+
+        if len(attr_parts) > 2:
+            attr_parts[1] = '='.join(attr_parts[1:])
+
+        # Chop off the surrounding quotation marks
+        value = attr_parts[1]
+
+        if attr_parts[1].find('"') == 0:
+            value = attr_parts[1][1:]
+
+        if value.find('"') > -1:
+            value = value[0:-1]
+
+        validate_header_attr(value, name=key)
+        value = unescape_header_attr(value)
+        attributes[key] = value
+
+    log.debug('parsed Hawk header: {header} into: \n{parsed}'
+              .format(header=auth_header, parsed=pprint.pformat(attributes)))
+    return attributes
+
+
+def strings_match(a, b):
+    # Constant-time string comparison; mitigates timing side-channel attacks.
+    if len(a) != len(b):
+        return False
+    result = 0
+
+    def byte_ints(buf):
+        for ch in buf:
+            # In Python 3, if we have a bytes object, iterating it will
+            # already get the integer value. In older pythons, we need
+            # to use ord().
+            if not isinstance(ch, int):
+                ch = ord(ch)
+            yield ch
+
+    for x, y in zip(byte_ints(a), byte_ints(b)):
+        result |= x ^ y
+    return result == 0
+
+
+def utc_now(offset_in_seconds=0.0):
+    # TODO: add support for SNTP server? See ntplib module.
+    return int(math.floor(calendar.timegm(time.gmtime()) +
+                          float(offset_in_seconds)))
+
+
+# Allowed value characters:
+# !#$%&'()*+,-./:;<=>?@[]^_`{|}~ and space, a-z, A-Z, 0-9, \, "
+_header_attribute_chars = re.compile(
+    r"^[ a-zA-Z0-9_\!#\$%&'\(\)\*\+,\-\./\:;<\=>\?@\[\]\^`\{\|\}~\"\\]*$")
+
+
+def validate_header_attr(val, name=None):
+    if not _header_attribute_chars.match(val):
+        raise BadHeaderValue('header value name={name} value={val} '
+                             'contained an illegal character'
+                             .format(name=name or '?', val=repr(val)))
+
+
+def escape_header_attr(val):
+
+    # Ensure we are working with Unicode for consistency.
+    if isinstance(val, six.binary_type):
+        val = val.decode('utf8')
+
+    # Escape quotes and slash like the hawk reference code.
+    val = val.replace('\\', '\\\\')
+    val = val.replace('"', '\\"')
+    val = val.replace('\n', '\\n')
+    return val
+
+
+def unescape_header_attr(val):
+    # Undo the hawk escaping.
+    val = val.replace('\\n', '\n')
+    val = val.replace('\\\\', '\\').replace('\\"', '"')
+    return val
+
+
+def prepare_header_val(val):
+    val = escape_header_attr(val)
+    validate_header_attr(val)
+    return val
+
+
+def normalize_header_attr(val):
+    if not val:
+        val = ''
+
+    # Normalize like the hawk reference code.
+    val = escape_header_attr(val)
+    return val
new file mode 100644
--- /dev/null
+++ b/third_party/python/mohawk/setup.cfg
@@ -0,0 +1,5 @@
+[egg_info]
+tag_build = 
+tag_date = 0
+tag_svn_revision = 0
+
new file mode 100644
--- /dev/null
+++ b/third_party/python/mohawk/setup.py
@@ -0,0 +1,25 @@
+from setuptools import setup, find_packages
+
+
+setup(name='mohawk',
+      version='0.3.4',
+      description="Library for Hawk HTTP authorization",
+      long_description='',
+      author='Kumar McMillan, Austin King',
+      author_email='kumar.mcmillan@gmail.com',
+      license='MPL 2.0 (Mozilla Public License)',
+      url='https://github.com/kumar303/mohawk',
+      include_package_data=True,
+      classifiers=[
+          'Intended Audience :: Developers',
+          'Natural Language :: English',
+          'Operating System :: OS Independent',
+          'Programming Language :: Python :: 2',
+          'Programming Language :: Python :: 3',
+          'Programming Language :: Python :: 2.6',
+          'Programming Language :: Python :: 2.7',
+          'Programming Language :: Python :: 3.3',
+          'Topic :: Internet :: WWW/HTTP',
+      ],
+      packages=find_packages(exclude=['tests']),
+      install_requires=['six'])
--- a/third_party/python/moz.build
+++ b/third_party/python/moz.build
@@ -39,16 +39,19 @@ with Files('jsmin/**'):
     BUG_COMPONENT = ('Firefox for Android', 'Build Config & IDE Support')
 
 with Files('lldbutils/**'):
     BUG_COMPONENT = ('Core', 'General')
 
 with Files('mock-1.0.0/**'):
     BUG_COMPONENT = ('Firefox Build System', 'General')
 
+with Files('mohawk/**'):
+    BUG_COMPONENT = ('Taskcluster', 'Platform Libraries')
+
 with Files('mozilla-version/**'):
     BUG_COMPONENT = ('Release Engineering', 'General Automation')
 
 with Files('psutil/**'):
     BUG_COMPONENT = ('Firefox Build System', 'General')
 
 with Files('py/**'):
     BUG_COMPONENT = ('Firefox Build System', 'General')
@@ -84,16 +87,19 @@ with Files('requirements.*'):
     BUG_COMPONENT = ('Firefox Build System', 'General')
 
 with Files('rsa/**'):
     BUG_COMPONENT = ('Core', 'Security: PSM')
 
 with Files('slugid/**'):
     BUG_COMPONENT = ('Taskcluster', 'Platform Libraries')
 
+with Files('taskcluster/**'):
+    BUG_COMPONENT = ('Taskcluster', 'Platform Libraries')
+
 with Files('virtualenv/**'):
     BUG_COMPONENT = ('Firefox Build System', 'General')
 
 with Files('voluptuous/**'):
     BUG_COMPONENT = ('Firefox Build System', 'Task Configuration')
 
 with Files('which/**'):
     BUG_COMPONENT = ('Firefox Build System', 'General')
--- a/third_party/python/requirements.in
+++ b/third_party/python/requirements.in
@@ -6,10 +6,11 @@ mozilla-version==0.3.0
 pathlib2==2.3.2
 pip-tools==3.0.0
 pipenv==2018.5.18
 psutil==5.4.3
 pytest==3.6.2
 python-hglib==2.4
 requests==2.9.1
 six==1.10.0
+taskcluster==4.0.1
 virtualenv==15.2.0
 voluptuous==0.11.5
--- a/third_party/python/requirements.txt
+++ b/third_party/python/requirements.txt
@@ -26,16 +26,20 @@ enum34==1.1.6 \
 funcsigs==1.0.2 \
     --hash=sha256:330cc27ccbf7f1e992e69fef78261dc7c6569012cf397db8d3de0234e6c937ca \
     --hash=sha256:a7bb0f2cf3a3fd1ab2732cb49eba4252c2af4240442415b4abce3b87022a8f50 \
     # via pytest
 jsmin==2.1.0 \
     --hash=sha256:5d07bf0251a4128e5e8e8eef603849b6b5741c337bff087731a248f9cc774f56
 json-e==2.7.0 \
     --hash=sha256:d8c1ec3f5bbc7728c3a504ebe58829f283c64eca230871e4eefe974b4cdaae4a
+mohawk==0.3.4 \
+    --hash=sha256:b3f85ffa93a5c7d2f9cc591246ef9f8ac4a9fa716bfd5bae0377699a2d89d78c \
+    --hash=sha256:e98b331d9fa9ece7b8be26094cbe2d57613ae882133cc755167268a984bc0ab3 \
+    # via taskcluster
 more-itertools==4.3.0 \
     --hash=sha256:c187a73da93e7a8acc0001572aebc7e3c69daf7bf6881a2cea10650bd4420092 \
     --hash=sha256:c476b5d3a34e12d40130bc2f935028b5f636df8f372dc2c1c01dc19681b2039e \
     --hash=sha256:fcbfeaea0be121980e15bc97b3817b5202ca73d0eae185b4550cbfce2a3ebb3d \
     # via pytest
 mozilla-version==0.3.0 \
     --hash=sha256:97f428f6a87f1a0569e03c39e446eeed87c3ec5d8300319d41e8348ef832e8ea
 pathlib2==2.3.2 \
@@ -85,16 +89,23 @@ scandir==1.9.0 \
     --hash=sha256:c14701409f311e7a9b7ec8e337f0815baf7ac95776cc78b419a1e6d49889a383 \
     --hash=sha256:c7708f29d843fc2764310732e41f0ce27feadde453261859ec0fca7865dfc41b \
     --hash=sha256:c9009c527929f6e25604aec39b0a43c3f831d2947d89d6caaab22f057b7055c8 \
     --hash=sha256:f5c71e29b4e2af7ccdc03a020c626ede51da471173b4a6ad1e904f2b2e04b4bd \
     # via pathlib2
 six==1.10.0 \
     --hash=sha256:0ff78c403d9bccf5a425a6d31a12aa6b47f1c21ca4dc2573a7e2f32a97335eb1 \
     --hash=sha256:105f8d68616f8248e24bf0e9372ef04d3cc10104f1980f54d57b2ce73a5ad56a
+slugid==1.0.7 \
+    --hash=sha256:6dab3c7eef0bb423fb54cb7752e0f466ddd0ee495b78b763be60e8a27f69e779 \
+    # via taskcluster
+taskcluster==4.0.1 \
+    --hash=sha256:27256511044346ac71a495d3c636f2add95c102b9b09f90d6fb1ea3e9949d311 \
+    --hash=sha256:99dd90bc1c566968868c8b07ede32f8e031cbccd52c7195a61e802679d461447 \
+    --hash=sha256:d0360063c1a3fcaaa514bb31c03954ba573d2b671df40a2ecfdfd9339cc8e93e
 virtualenv-clone==0.3.0 \
     --hash=sha256:4507071d81013fd03ea9930ec26bc8648b997927a11fa80e8ee81198b57e0ac7 \
     --hash=sha256:b5cfe535d14dc68dfc1d1bb4ac1209ea28235b91156e2bba8e250d291c3fb4f8 \
     # via pipenv
 virtualenv==15.2.0 \
     --hash=sha256:1d7e241b431e7afce47e77f8843a276f652699d1fa4f93b9d8ce0076fd7b0b54 \
     --hash=sha256:e8e05d4714a1c51a2f5921e62f547fcb0f713ebbe959e0a7f585cc8bef71d11f
 voluptuous==0.11.5 \
deleted file mode 100644
--- a/third_party/python/slugid/.gitignore
+++ /dev/null
@@ -1,57 +0,0 @@
-# Byte-compiled / optimized / DLL files
-__pycache__/
-*.py[cod]
-
-# C extensions
-*.so
-
-# Distribution / packaging
-.Python
-env/
-build/
-develop-eggs/
-dist/
-downloads/
-eggs/
-.eggs/
-lib/
-lib64/
-parts/
-sdist/
-var/
-*.egg-info/
-.installed.cfg
-*.egg
-
-# PyInstaller
-#  Usually these files are written by a python script from a template
-#  before PyInstaller builds the exe, so as to inject date/other infos into it.
-*.manifest
-*.spec
-
-# Installer logs
-pip-log.txt
-pip-delete-this-directory.txt
-
-# Unit test / coverage reports
-htmlcov/
-.tox/
-.coverage
-.coverage.*
-.cache
-nosetests.xml
-coverage.xml
-*,cover
-
-# Translations
-*.mo
-*.pot
-
-# Django stuff:
-*.log
-
-# Sphinx documentation
-docs/_build/
-
-# PyBuilder
-target/
deleted file mode 100644
--- a/third_party/python/slugid/.travis.yml
+++ /dev/null
@@ -1,27 +0,0 @@
-language: python
-python:
-  - 2.7
-
-install:
-  - pip install -r requirements.txt
-
-script:
-  - tox
-
-after_script:
-  - tox -e coveralls
-
-# currently cannot customise per user fork, see:
-# https://github.com/travis-ci/travis-ci/issues/1094
-# please comment out this section in your personal fork!
-notifications:
-  irc:
-    channels:
-      - "irc.mozilla.org#taskcluster-bots"
-    on_success: always
-    on_failure: always
-    template:
-      - "\x02%{repository}\x0314#%{build_number}\x03\x02 (%{branch} - %{commit} : %{author}): \x02\x0312%{message}\x02\x03"
-      - "\x02Change view\x02 : \x0314%{compare_url}\x03"
-      - "\x02Build details\x02 : \x0314%{build_url}\x03"
-      - "\x02Commit message\x02 : \x0314%{commit_message}\x03"
deleted file mode 100644
--- a/third_party/python/slugid/LICENSE
+++ /dev/null
@@ -1,363 +0,0 @@
-Mozilla Public License, version 2.0
-
-1. Definitions
-
-1.1. "Contributor"
-
-     means each individual or legal entity that creates, contributes to the
-     creation of, or owns Covered Software.
-
-1.2. "Contributor Version"
-
-     means the combination of the Contributions of others (if any) used by a
-     Contributor and that particular Contributor's Contribution.
-
-1.3. "Contribution"
-
-     means Covered Software of a particular Contributor.
-
-1.4. "Covered Software"
-
-     means Source Code Form to which the initial Contributor has attached the
-     notice in Exhibit A, the Executable Form of such Source Code Form, and
-     Modifications of such Source Code Form, in each case including portions
-     thereof.
-
-1.5. "Incompatible With Secondary Licenses"
-     means
-
-     a. that the initial Contributor has attached the notice described in
-        Exhibit B to the Covered Software; or
-
-     b. that the Covered Software was made available under the terms of
-        version 1.1 or earlier of the License, but not also under the terms of
-        a Secondary License.
-
-1.6. "Executable Form"
-
-     means any form of the work other than Source Code Form.
-
-1.7. "Larger Work"
-
-     means a work that combines Covered Software with other material, in a
-     separate file or files, that is not Covered Software.
-
-1.8. "License"
-
-     means this document.
-
-1.9. "Licensable"
-
-     means having the right to grant, to the maximum extent possible, whether
-     at the time of the initial grant or subsequently, any and all of the
-     rights conveyed by this License.
-
-1.10. "Modifications"
-
-     means any of the following:
-
-     a. any file in Source Code Form that results from an addition to,
-        deletion from, or modification of the contents of Covered Software; or
-
-     b. any new file in Source Code Form that contains any Covered Software.
-
-1.11. "Patent Claims" of a Contributor
-
-      means any patent claim(s), including without limitation, method,
-      process, and apparatus claims, in any patent Licensable by such
-      Contributor that would be infringed, but for the grant of the License,
-      by the making, using, selling, offering for sale, having made, import,
-      or transfer of either its Contributions or its Contributor Version.
-
-1.12. "Secondary License"
-
-      means either the GNU General Public License, Version 2.0, the GNU Lesser
-      General Public License, Version 2.1, the GNU Affero General Public
-      License, Version 3.0, or any later versions of those licenses.
-
-1.13. "Source Code Form"
-
-      means the form of the work preferred for making modifications.
-
-1.14. "You" (or "Your")
-
-      means an individual or a legal entity exercising rights under this
-      License. For legal entities, "You" includes any entity that controls, is
-      controlled by, or is under common control with You. For purposes of this
-      definition, "control" means (a) the power, direct or indirect, to cause
-      the direction or management of such entity, whether by contract or
-      otherwise, or (b) ownership of more than fifty percent (50%) of the
-      outstanding shares or beneficial ownership of such entity.
-
-
-2. License Grants and Conditions
-
-2.1. Grants
-
-     Each Contributor hereby grants You a world-wide, royalty-free,
-     non-exclusive license:
-
-     a. under intellectual property rights (other than patent or trademark)
-        Licensable by such Contributor to use, reproduce, make available,
-        modify, display, perform, distribute, and otherwise exploit its
-        Contributions, either on an unmodified basis, with Modifications, or
-        as part of a Larger Work; and
-
-     b. under Patent Claims of such Contributor to make, use, sell, offer for
-        sale, have made, import, and otherwise transfer either its
-        Contributions or its Contributor Version.
-
-2.2. Effective Date
-
-     The licenses granted in Section 2.1 with respect to any Contribution
-     become effective for each Contribution on the date the Contributor first
-     distributes such Contribution.
-
-2.3. Limitations on Grant Scope
-
-     The licenses granted in this Section 2 are the only rights granted under
-     this License. No additional rights or licenses will be implied from the
-     distribution or licensing of Covered Software under this License.
-     Notwithstanding Section 2.1(b) above, no patent license is granted by a
-     Contributor:
-
-     a. for any code that a Contributor has removed from Covered Software; or
-
-     b. for infringements caused by: (i) Your and any other third party's
-        modifications of Covered Software, or (ii) the combination of its
-        Contributions with other software (except as part of its Contributor
-        Version); or
-
-     c. under Patent Claims infringed by Covered Software in the absence of
-        its Contributions.
-
-     This License does not grant any rights in the trademarks, service marks,
-     or logos of any Contributor (except as may be necessary to comply with
-     the notice requirements in Section 3.4).
-
-2.4. Subsequent Licenses
-
-     No Contributor makes additional grants as a result of Your choice to
-     distribute the Covered Software under a subsequent version of this
-     License (see Section 10.2) or under the terms of a Secondary License (if
-     permitted under the terms of Section 3.3).
-
-2.5. Representation
-
-     Each Contributor represents that the Contributor believes its
-     Contributions are its original creation(s) or it has sufficient rights to
-     grant the rights to its Contributions conveyed by this License.
-
-2.6. Fair Use
-
-     This License is not intended to limit any rights You have under
-     applicable copyright doctrines of fair use, fair dealing, or other
-     equivalents.
-
-2.7. Conditions
-
-     Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in
-     Section 2.1.
-
-
-3. Responsibilities
-
-3.1. Distribution of Source Form
-
-     All distribution of Covered Software in Source Code Form, including any
-     Modifications that You create or to which You contribute, must be under
-     the terms of this License. You must inform recipients that the Source
-     Code Form of the Covered Software is governed by the terms of this
-     License, and how they can obtain a copy of this License. You may not
-     attempt to alter or restrict the recipients' rights in the Source Code
-     Form.
-
-3.2. Distribution of Executable Form
-
-     If You distribute Covered Software in Executable Form then:
-
-     a. such Covered Software must also be made available in Source Code Form,
-        as described in Section 3.1, and You must inform recipients of the
-        Executable Form how they can obtain a copy of such Source Code Form by
-        reasonable means in a timely manner, at a charge no more than the cost
-        of distribution to the recipient; and
-
-     b. You may distribute such Executable Form under the terms of this
-        License, or sublicense it under different terms, provided that the
-        license for the Executable Form does not attempt to limit or alter the
-        recipients' rights in the Source Code Form under this License.
-
-3.3. Distribution of a Larger Work
-
-     You may create and distribute a Larger Work under terms of Your choice,
-     provided that You also comply with the requirements of this License for
-     the Covered Software. If the Larger Work is a combination of Covered
-     Software with a work governed by one or more Secondary Licenses, and the
-     Covered Software is not Incompatible With Secondary Licenses, this
-     License permits You to additionally distribute such Covered Software
-     under the terms of such Secondary License(s), so that the recipient of
-     the Larger Work may, at their option, further distribute the Covered
-     Software under the terms of either this License or such Secondary
-     License(s).
-
-3.4. Notices
-
-     You may not remove or alter the substance of any license notices
-     (including copyright notices, patent notices, disclaimers of warranty, or
-     limitations of liability) contained within the Source Code Form of the
-     Covered Software, except that You may alter any license notices to the
-     extent required to remedy known factual inaccuracies.
-
-3.5. Application of Additional Terms
-
-     You may choose to offer, and to charge a fee for, warranty, support,
-     indemnity or liability obligations to one or more recipients of Covered
-     Software. However, You may do so only on Your own behalf, and not on
-     behalf of any Contributor. You must make it absolutely clear that any
-     such warranty, support, indemnity, or liability obligation is offered by
-     You alone, and You hereby agree to indemnify every Contributor for any
-     liability incurred by such Contributor as a result of warranty, support,
-     indemnity or liability terms You offer. You may include additional
-     disclaimers of warranty and limitations of liability specific to any
-     jurisdiction.
-
-4. Inability to Comply Due to Statute or Regulation
-
-   If it is impossible for You to comply with any of the terms of this License
-   with respect to some or all of the Covered Software due to statute,
-   judicial order, or regulation then You must: (a) comply with the terms of
-   this License to the maximum extent possible; and (b) describe the
-   limitations and the code they affect. Such description must be placed in a
-   text file included with all distributions of the Covered Software under
-   this License. Except to the extent prohibited by statute or regulation,
-   such description must be sufficiently detailed for a recipient of ordinary
-   skill to be able to understand it.
-
-5. Termination
-
-5.1. The rights granted under this License will terminate automatically if You
-     fail to comply with any of its terms. However, if You become compliant,
-     then the rights granted under this License from a particular Contributor
-     are reinstated (a) provisionally, unless and until such Contributor
-     explicitly and finally terminates Your grants, and (b) on an ongoing
-     basis, if such Contributor fails to notify You of the non-compliance by
-     some reasonable means prior to 60 days after You have come back into
-     compliance. Moreover, Your grants from a particular Contributor are
-     reinstated on an ongoing basis if such Contributor notifies You of the
-     non-compliance by some reasonable means, this is the first time You have
-     received notice of non-compliance with this License from such
-     Contributor, and You become compliant prior to 30 days after Your receipt
-     of the notice.
-
-5.2. If You initiate litigation against any entity by asserting a patent
-     infringement claim (excluding declaratory judgment actions,
-     counter-claims, and cross-claims) alleging that a Contributor Version
-     directly or indirectly infringes any patent, then the rights granted to
-     You by any and all Contributors for the Covered Software under Section
-     2.1 of this License shall terminate.
-
-5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user
-     license agreements (excluding distributors and resellers) which have been
-     validly granted by You or Your distributors under this License prior to
-     termination shall survive termination.
-
-6. Disclaimer of Warranty
-
-   Covered Software is provided under this License on an "as is" basis,
-   without warranty of any kind, either expressed, implied, or statutory,
-   including, without limitation, warranties that the Covered Software is free
-   of defects, merchantable, fit for a particular purpose or non-infringing.
-   The entire risk as to the quality and performance of the Covered Software
-   is with You. Should any Covered Software prove defective in any respect,
-   You (not any Contributor) assume the cost of any necessary servicing,
-   repair, or correction. This disclaimer of warranty constitutes an essential
-   part of this License. No use of  any Covered Software is authorized under
-   this License except under this disclaimer.
-
-7. Limitation of Liability
-
-   Under no circumstances and under no legal theory, whether tort (including
-   negligence), contract, or otherwise, shall any Contributor, or anyone who
-   distributes Covered Software as permitted above, be liable to You for any
-   direct, indirect, special, incidental, or consequential damages of any
-   character including, without limitation, damages for lost profits, loss of
-   goodwill, work stoppage, computer failure or malfunction, or any and all
-   other commercial damages or losses, even if such party shall have been
-   informed of the possibility of such damages. This limitation of liability
-   shall not apply to liability for death or personal injury resulting from
-   such party's negligence to the extent applicable law prohibits such
-   limitation. Some jurisdictions do not allow the exclusion or limitation of
-   incidental or consequential damages, so this exclusion and limitation may
-   not apply to You.
-
-8. Litigation
-
-   Any litigation relating to this License may be brought only in the courts
-   of a jurisdiction where the defendant maintains its principal place of
-   business and such litigation shall be governed by laws of that
-   jurisdiction, without reference to its conflict-of-law provisions. Nothing
-   in this Section shall prevent a party's ability to bring cross-claims or
-   counter-claims.
-
-9. Miscellaneous
-
-   This License represents the complete agreement concerning the subject
-   matter hereof. If any provision of this License is held to be
-   unenforceable, such provision shall be reformed only to the extent
-   necessary to make it enforceable. Any law or regulation which provides that
-   the language of a contract shall be construed against the drafter shall not
-   be used to construe this License against a Contributor.
-
-
-10. Versions of the License
-
-10.1. New Versions
-
-      Mozilla Foundation is the license steward. Except as provided in Section
-      10.3, no one other than the license steward has the right to modify or
-      publish new versions of this License. Each version will be given a
-      distinguishing version number.
-
-10.2. Effect of New Versions
-
-      You may distribute the Covered Software under the terms of the version
-      of the License under which You originally received the Covered Software,
-      or under the terms of any subsequent version published by the license
-      steward.
-
-10.3. Modified Versions
-
-      If you create software not governed by this License, and you want to
-      create a new license for such software, you may create and use a
-      modified version of this License if you rename the license and remove
-      any references to the name of the license steward (except to note that
-      such modified license differs from this License).
-
-10.4. Distributing Source Code Form that is Incompatible With Secondary
-      Licenses If You choose to distribute Source Code Form that is
-      Incompatible With Secondary Licenses under the terms of this version of
-      the License, the notice described in Exhibit B of this License must be
-      attached.
-
-Exhibit A - Source Code Form License Notice
-
-      This Source Code Form is subject to the
-      terms of the Mozilla Public License, v.
-      2.0. If a copy of the MPL was not
-      distributed with this file, You can
-      obtain one at
-      http://mozilla.org/MPL/2.0/.
-
-If it is not possible or desirable to put the notice in a particular file,
-then You may include the notice in a location (such as a LICENSE file in a
-relevant directory) where a recipient would be likely to look for such a
-notice.
-
-You may add additional accurate notices of copyright ownership.
-
-Exhibit B - "Incompatible With Secondary Licenses" Notice
-
-      This Source Code Form is "Incompatible
-      With Secondary Licenses", as defined by
-      the Mozilla Public License, v. 2.0.
-
deleted file mode 100644
--- a/third_party/python/slugid/requirements.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-tox
-twine
deleted file mode 100644
--- a/third_party/python/slugid/test.py
+++ /dev/null
@@ -1,167 +0,0 @@
-# Licensed under the Mozilla Public Licence 2.0.
-# https://www.mozilla.org/en-US/MPL/2.0
-
-import uuid
-import slugid
-
-
-def testEncode():
-    """ Test that we can correctly encode a "non-nice" uuid (with first bit
-    set) to its known slug. The specific uuid was chosen since it has a slug
-    which contains both `-` and `_` characters."""
-
-    # 10000000010011110011111111001000110111111100101101001011000001101000100111111011101011101111101011010101111000011000011101010100....
-    # <8 ><0 ><4 ><f ><3 ><f ><c ><8 ><d ><f ><c ><b ><4 ><b ><0 ><6 ><8 ><9 ><f ><b ><a ><e ><f ><a ><d ><5 ><e ><1 ><8 ><7 ><5 ><4 >
-    # < g  >< E  >< 8  >< _  >< y  >< N  >< _  >< L  >< S  >< w  >< a  >< J  >< -  >< 6  >< 7  >< 6  >< 1  >< e  >< G  >< H  >< V  >< A  >
-    uuid_ = uuid.UUID('{804f3fc8-dfcb-4b06-89fb-aefad5e18754}')
-    expectedSlug = 'gE8_yN_LSwaJ-6761eGHVA'
-    actualSlug = slugid.encode(uuid_)
-
-    assert expectedSlug == actualSlug, "UUID not correctly encoded into slug: '" + expectedSlug + "' != '" + actualSlug + "'"
-
-
-def testDecode():
-    """ Test that we can decode a "non-nice" slug (first bit of uuid is set)
-    that begins with `-`"""
-
-    # 11111011111011111011111011111011111011111011111001000011111011111011111111111111111111111111111111111111111111111111111111111101....
-    # <f ><b ><e ><f ><b ><e ><f ><b ><e ><f ><b ><e ><4 ><3 ><e ><f ><b ><f ><f ><f ><f ><f ><f ><f ><f ><f ><f ><f ><f ><f ><f ><d >
-    # < -  >< -  >< -  >< -  >< -  >< -  >< -  >< -  >< Q  >< -  >< -  >< -  >< _  >< _  >< _  >< _  >< _  >< _  >< _  >< _  >< _  >< Q  >
-    slug = '--------Q--__________Q'
-    expectedUuid = uuid.UUID('{fbefbefb-efbe-43ef-bfff-fffffffffffd}')
-    actualUuid = slugid.decode(slug)
-
-    assert expectedUuid == actualUuid, "Slug not correctly decoded into uuid: '" + str(expectedUuid) + "' != '" + str(actualUuid) + "'"
-
-
-def testUuidEncodeDecode():
-    """ Test that 10000 v4 uuids are unchanged after encoding and then decoding them"""
-
-    for i in range(0, 10000):
-        uuid1 = uuid.uuid4()
-        slug = slugid.encode(uuid1)
-        uuid2 = slugid.decode(slug)
-
-        assert uuid1 == uuid2, "Encode and decode isn't identity: '" + str(uuid1) + "' != '" + str(uuid2) + "'"
-
-
-def testSlugDecodeEncode():
-    """ Test that 10000 v4 slugs are unchanged after decoding and then encoding them."""
-
-    for i in range(0, 10000):
-        slug1 = slugid.v4()
-        uuid_ = slugid.decode(slug1)
-        slug2 = slugid.encode(uuid_)
-
-        assert slug1 == slug2, "Decode and encode isn't identity"
-
-
-def testSpreadNice():
-    """ Make sure that all allowed characters can appear in all allowed
-    positions within the "nice" slug. In this test we generate over a thousand
-    slugids, and make sure that every possible allowed character per position
-    appears at least once in the sample of all slugids generated. We also make
-    sure that no other characters appear in positions in which they are not
-    allowed.
-
-    base 64 encoding char -> value:
-    ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_
-    0         1         2         3         4         5          6
-    0123456789012345678901234567890123456789012345678901234567890123
-
-    e.g. from this we can see 'j' represents 35 in base64
-
-    The following comments show the 128 bits of the v4 uuid in binary, hex and
-    base 64 encodings. The 6 fixed bits (`0`/`1`) according to RFC 4122, plus
-    the first (most significant) fixed bit (`0`) are shown among the 121
-    arbitrary value bits (`.`/`x`). The `x` means the same as `.` but just
-    highlights which bits are grouped together for the respective encoding.
-
-    schema:
-         <..........time_low............><...time_mid...><time_hi_+_vers><clk_hi><clk_lo><.....................node.....................>
-
-    bin: 0xxx............................................0100............10xx............................................................
-    hex:  $A <01><02><03><04><05><06><07><08><09><10><11> 4  <13><14><15> $B <17><18><19><20><21><22><23><24><25><26><27><28><29><30><31>
-
-    => $A in {0, 1, 2, 3, 4, 5, 6, 7} (0b0xxx)
-    => $B in {8, 9, A, B} (0b10xx)
-
-    bin: 0xxxxx..........................................0100xx......xxxx10............................................................xx0000
-    b64:   $C  < 01 >< 02 >< 03 >< 04 >< 05 >< 06 >< 07 >  $D  < 09 >  $E  < 11 >< 12 >< 13 >< 14 >< 15 >< 16 >< 17 >< 18 >< 19 >< 20 >  $F
-
-    => $C in {A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, V, W, X, Y, Z, a, b, c, d, e, f} (0b0xxxxx)
-    => $D in {Q, R, S, T} (0b0100xx)
-    => $E in {C, G, K, O, S, W, a, e, i, m, q, u, y, 2, 6, -} (0bxxxx10)
-    => $F in {A, Q, g, w} (0bxx0000)"""
-
-    charsAll = ''.join(sorted('ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_'))
-    # 0 - 31: 0b0xxxxx
-    charsC = ''.join(sorted('ABCDEFGHIJKLMNOPQRSTUVWXYZabcdef'))
-    # 16, 17, 18, 19: 0b0100xx
-    charsD = ''.join(sorted('QRST'))
-    # 2, 6, 10, 14, 18, 22, 26, 30, 34, 38, 42, 46, 50, 54, 58, 62: 0bxxxx10
-    charsE = ''.join(sorted('CGKOSWaeimquy26-'))
-    # 0, 16, 32, 48: 0bxx0000
-    charsF = ''.join(sorted('AQgw'))
-    expected = [charsC, charsAll, charsAll, charsAll, charsAll, charsAll, charsAll, charsAll, charsD, charsAll, charsE, charsAll, charsAll, charsAll, charsAll, charsAll, charsAll, charsAll, charsAll, charsAll, charsAll, charsF]
-    spreadTest(slugid.nice, expected)
-
-
-def testSpreadV4():
-    """ This test is the same as niceSpreadTest but for slugid.v4() rather than
-    slugid.nice(). The only difference is that a v4() slug can start with any of
-    the base64 characters since the first six bits of the uuid are random."""
-
-    charsAll = ''.join(sorted('ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_'))
-    # 16, 17, 18, 19: 0b0100xx
-    charsD = ''.join(sorted('QRST'))
-    # 2, 6, 10, 14, 18, 22, 26, 30, 34, 38, 42, 46, 50, 54, 58, 62: 0bxxxx10
-    charsE = ''.join(sorted('CGKOSWaeimquy26-'))
-    # 0, 16, 32, 48: 0bxx0000
-    charsF = ''.join(sorted('AQgw'))
-    expected = [charsAll, charsAll, charsAll, charsAll, charsAll, charsAll, charsAll, charsAll, charsD, charsAll, charsE, charsAll, charsAll, charsAll, charsAll, charsAll, charsAll, charsAll, charsAll, charsAll, charsAll, charsF]
-    spreadTest(slugid.v4, expected)
-
-
-def spreadTest(generator, expected):
-    """ `spreadTest` runs a test against the `generator` function, to check that
-    when calling it 64*40 times, the range of characters per string position it
-    returns matches the array `expected`, where each entry in `expected` is a
-    string of all possible characters that should appear in that position in the
-    string, at least once in the sample of 64*40 responses from the `generator`
-    function"""
-    # k is an array which stores which characters were found at which
-    # positions. It has one entry per slugid character, therefore 22 entries.
-    # Each entry is a dict with a key for each character found, and its value
-    # as the number of times that character appeared at that position in the
-    # slugid in the large sample of slugids generated in this test.
-    k = [{}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}]
-
-    # Generate a large sample of slugids, and record what characters appeared
-    # where...  A monte-carlo test has demonstrated that with 64 * 20
-    # iterations, no failure occurred in 1000 simulations, so 64 * 40 should be
-    # suitably large to rule out false positives.
-    for i in range(0, 64 * 40):
-        slug = generator()
-        assert len(slug) == 22
-        for j in range(0, 22):
-            if slug[j] in k[j]:
-                k[j][slug[j]] = k[j][slug[j]] + 1
-            else:
-                k[j][slug[j]] = 1
-
-    # Compose results into an array `actual`, for comparison with `expected`
-    actual = []
-    for j in range(0, len(k)):
-        actual.append('')
-        for a in k[j].keys():
-            if k[j][a] > 0:
-                actual[j] += a
-        # sort for easy comparison
-        actual[j] = ''.join(sorted(actual[j]))
-
-    assert arraysEqual(expected, actual), "In a large sample of generated slugids, the range of characters found per character position in the sample did not match expected results.\n\nExpected: " + str(expected) + "\n\nActual: " + str(actual)
-
-def arraysEqual(a, b):
-    """ returns True if arrays a and b are equal"""
-    return cmp(a, b) == 0
deleted file mode 100644
--- a/third_party/python/slugid/tox.ini
+++ /dev/null
@@ -1,26 +0,0 @@
-[tox]
-envlist = py27
-
-
-[base]
-deps =
-    coverage
-    nose
-    rednose
-commands =
-    coverage run --source slugid --branch {envbindir}/nosetests -v --with-xunit --rednose --force-color
-
-
-[testenv:py27]
-deps=
-    {[base]deps}
-basepython = python2.7
-commands =
-    {[base]commands}
-
-
-[testenv:coveralls]
-deps=
-    python-coveralls
-commands=
-    coveralls
new file mode 100644
--- /dev/null
+++ b/third_party/python/taskcluster/PKG-INFO
@@ -0,0 +1,13 @@
+Metadata-Version: 1.1
+Name: taskcluster
+Version: 4.0.1
+Summary: Python client for Taskcluster
+Home-page: https://github.com/taskcluster/taskcluster-client.py
+Author: John Ford
+Author-email: jhford@mozilla.com
+License: UNKNOWN
+Description: UNKNOWN
+Platform: UNKNOWN
+Classifier: Programming Language :: Python :: 2.7
+Classifier: Programming Language :: Python :: 3.5
+Classifier: Programming Language :: Python :: 3.6
new file mode 100644
--- /dev/null
+++ b/third_party/python/taskcluster/README.md
@@ -0,0 +1,4266 @@
+Taskcluster Client Library in Python
+======================================
+
+[![Build Status](https://travis-ci.org/taskcluster/taskcluster-client.py.svg?branch=master)](https://travis-ci.org/taskcluster/taskcluster-client.py)
+
+This is a library for interacting with Taskcluster from Python programs.  It
+presents the entire REST API to consumers and can generate URLs signed with
+Hawk credentials.  It can also generate routing keys for listening to pulse
+messages from Taskcluster.
+
+The library builds the REST API methods from the same [API Reference
+format](/docs/manual/design/apis/reference-format) as the
+JavaScript client library.
+
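+As a quick taste of the signed-URL support mentioned above: a client can build
+a pre-signed URL for a GET method.  This is a minimal sketch; the credentials,
+task ID, and artifact name are placeholders to replace with real values:
+
+```python
+import taskcluster
+
+queue = taskcluster.Queue({'credentials': {'clientId': 'id', 'accessToken': 'accessToken'}})
+# buildSignedUrl takes a method name plus that method's arguments, and returns
+# a URL carrying a time-limited bewit signature.
+url = queue.buildSignedUrl('getArtifact', 'some-task-id', '0', 'public/logs/live.log')
+```
+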
+## Generating Temporary Credentials
+If you have non-temporary taskcluster credentials you can generate a set of
+temporary credentials as follows. Notice that the credentials cannot last more
+than 31 days, and you can only revoke them by revoking the credentials that
+were used to issue them (this takes up to one hour).
+
+It is not the responsibility of the caller to apply any clock drift adjustment
+to the start or expiry time - this is handled by the auth service directly.
+
+```python
+import datetime
+import taskcluster
+
+start = datetime.datetime.now()
+expiry = start + datetime.timedelta(0,60)
+scopes = ['ScopeA', 'ScopeB']
+name = 'foo'
+
+credentials = taskcluster.createTemporaryCredentials(
+    # issuing clientId
+    clientId,
+    # issuing accessToken
+    accessToken,
+    # Validity of temporary credentials starts here (a datetime object)
+    start,
+    # Expiration of temporary credentials (a datetime object)
+    expiry,
+    # Scopes to grant the temporary credentials
+    scopes,
+    # credential name (optional)
+    name
+)
+```
+
+You cannot use temporary credentials to issue new temporary credentials.  You
+must have `auth:create-client:<name>` to create a named temporary credential,
+but unnamed temporary credentials can be created regardless of your scopes.
+
+## API Documentation
+
+The REST API methods are documented in the [reference docs](/docs/reference).
+
+## Query-String arguments
+Query-string arguments are supported.  To use them, call a method
+like this:
+
+```python
+queue.listTaskGroup('JzTGxwxhQ76_Tt1dxkaG5g', query={'continuationToken': outcome.get('continuationToken')})
+```
+
+These query-string arguments are only supported using this calling convention.
+
+## Sync vs Async
+
+The objects under `taskcluster` (e.g., `taskcluster.Queue`) are
+python2-compatible and operate synchronously.
+
+
+The objects under `taskcluster.aio` (e.g., `taskcluster.aio.Queue`) require
+`python>=3.5`. The async objects use asyncio coroutines for concurrency; this
+allows us to put I/O operations in the background, so operations that require
+the CPU can happen sooner. Given dozens of operations that can run concurrently
+(e.g., cancelling a medium-to-large task graph), this can result in significant
+performance improvements. The code would look something like:
+
+```python
+#!/usr/bin/env python
+import aiohttp
+import asyncio
+from taskcluster.aio import Auth
+
+async def do_ping():
+    async with aiohttp.ClientSession() as session:
+        a = Auth(session=session)
+        print(await a.ping())
+
+loop = asyncio.get_event_loop()
+loop.run_until_complete(do_ping())
+```
+
+Other async code examples are available [here](#methods-contained-in-the-client-library).
+
+Here's a slide deck for an [introduction to async python](https://gitpitch.com/escapewindow/slides-sf-2017/async-python).
+
+## Usage
+
+* Here's a simple example:
+
+    ```python
+    import taskcluster
+    index = taskcluster.Index({'credentials': {'clientId': 'id', 'accessToken': 'accessToken'}})
+    index.ping()
+    ```
+
+* There are four calling conventions for methods:
+
+    ```python
+    client.method(v1, v2, payload)
+    client.method(payload, k1=v1, k2=v2)
+    client.method(payload=payload, query=query, params={k1: v1, k2: v2})
+    client.method(v1, v2, payload=payload, query=query)
+    ```
+
+* Options for the topic exchange methods can be in the form of either a single
+  dictionary argument or keyword arguments.  Only one form is allowed:
+
+    ```python
+    from taskcluster import client
+    qEvt = client.QueueEvents()
+    # The following calls are equivalent
+    qEvt.taskCompleted({'taskId': 'atask'})
+    qEvt.taskCompleted(taskId='atask')
+    ```
+
+## Pagination
+There are two ways to accomplish pagination easily with the python client.  The first is
+to implement pagination in your code:
+```python
+import taskcluster
+queue = taskcluster.Queue()
+i = 0
+tasks = 0
+outcome = queue.listTaskGroup('JzTGxwxhQ76_Tt1dxkaG5g')
+while True:
+    print('Response %d gave us %d more tasks' % (i, len(outcome.get('tasks', []))))
+    # Count every page, including the first and the last.
+    tasks += len(outcome.get('tasks', []))
+    if not outcome.get('continuationToken'):
+        break
+    outcome = queue.listTaskGroup('JzTGxwxhQ76_Tt1dxkaG5g', query={'continuationToken': outcome.get('continuationToken')})
+    i += 1
+print('Task Group %s has %d tasks' % (outcome['taskGroupId'], tasks))
+```
+
+There's also an experimental feature to support built-in automatic pagination
+in the sync client.  This feature allows passing a callback as the
+'paginationHandler' keyword-argument.  This function will be passed the
+response body of the API method as its sole positional argument.
+
+This example of the built in pagination shows how a list of tasks could be
+built and then counted:
+
+```python
+import taskcluster
+queue = taskcluster.Queue()
+
+responses = []
+
+def handle_page(y):
+    print("%d tasks fetched" % len(y.get('tasks', [])))
+    responses.append(y)
+
+queue.listTaskGroup('JzTGxwxhQ76_Tt1dxkaG5g', paginationHandler=handle_page)
+
+tasks = 0
+for response in responses:
+    tasks += len(response.get('tasks', []))
+
+print("%d requests fetch %d tasks" % (len(responses), tasks))
+```
+
+## Logging
+Logging is set up in `taskcluster/__init__.py`.  If the special
+`DEBUG_TASKCLUSTER_CLIENT` environment variable is set, the `__init__.py`
+module will set the `logging` module's level for its logger to `logging.DEBUG`
+and, if there are no existing handlers, add a `logging.StreamHandler()`
+instance.  This is meant to assist those who do not want to bother configuring
+the python logging module but do want debug messages.
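+
+For example, a minimal sketch of both routes (the environment variable must be
+set before `taskcluster` is first imported, since the logging setup happens in
+`taskcluster/__init__.py`):
+
+```python
+import logging
+import os
+
+# Route 1: let the client configure its own logger at import time.
+os.environ['DEBUG_TASKCLUSTER_CLIENT'] = '1'
+import taskcluster
+
+# Route 2: configure the standard logging module yourself instead.
+logging.basicConfig(level=logging.DEBUG)
+```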
+
+
+## Scopes
+The `scopeMatch(assumedScopes, requiredScopeSets)` function determines
+whether one or more of a set of required scopes are satisfied by the assumed
+scopes, taking *-expansion into account.  This is useful for making local
+decisions on scope satisfaction, but note that `assumedScopes` must be the
+*expanded* scopes, as this function cannot perform expansion.
+
+It takes a list of assumed scopes and a list of required scope sets in
+disjunctive normal form, and checks if any of the required scope sets are
+satisfied.
+
+Example:
+
+```python
+requiredScopeSets = [
+    ["scopeA", "scopeB"],
+    ["scopeC:*"]
+]
+assert scopeMatch(['scopeA', 'scopeB'], requiredScopeSets)
+assert scopeMatch(['scopeC:xyz'], requiredScopeSets)
+assert not scopeMatch(['scopeA'], requiredScopeSets)
+assert not scopeMatch(['scopeC'], requiredScopeSets)
+```
+
+## Relative Date-time Utilities
+Many Taskcluster APIs require ISO 8601 timestamps offset into the future
+as a way of providing expiration, deadlines, etc. These can be created
+using `datetime.datetime.isoformat()`; however, it can be rather error-prone
+and tedious to offset `datetime.datetime` objects into the future. Therefore
+this library comes with two utility functions for this purpose.
+
+```python
+dateObject = taskcluster.fromNow("2 days 3 hours 1 minute")
+# datetime.datetime(2017, 1, 21, 17, 8, 1, 607929)
+dateString = taskcluster.fromNowJSON("2 days 3 hours 1 minute")
+# '2017-01-21T17:09:23.240178Z'
+```
+
+By default the datetime is offset into the future; if the offset string is
+prefixed with minus (`-`), the date object is offset into the past instead.
+This is useful in some corner cases.
+
+```python
+dateObject = taskcluster.fromNow("- 1 year 2 months 3 weeks 5 seconds");
+# datetime.datetime(2015, 10, 30, 18, 16, 50, 931161)
+```
+
+The offset string is whitespace-insensitive and case-insensitive. It may also
+optionally be prefixed with plus (`+`) when not prefixed with minus; any `+`
+prefix is ignored. However, entries in the offset string must be given in
+order from high to low, e.g. `2 years 1 day`. Additionally, various shorthands
+may be employed, as illustrated below.
+
+```
+  years,    year,   yr,   y
+  months,   month,  mo
+  weeks,    week,         w
+  days,     day,          d
+  hours,    hour,         h
+  minutes,  minute, min
+  seconds,  second, sec,  s
+```
+
+The `fromNow` method may also be given a date to be relative to as a second
+argument. This is useful when offsetting the task expiration relative to the
+task deadline, or similar.  This argument can also be passed as the kwarg
+`dateObj`:
+
+```python
+dateObject1 = taskcluster.fromNow("2 days 3 hours")
+dateObject2 = taskcluster.fromNow("1 year", dateObject1)
+taskcluster.fromNow("1 year", dateObj=dateObject1)
+# datetime.datetime(2018, 1, 21, 17, 59, 0, 328934)
+```
+
+## Methods contained in the client library
+
+<!-- START OF GENERATED DOCS -->
+
+### Methods in `taskcluster.Auth`
+```python
+import asyncio  # Only for async
+# Create Auth client instance
+import taskcluster
+import taskcluster.aio
+
+auth = taskcluster.Auth(options)
+# Below only for async instances, assume already in coroutine
+loop = asyncio.get_event_loop()
+session = taskcluster.aio.createSession(loop=loop)
+asyncAuth = taskcluster.aio.Auth(options, session=session)
+```
+Authentication related API end-points for Taskcluster and related
+services. These API end-points are of interest if you wish to:
+  * Authorize a request signed with Taskcluster credentials,
+  * Manage clients and roles,
+  * Inspect or audit clients and roles,
+  * Gain access to various services guarded by this API.
+
+Note that in this service "authentication" refers to validating the
+correctness of the supplied credentials (that the caller possesses the
+appropriate access token). This service does not provide any kind of user
+authentication (identifying a particular person).
+
+### Clients
+The authentication service manages _clients_; at a high level, each
+client consists of a `clientId`, an `accessToken`, scopes, and some metadata.
+The `clientId` and `accessToken` can be used for authentication when
+calling Taskcluster APIs.
+
+The client's scopes control the client's access to Taskcluster resources.
+The scopes are *expanded* by substituting roles, as defined below.
+
+### Roles
+A _role_ consists of a `roleId`, a set of scopes and a description.
+Each role constitutes a simple _expansion rule_ that says if you have
+the scope: `assume:<roleId>` you get the set of scopes the role has.
+Think of the `assume:<roleId>` as a scope that allows a client to assume
+a role.
+
+As with scopes, the `*` Kleene star also has special meaning if it is
+located at the end of a `roleId`. If you have a role with the following
+`roleId`: `my-prefix*`, then any client which has a scope starting with
+`assume:my-prefix` will be allowed to assume the role.
+
+### Guarded Services
+The authentication service also has API end-points for delegating access
+to some guarded service such as AWS S3, or Azure Table Storage.
+Generally, we add API end-points to this server when we wish to use
+Taskcluster credentials to grant access to a third-party service used
+by many Taskcluster components.
+#### Ping Server
+Respond without doing anything.
+This endpoint is used to check that the service is up.
+
+
+```python
+# Sync calls
+auth.ping() # -> None
+# Async call
+await asyncAuth.ping() # -> None
+```
+
+#### List Clients
+Get a list of all clients.  With `prefix`, only clients for which
+it is a prefix of the clientId are returned.
+
+By default this end-point will try to return up to 1000 clients in one
+request. But it **may return fewer, even none**.
+It may also return a `continuationToken` even though there are no more
+results. However, you can only be sure to have seen all results if you
+keep calling `listClients` with the last `continuationToken` until you
+get a result without a `continuationToken`.
+
+
+Required [output schema](v1/list-clients-response.json#)
+
+```python
+# Sync calls
+auth.listClients() # -> result
+# Async call
+await asyncAuth.listClients() # -> result
+```
+
+#### Get Client
+Get information about a single client.
+
+
+
+Takes the following arguments:
+
+  * `clientId`
+
+Required [output schema](v1/get-client-response.json#)
+
+```python
+# Sync calls
+auth.client(clientId) # -> result
+auth.client(clientId='value') # -> result
+# Async call
+await asyncAuth.client(clientId) # -> result
+await asyncAuth.client(clientId='value') # -> result
+```
+
+#### Create Client
+Create a new client and get the `accessToken` for this client.
+You should store the `accessToken` from this API call as there is no
+other way to retrieve it.
+
+If you lose the `accessToken` you can call `resetAccessToken` to reset
+it, and a new `accessToken` will be returned, but you cannot retrieve the
+current `accessToken`.
+
+If a client with the same `clientId` already exists this operation will
+fail. Use `updateClient` if you wish to update an existing client.
+
+The caller's scopes must satisfy `scopes`.
+
+
+
+Takes the following arguments:
+
+  * `clientId`
+
+Required [input schema](v1/create-client-request.json#)
+
+Required [output schema](v1/create-client-response.json#)
+
+```python
+# Sync calls
+auth.createClient(clientId, payload) # -> result
+auth.createClient(payload, clientId='value') # -> result
+# Async call
+await asyncAuth.createClient(clientId, payload) # -> result
+await asyncAuth.createClient(payload, clientId='value') # -> result
+```
+
+#### Reset `accessToken`
+Reset a client's `accessToken`; this will revoke the existing
+`accessToken`, generate a new `accessToken` and return it from this
+call.
+
+There is no way to retrieve an existing `accessToken`, so if you lose it
+you must reset the `accessToken` to acquire a new one.
+
+
+
+Takes the following arguments:
+
+  * `clientId`
+
+Required [output schema](v1/create-client-response.json#)
+
+```python
+# Sync calls
+auth.resetAccessToken(clientId) # -> result
+auth.resetAccessToken(clientId='value') # -> result
+# Async call
+await asyncAuth.resetAccessToken(clientId) # -> result
+await asyncAuth.resetAccessToken(clientId='value') # -> result
+```
+
+#### Update Client
+Update an existing client. The `clientId` and `accessToken` cannot be
+updated, but `scopes` can be modified.  The caller's scopes must
+satisfy all scopes being added to the client in the update operation.
+If no scopes are given in the request, the client's scopes remain
+unchanged.
+
+
+
+Takes the following arguments:
+
+  * `clientId`
+
+Required [input schema](v1/create-client-request.json#)
+
+Required [output schema](v1/get-client-response.json#)
+
+```python
+# Sync calls
+auth.updateClient(clientId, payload) # -> result
+auth.updateClient(payload, clientId='value') # -> result
+# Async call
+await asyncAuth.updateClient(clientId, payload) # -> result
+await asyncAuth.updateClient(payload, clientId='value') # -> result
+```
+
+#### Enable Client
+Enable a client that was disabled with `disableClient`.  If the client
+is already enabled, this does nothing.
+
+This is typically used by identity providers to re-enable clients that
+had been disabled when the corresponding identity's scopes changed.
+
+
+
+Takes the following arguments:
+
+  * `clientId`
+
+Required [output schema](v1/get-client-response.json#)
+
+```python
+# Sync calls
+auth.enableClient(clientId) # -> result
+auth.enableClient(clientId='value') # -> result
+# Async call
+await asyncAuth.enableClient(clientId) # -> result
+await asyncAuth.enableClient(clientId='value') # -> result
+```
+
+#### Disable Client
+Disable a client.  If the client is already disabled, this does nothing.
+
+This is typically used by identity providers to disable clients when the
+corresponding identity's scopes no longer satisfy the client's scopes.
+
+
+
+Takes the following arguments:
+
+  * `clientId`
+
+Required [output schema](v1/get-client-response.json#)
+
+```python
+# Sync calls
+auth.disableClient(clientId) # -> result
+auth.disableClient(clientId='value') # -> result
+# Async call
+await asyncAuth.disableClient(clientId) # -> result
+await asyncAuth.disableClient(clientId='value') # -> result
+```
+
+#### Delete Client
+Delete a client; please note that any roles related to this client must
+be deleted independently.
+
+
+
+Takes the following arguments:
+
+  * `clientId`
+
+```python
+# Sync calls
+auth.deleteClient(clientId) # -> None
+auth.deleteClient(clientId='value') # -> None
+# Async call
+await asyncAuth.deleteClient(clientId) # -> None
+await asyncAuth.deleteClient(clientId='value') # -> None
+```
+
+#### List Roles
+Get a list of all roles, each role object also includes the list of
+scopes it expands to.
+
+
+Required [output schema](v1/list-roles-response.json#)
+
+```python
+# Sync calls
+auth.listRoles() # -> result
+# Async call
+await asyncAuth.listRoles() # -> result
+```
+
+#### Get Role
+Get information about a single role, including the set of scopes that the
+role expands to.
+
+
+
+Takes the following arguments:
+
+  * `roleId`
+
+Required [output schema](v1/get-role-response.json#)
+
+```python
+# Sync calls
+auth.role(roleId) # -> result
+auth.role(roleId='value') # -> result
+# Async call
+await asyncAuth.role(roleId) # -> result
+await asyncAuth.role(roleId='value') # -> result
+```
+
+#### Create Role
+Create a new role.
+
+The caller's scopes must satisfy the new role's scopes.
+
+If there already exists a role with the same `roleId` this operation
+will fail. Use `updateRole` to modify an existing role.
+
+Creation of a role that will generate an infinite expansion will result
+in an error response.
+
+
+
+Takes the following arguments:
+
+  * `roleId`
+
+Required [input schema](v1/create-role-request.json#)
+
+Required [output schema](v1/get-role-response.json#)
+
+```python
+# Sync calls
+auth.createRole(roleId, payload) # -> result
+auth.createRole(payload, roleId='value') # -> result
+# Async call
+await asyncAuth.createRole(roleId, payload) # -> result
+await asyncAuth.createRole(payload, roleId='value') # -> result
+```
+
+#### Update Role
+Update an existing role.
+
+The caller's scopes must satisfy all of the new scopes being added, but
+need not satisfy all of the role's existing scopes.
+
+An update of a role that will generate an infinite expansion will result
+in an error response.
+
+
+
+Takes the following arguments:
+
+  * `roleId`
+
+Required [input schema](v1/create-role-request.json#)
+
+Required [output schema](v1/get-role-response.json#)
+
+```python
+# Sync calls
+auth.updateRole(roleId, payload) # -> result
+auth.updateRole(payload, roleId='value') # -> result
+# Async call
+await asyncAuth.updateRole(roleId, payload) # -> result
+await asyncAuth.updateRole(payload, roleId='value') # -> result
+```
+
+#### Delete Role
+Delete a role. This operation will succeed regardless of whether or not
+the role exists.
+
+
+
+Takes the following arguments:
+
+  * `roleId`
+
+```python
+# Sync calls
+auth.deleteRole(roleId) # -> None
+auth.deleteRole(roleId='value') # -> None
+# Async call
+await asyncAuth.deleteRole(roleId) # -> None
+await asyncAuth.deleteRole(roleId='value') # -> None
+```
+
+#### Expand Scopes
+Return an expanded copy of the given scopeset, with scopes implied by any
+roles included.
+
+This call uses the GET method with an HTTP body.  It remains only for
+backward compatibility.
+
+
+Required [input schema](v1/scopeset.json#)
+
+Required [output schema](v1/scopeset.json#)
+
+```python
+# Sync calls
+auth.expandScopesGet(payload) # -> result
+# Async call
+await asyncAuth.expandScopesGet(payload) # -> result
+```
+
+#### Expand Scopes
+Return an expanded copy of the given scopeset, with scopes implied by any
+roles included.
+
+
+Required [input schema](v1/scopeset.json#)
+
+Required [output schema](v1/scopeset.json#)
+
+```python
+# Sync calls
+auth.expandScopes(payload) # -> result
+# Async call
+await asyncAuth.expandScopes(payload) # -> result
+```
+
+#### Get Current Scopes
+Return the expanded scopes available in the request, taking into account all sources
+of scopes and scope restrictions (temporary credentials, assumeScopes, client scopes,
+and roles).
+
+
+Required [output schema](v1/scopeset.json#)
+
+```python
+# Sync calls
+auth.currentScopes() # -> result
+# Async call
+await asyncAuth.currentScopes() # -> result
+```
+
+#### Get Temporary Read/Write Credentials S3
+Get temporary AWS credentials for `read-write` or `read-only` access to
+a given `bucket` and `prefix` within that bucket.
+The `level` parameter can be `read-write` or `read-only` and determines
+which type of credentials are returned. Please note that the `level`
+parameter is required in the scope guarding access.  The bucket name must
+not contain `.`, as recommended by Amazon.
+
+This method can only allow access to a whitelisted set of buckets.  To add
+a bucket to that whitelist, contact the Taskcluster team, who will add it to
+the appropriate IAM policy.  If the bucket is in a different AWS account, you
+will also need to add a bucket policy allowing access from the Taskcluster
+account.  That policy should look like this:
+
+```js
+{
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Sid": "allow-taskcluster-auth-to-delegate-access",
+      "Effect": "Allow",
+      "Principal": {
+        "AWS": "arn:aws:iam::692406183521:root"
+      },
+      "Action": [
+        "s3:ListBucket",
+        "s3:GetObject",
+        "s3:PutObject",
+        "s3:DeleteObject",
+        "s3:GetBucketLocation"
+      ],
+      "Resource": [
+        "arn:aws:s3:::<bucket>",
+        "arn:aws:s3:::<bucket>/*"
+      ]
+    }
+  ]
+}
+```
+
+The credentials are set to expire after an hour, but this behavior is
+subject to change. Hence, you should always read the `expires` property
+from the response, if you intend to maintain active credentials in your
+application.
+
+Please note that your `prefix` may not start with slash `/`. Such a prefix
+is allowed on S3, but we forbid it here to discourage bad behavior.
+
+Also note that if your `prefix` doesn't end in a slash `/`, the STS
+credentials may allow access to unexpected keys, as S3 does not treat
+slashes specially.  For example, a prefix of `my-folder` will allow
+access to `my-folder/file.txt` as expected, but also to `my-folder.txt`,
+which may not be intended.
+
+Finally, note that the `PutObjectAcl` call is not allowed.  Passing a canned
+ACL other than `private` to `PutObject` is treated as a `PutObjectAcl` call, and
+will result in an access-denied error from AWS.  This limitation is due to a
+security flaw in Amazon S3 which might otherwise allow indefinite access to
+uploaded objects.
+
+**EC2 metadata compatibility**: if the querystring parameter
+`?format=iam-role-compat` is given, the response will be compatible
+with the JSON exposed by the EC2 metadata service. This aims to ease
+compatibility for libraries and tools built to auto-refresh credentials.
+For details on the format returned by EC2 metadata service see:
+[EC2 User Guide](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#instance-metadata-security-credentials).
+
+
+
+Takes the following arguments:
+
+  * `level`
+  * `bucket`
+  * `prefix`
+
+Required [output schema](v1/aws-s3-credentials-response.json#)
+
+```python
+# Sync calls
+auth.awsS3Credentials(level, bucket, prefix) # -> result
+auth.awsS3Credentials(level='value', bucket='value', prefix='value') # -> result
+# Async call
+await asyncAuth.awsS3Credentials(level, bucket, prefix) # -> result
+await asyncAuth.awsS3Credentials(level='value', bucket='value', prefix='value') # -> result
+```
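+
+For the EC2 metadata-compatible format described above, the querystring can be
+passed with the `query` calling convention (a hedged sketch; the bucket and
+prefix values are illustrative):
+
+```python
+# Request the EC2 metadata-compatible response format described above.
+creds = auth.awsS3Credentials(
+    'read-only', 'some-bucket', 'some-prefix/',
+    query={'format': 'iam-role-compat'},
+)
+```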
+
+#### List Accounts Managed by Auth
+Retrieve a list of all Azure accounts managed by Taskcluster Auth.
+
+
+Required [output schema](v1/azure-account-list-response.json#)
+
+```python
+# Sync calls
+auth.azureAccounts() # -> result
+# Async call
+await asyncAuth.azureAccounts() # -> result
+```
+
+#### List Tables in an Account Managed by Auth
+Retrieve a list of all tables in an account.
+
+
+
+Takes the following arguments:
+
+  * `account`
+
+Required [output schema](v1/azure-table-list-response.json#)
+
+```python
+# Sync calls
+auth.azureTables(account) # -> result
+auth.azureTables(account='value') # -> result
+# Async call
+await asyncAuth.azureTables(account) # -> result
+await asyncAuth.azureTables(account='value') # -> result
+```
+
+#### Get Shared-Access-Signature for Azure Table
+Get a shared access signature (SAS) string for use with a specific Azure
+Table Storage table.
+
+The `level` parameter can be `read-write` or `read-only` and determines
+which type of credentials are returned.  If level is read-write, it will create the
+table if it doesn't already exist.
+
+
+
+Takes the following arguments:
+
+  * `account`
+  * `table`
+  * `level`
+
+Required [output schema](v1/azure-table-access-response.json#)
+
+```python
+# Sync calls
+auth.azureTableSAS(account, table, level) # -> result
+auth.azureTableSAS(account='value', table='value', level='value') # -> result
+# Async call
+await asyncAuth.azureTableSAS(account, table, level) # -> result
+await asyncAuth.azureTableSAS(account='value', table='value', level='value') # -> result
+```
+
+#### List containers in an Account Managed by Auth
+Retrieve a list of all containers in an account.
+
+
+
+Takes the following arguments:
+
+  * `account`
+
+Required [output schema](v1/azure-container-list-response.json#)
+
+```python
+# Sync calls
+auth.azureContainers(account) # -> result
+auth.azureContainers(account='value') # -> result
+# Async call
+await asyncAuth.azureContainers(account) # -> result
+await asyncAuth.azureContainers(account='value') # -> result
+```
+
+#### Get Shared-Access-Signature for Azure Container
+Get a shared access signature (SAS) string for use with a specific Azure
+Blob Storage container.
+
+The `level` parameter can be `read-write` or `read-only` and determines
+which type of credentials are returned.  If level is read-write, it will create the
+container if it doesn't already exist.
+
+
+
+Takes the following arguments:
+
+  * `account`
+  * `container`
+  * `level`
+
+Required [output schema](v1/azure-container-response.json#)
+
+```python
+# Sync calls
+auth.azureContainerSAS(account, container, level) # -> result
+auth.azureContainerSAS(account='value', container='value', level='value') # -> result
+# Async call
+await asyncAuth.azureContainerSAS(account, container, level) # -> result
+await asyncAuth.azureContainerSAS(account='value', container='value', level='value') # -> result
+```
+
+#### Get DSN for Sentry Project
+Get temporary DSN (access credentials) for a sentry project.
+The credentials returned can be used with any Sentry client for up to
+24 hours, after which the credentials will be automatically disabled.
+
+If the project doesn't exist it will be created and assigned to the
+initial team configured for this component. Contact a Sentry admin
+to have the project transferred to a team you have access to, if needed.
+
+
+
+Takes the following arguments:
+
+  * `project`
+
+Required [output schema](v1/sentry-dsn-response.json#)
+
+```python
+# Sync calls
+auth.sentryDSN(project) # -> result
+auth.sentryDSN(project='value') # -> result
+# Async call
+await asyncAuth.sentryDSN(project) # -> result
+await asyncAuth.sentryDSN(project='value') # -> result
+```
+
+#### Get Token for Statsum Project
+Get temporary `token` and `baseUrl` for sending metrics to statsum.
+
+The token is valid for 24 hours; clients should refresh it after expiration.
+
+
+
+Takes the following arguments:
+
+  * `project`
+
+Required [output schema](v1/statsum-token-response.json#)
+
+```python
+# Sync calls
+auth.statsumToken(project) # -> result
+auth.statsumToken(project='value') # -> result
+# Async call
+await asyncAuth.statsumToken(project) # -> result
+await asyncAuth.statsumToken(project='value') # -> result
+```
+
+#### Get Token for Webhooktunnel Proxy
+Get temporary `token` and `id` for connecting to webhooktunnel.
+The token is valid for 96 hours; clients should refresh it after expiration.
+
+
+Required [output schema](v1/webhooktunnel-token-response.json#)
+
+```python
+# Sync calls
+auth.webhooktunnelToken() # -> result
+# Async call
+await asyncAuth.webhooktunnelToken() # -> result
+```
+
+#### Authenticate Hawk Request
+Validate the request signature given on input and return list of scopes
+that the authenticating client has.
+
+This method is used by other services that wish to rely on Taskcluster
+credentials for authentication. This way we can use Hawk without having
+the secret credentials leave this service.
+
+
+Required [input schema](v1/authenticate-hawk-request.json#)
+
+Required [output schema](v1/authenticate-hawk-response.json#)
+
+```python
+# Sync calls
+auth.authenticateHawk(payload) # -> result
+# Async call
+await asyncAuth.authenticateHawk(payload) # -> result
+```
+
+#### Test Authentication
+Utility method to test client implementations of Taskcluster
+authentication.
+
+Rather than using real credentials, this endpoint accepts requests with
+clientId `tester` and accessToken `no-secret`. That client's scopes are
+based on `clientScopes` in the request body.
+
+The request is validated, with any certificate, authorizedScopes, etc.
+applied, and the resulting scopes are checked against `requiredScopes`
+from the request body. On success, the response contains the clientId
+and scopes as seen by the API method.
+
+
+Required [input schema](v1/test-authenticate-request.json#)
+
+Required [output schema](v1/test-authenticate-response.json#)
+
+```python
+# Sync calls
+auth.testAuthenticate(payload) # -> result
+# Async call
+await asyncAuth.testAuthenticate(payload) # -> result
+```
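+
+A minimal sketch of such a payload, assuming only the `clientScopes` and
+`requiredScopes` fields described above (values illustrative; the input
+schema is authoritative):
+
+```python
+payload = {
+    'clientScopes': ['test:*'],
+    'requiredScopes': ['test:scope'],
+}
+result = auth.testAuthenticate(payload)
+# Per the description above, the response carries the clientId and scopes
+# as seen by the API method.
+print(result['clientId'], result['scopes'])
+```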
+
+#### Test Authentication (GET)
+Utility method similar to `testAuthenticate`, but with the GET method,
+so it can be used with signed URLs (bewits).
+
+Rather than using real credentials, this endpoint accepts requests with
+clientId `tester` and accessToken `no-secret`. That client's scopes are
+`['test:*', 'auth:create-client:test:*']`.  The call fails if the 
+`test:authenticate-get` scope is not available.
+
+The request is validated, with any certificate, authorizedScopes, etc.
+applied, and the resulting scopes are checked, just like any API call.
+On success, the response contains the clientId and scopes as seen by
+the API method.
+
+This method may later be extended to allow specification of client and
+required scopes via query arguments.
+
+
+Required [output schema](v1/test-authenticate-response.json#)
+
+```python
+# Sync calls
+auth.testAuthenticateGet() # -> result
+# Async call
+await asyncAuth.testAuthenticateGet() # -> result
+```
+
+
+
+
+### Exchanges in `taskcluster.AuthEvents`
+```python
+# Create AuthEvents client instance
+import taskcluster
+authEvents = taskcluster.AuthEvents(options)
+```
+The auth service, typically available at `auth.taskcluster.net`,
+is responsible for storing credentials, managing assignment of scopes,
+and validating request signatures from other services.
+
+These exchanges provide notifications when credentials or roles are
+updated. This is mostly so that multiple instances of the auth service
+can purge their caches and synchronize state. But you are of course
+welcome to use these for other purposes, for example monitoring changes.
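+
+As a hedged sketch, an exchange method builds the binding information for a
+message kind (this assumes, as for other exchange methods in this client, that
+the result carries the exchange name and routing-key pattern; the `reserved`
+entry defaults to `#` when unspecified, as noted below):
+
+```python
+import taskcluster
+
+authEvents = taskcluster.AuthEvents()
+# No arguments: the reserved routing-key space defaults to `#`.
+binding = authEvents.clientCreated()
+print(binding['exchange'], binding['routingKeyPattern'])
+```
+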
+#### Client Created Messages
+ * `authEvents.clientCreated(routingKeyPattern) -> routingKey`
+   * `reserved` Description: Space reserved for future routing-key entries; you should always match this entry with `#`, as is automatically done by our tooling if not specified.
+
+#### Client Updated Messages
+ * `authEvents.clientUpdated(routingKeyPattern) -> routingKey`
+   * `reserved` Description: Space reserved for future routing-key entries; you should always match this entry with `#`, as is automatically done by our tooling if not specified.
+
+#### Client Deleted Messages
+ * `authEvents.clientDeleted(routingKeyPattern) -> routingKey`
+   * `reserved` Description: Space reserved for future routing-key entries; you should always match this entry with `#`, as is automatically done by our tooling if not specified.
+
+#### Role Created Messages
+ * `authEvents.roleCreated(routingKeyPattern) -> routingKey`
+   * `reserved` Description: Space reserved for future routing-key entries; you should always match this entry with `#`, as is automatically done by our tooling if not specified.
+
+#### Role Updated Messages
+ * `authEvents.roleUpdated(routingKeyPattern) -> routingKey`
+   * `reserved` Description: Space reserved for future routing-key entries; you should always match this entry with `#`, as is automatically done by our tooling if not specified.
+
+#### Role Deleted Messages
+ * `authEvents.roleDeleted(routingKeyPattern) -> routingKey`
+   * `reserved` Description: Space reserved for future routing-key entries; you should always match this entry with `#`, as is automatically done by our tooling if not specified.
+
+
+
+
+### Methods in `taskcluster.AwsProvisioner`
+```python
+import asyncio  # Only for async
+# Create AwsProvisioner client instance
+import taskcluster
+import taskcluster.aio
+
+awsProvisioner = taskcluster.AwsProvisioner(options)
+# Below only for async instances, assume already in coroutine
+loop = asyncio.get_event_loop()
+session = taskcluster.aio.createSession(loop=loop)
+asyncAwsProvisioner = taskcluster.aio.AwsProvisioner(options, session=session)
+```
+The AWS Provisioner is responsible for provisioning instances on EC2 for use in
+Taskcluster.  The provisioner maintains a set of worker configurations which
+can be managed with an API that is typically available at
+aws-provisioner.taskcluster.net/v1.  This API can also perform basic instance
+management tasks in addition to maintaining the internal state of worker type
+configuration information.
+
+The Provisioner runs at a configurable interval.  Each iteration of the
+provisioner fetches a current copy of the state that the AWS EC2 API reports.  In
+each iteration, we ask the Queue how many tasks are pending for that worker
+type.  Based on the number of tasks pending and the scaling ratio, we may
+submit requests for new instances.  We use pricing information, capacity and
+utility factor information to decide which instance type in which region would
+be the optimal configuration.
+
+Each EC2 instance type will declare a capacity and utility factor.  Capacity is
+the number of tasks that a given machine is capable of running concurrently.
+Utility factor is a relative measure of performance between two instance types.
+We multiply the utility factor by the spot price to compare instance types and
+regions when making the bidding choices.
+
+When a new EC2 instance is instantiated, its user data contains a token in
+`securityToken` that can be used with the `getSecret` method to retrieve
+the worker's credentials and any needed passwords or other restricted
+information.  The worker is responsible for deleting the secret after
+retrieving it, to prevent dissemination of the secret to other processes
+which can read the instance user data.
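+
+As a hedged sketch of that flow, using the `getSecret` and `removeSecret`
+methods documented below (the token value is a hypothetical placeholder; on a
+real instance it comes from the EC2 user data):
+
+```python
+import taskcluster
+
+provisioner = taskcluster.AwsProvisioner()
+
+security_token = 'token-from-user-data'  # hypothetical placeholder
+
+# Fetch the worker's credentials and any restricted information...
+secret = provisioner.getSecret(security_token)
+
+# ...then delete the secret so other processes that can read the
+# instance user data cannot retrieve it again.
+provisioner.removeSecret(security_token)
+```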
+
+#### List worker types with details
+Return a list of worker types, including some summary information about
+current capacity for each.  While this list includes all defined worker types,
+there may be running EC2 instances for deleted worker types that are not
+included here.  The list is unordered.
+
+
+Required [output schema](http://schemas.taskcluster.net/aws-provisioner/v1/list-worker-types-summaries-response.json#)
+
+```python
+# Sync calls
+awsProvisioner.listWorkerTypeSummaries() # -> result
+# Async call
+await asyncAwsProvisioner.listWorkerTypeSummaries() # -> result
+```
+
+#### Create new Worker Type
+Create a worker type.  A worker type contains all the configuration
+needed for the provisioner to manage the instances.  Each worker type
+knows which regions and which instance types are allowed for that
+worker type.  Remember that Capacity is the number of concurrent tasks
+that can be run on a given EC2 resource and that Utility is the relative
+performance rate between different instance types.  There is no way to
+configure different regions to have different sets of instance types,
+so ensure that all instance types are available in all regions.
+This function is idempotent.
+
+Once a worker type is in the provisioner, a background process will
+begin creating instances for it based on its capacity bounds and its
+pending task count from the Queue.  It is the worker's responsibility
+to shut itself down.  The provisioner has a limit (currently 96 hours)
+for all instances to prevent zombie instances from running indefinitely.
+
+The provisioner will ensure that all instances created are tagged with
+AWS resource tags containing the provisioner id and the worker type.
+
+If provided, the secrets in the global, region and instance type sections
+are available using the secrets API.  If specified, the scopes provided
+will be used to generate a set of temporary credentials available with
+the other secrets.
+
+
+
+Takes the following arguments:
+
+  * `workerType`
+
+Required [input schema](http://schemas.taskcluster.net/aws-provisioner/v1/create-worker-type-request.json#)
+
+Required [output schema](http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-response.json#)
+
+```python
+# Sync calls
+awsProvisioner.createWorkerType(workerType, payload) # -> result
+awsProvisioner.createWorkerType(payload, workerType='value') # -> result
+# Async call
+await asyncAwsProvisioner.createWorkerType(workerType, payload) # -> result
+await asyncAwsProvisioner.createWorkerType(payload, workerType='value') # -> result
+```
+
+#### Update Worker Type
+Provide a new copy of a worker type to replace the existing one.
+This will overwrite the existing worker type definition if there
+is already a worker type of that name.  This method will return a
+200 response along with a copy of the worker type definition created.
+Note that if you are using the result of a GET on the worker-type
+end point, you will need to delete the `lastModified` and `workerType`
+keys from the returned object, since those fields are not allowed in
+the request body for this method.
+
+Otherwise, all input requirements and actions are the same as the
+create method.
+
+
+
+Takes the following arguments:
+
+  * `workerType`
+
+Required [input schema](http://schemas.taskcluster.net/aws-provisioner/v1/create-worker-type-request.json#)
+
+Required [output schema](http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-response.json#)
+
+```python
+# Sync calls
+awsProvisioner.updateWorkerType(workerType, payload) # -> result
+awsProvisioner.updateWorkerType(payload, workerType='value') # -> result
+# Async call
+await asyncAwsProvisioner.updateWorkerType(workerType, payload) # -> result
+await asyncAwsProvisioner.updateWorkerType(payload, workerType='value') # -> result
+```
+
+#### Get Worker Type Last Modified Time
+This method is provided to allow workers to see when they were
+last modified.  The value provided through UserData can be
+compared against this value to see if changes have been made.
+If the worker type definition has not been changed, the date
+should be identical, as it is the same stored value.
+
+
+
+Takes the following arguments:
+
+  * `workerType`
+
+Required [output schema](http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-last-modified.json#)
+
+```python
+# Sync calls
+awsProvisioner.workerTypeLastModified(workerType) # -> result
+awsProvisioner.workerTypeLastModified(workerType='value') # -> result
+# Async call
+await asyncAwsProvisioner.workerTypeLastModified(workerType) # -> result
+await asyncAwsProvisioner.workerTypeLastModified(workerType='value') # -> result
+```
+
+#### Get Worker Type
+Retrieve a copy of the requested worker type definition.
+This copy contains a `lastModified` field as well as the worker
+type name.  As such, it will require manipulation before the
+results of this method can be submitted as data to the update
+method.
+
+
+
+Takes the following arguments:
+
+  * `workerType`
+
+Required [output schema](http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-response.json#)
+
+```python
+# Sync calls
+awsProvisioner.workerType(workerType) # -> result
+awsProvisioner.workerType(workerType='value') # -> result
+# Async call
+await asyncAwsProvisioner.workerType(workerType) # -> result
+await asyncAwsProvisioner.workerType(workerType='value') # -> result
+```
+
+#### Delete Worker Type
+Delete a worker type definition.  This method will only delete
+the worker type definition from the storage table.  The actual
+deletion will be handled by a background worker.  As soon as this
+method is called for a worker type, the background worker will
+immediately submit requests to cancel all spot requests for this
+worker type as well as killing all instances regardless of their
+state.  If you want to gracefully remove a worker type, you must
+either ensure that no tasks are created with that worker type name,
+or you could theoretically set maxCapacity to 0, though this is
+not a supported or tested action.
+
+
+
+Takes the following arguments:
+
+  * `workerType`
+
+```python
+# Sync calls
+awsProvisioner.removeWorkerType(workerType) # -> None
+awsProvisioner.removeWorkerType(workerType='value') # -> None
+# Async call
+await asyncAwsProvisioner.removeWorkerType(workerType) # -> None
+await asyncAwsProvisioner.removeWorkerType(workerType='value') # -> None
+```
+
+#### List Worker Types
+Return a list of string worker type names.  These are the names
+of all managed worker types known to the provisioner.  This does
+not include worker types which are leftovers from a deleted worker
+type definition but are still running in AWS.
+
+
+Required [output schema](http://schemas.taskcluster.net/aws-provisioner/v1/list-worker-types-response.json#)
+
+```python
+# Sync calls
+awsProvisioner.listWorkerTypes() # -> result
+# Async call
+await asyncAwsProvisioner.listWorkerTypes() # -> result
+```
+
+#### Create new Secret
+Insert a secret into the secret storage.  The supplied secrets will
+be provided verbatim via `getSecret`, while the supplied scopes will
+be converted into credentials by `getSecret`.
+
+This method is not ordinarily used in production; instead, the provisioner
+creates a new secret directly for each spot bid.
+
+
+
+Takes the following arguments:
+
+  * `token`
+
+Required [input schema](http://schemas.taskcluster.net/aws-provisioner/v1/create-secret-request.json#)
+
+```python
+# Sync calls
+awsProvisioner.createSecret(token, payload) # -> None
+awsProvisioner.createSecret(payload, token='value') # -> None
+# Async call
+await asyncAwsProvisioner.createSecret(token, payload) # -> None
+await asyncAwsProvisioner.createSecret(payload, token='value') # -> None
+```
+
+#### Get a Secret
+Retrieve a secret from storage.  The result contains any passwords or
+other restricted information verbatim as well as a temporary credential
+based on the scopes specified when the secret was created.
+
+It is important that this secret is deleted by the consumer (`removeSecret`),
+or else the secrets will be visible to any process which can access the
+user data associated with the instance.
+
+
+
+Takes the following arguments:
+
+  * `token`
+
+Required [output schema](http://schemas.taskcluster.net/aws-provisioner/v1/get-secret-response.json#)
+
+```python
+# Sync calls
+awsProvisioner.getSecret(token) # -> result
+awsProvisioner.getSecret(token='value') # -> result
+# Async call
+await asyncAwsProvisioner.getSecret(token) # -> result
+await asyncAwsProvisioner.getSecret(token='value') # -> result
+```
+
+#### Report an instance starting
+An instance will report in by giving its instance id as well
+as its security token.  The token is checked to ensure that it
+matches a real token that exists, so that random machines do not
+check in.  We could generate a different token, but that seems
+like overkill.
+
+
+
+Takes the following arguments:
+
+  * `instanceId`
+  * `token`
+
+```python
+# Sync calls
+awsProvisioner.instanceStarted(instanceId, token) # -> None
+awsProvisioner.instanceStarted(instanceId='value', token='value') # -> None
+# Async call
+await asyncAwsProvisioner.instanceStarted(instanceId, token) # -> None
+await asyncAwsProvisioner.instanceStarted(instanceId='value', token='value') # -> None
+```
+
+#### Remove a Secret
+Remove a secret.  After this call, a call to `getSecret` with the given
+token will return no information.
+
+It is very important that the consumer of a secret delete it from
+storage before handing control over to untrusted processes, to
+prevent credential and/or secret leakage.
+
+
+
+Takes the following arguments:
+
+  * `token`
+
+```python
+# Sync calls
+awsProvisioner.removeSecret(token) # -> None
+awsProvisioner.removeSecret(token='value') # -> None
+# Async call
+await asyncAwsProvisioner.removeSecret(token) # -> None
+await asyncAwsProvisioner.removeSecret(token='value') # -> None
+```
+
+#### Get All Launch Specifications for WorkerType
+This method returns a preview of all possible launch specifications
+that this worker type definition could submit to EC2.  It is used to
+test worker types, nothing more.
+
+**This API end-point is experimental and may be subject to change without warning.**
+
+
+
+Takes the following arguments:
+
+  * `workerType`
+
+Required [output schema](http://schemas.taskcluster.net/aws-provisioner/v1/get-launch-specs-response.json#)
+
+```python
+# Sync calls
+awsProvisioner.getLaunchSpecs(workerType) # -> result
+awsProvisioner.getLaunchSpecs(workerType='value') # -> result
+# Async call
+await asyncAwsProvisioner.getLaunchSpecs(workerType) # -> result
+await asyncAwsProvisioner.getLaunchSpecs(workerType='value') # -> result
+```
+
+#### Get AWS State for a worker type
+Return the state of a given workertype as stored by the provisioner.
+This state is stored as two lists: one for running instances and one
+for pending requests.  The `summary` property contains an updated
+summary similar to that returned from `listWorkerTypeSummaries`.
+
+
+
+Takes the following arguments:
+
+  * `workerType`
+
+```python
+# Sync calls
+awsProvisioner.state(workerType) # -> None
+awsProvisioner.state(workerType='value') # -> None
+# Async call
+await asyncAwsProvisioner.state(workerType) # -> None
+await asyncAwsProvisioner.state(workerType='value') # -> None
+```
+
+#### Backend Status
+This endpoint is used to show the last time the provisioner checked
+in.  A check-in is done through the Dead Man's Snitch API at the
+conclusion of a provisioning iteration, and is used to tell whether
+the background provisioning process is still running.
+
+**Warning**: this API end-point is **not stable**.
+
+
+Required [output schema](http://schemas.taskcluster.net/aws-provisioner/v1/backend-status-response.json#)
+
+```python
+# Sync calls
+awsProvisioner.backendStatus() # -> result
+# Async call
+await asyncAwsProvisioner.backendStatus() # -> result
+```
+
+#### Ping Server
+Respond without doing anything.
+This endpoint is used to check that the service is up.
+
+
+```python
+# Sync calls
+awsProvisioner.ping() # -> None
+# Async call
+await asyncAwsProvisioner.ping() # -> None
+```
+
+
+
+
+### Exchanges in `taskcluster.AwsProvisionerEvents`
+```python
+# Create AwsProvisionerEvents client instance
+import taskcluster
+awsProvisionerEvents = taskcluster.AwsProvisionerEvents(options)
+```
+Exchanges from the provisioner... more docs later
+#### WorkerType Created Message
+ * `awsProvisionerEvents.workerTypeCreated(routingKeyPattern) -> routingKey`
+   * `routingKeyKind` is required, with constant value `primary`.  Description: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key.
+   * `workerType` is required.  Description: WorkerType that this message concerns.
+   * `reserved` Description: Space reserved for future routing-key entries; you should always match this entry with `#`, as is automatically done by our tooling if not specified.
+
+#### WorkerType Updated Message
+ * `awsProvisionerEvents.workerTypeUpdated(routingKeyPattern) -> routingKey`
+   * `routingKeyKind` is required, with constant value `primary`.  Description: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key.
+   * `workerType` is required.  Description: WorkerType that this message concerns.
+   * `reserved` Description: Space reserved for future routing-key entries; you should always match this entry with `#`, as is automatically done by our tooling if not specified.
+
+#### WorkerType Removed Message
+ * `awsProvisionerEvents.workerTypeRemoved(routingKeyPattern) -> routingKey`
+   * `routingKeyKind` is required, with constant value `primary`.  Description: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key.
+   * `workerType` is required.  Description: WorkerType that this message concerns.
+   * `reserved` Description: Space reserved for future routing-key entries; you should always match this entry with `#`, as is automatically done by our tooling if not specified.
+
+
+
+
+### Methods in `taskcluster.EC2Manager`
+```python
+import asyncio  # Only for async
+# Create EC2Manager client instance
+import taskcluster
+import taskcluster.aio
+
+eC2Manager = taskcluster.EC2Manager(options)
+# Below only for async instances, assume already in coroutine
+loop = asyncio.get_event_loop()
+session = taskcluster.aio.createSession(loop=loop)
+asyncEC2Manager = taskcluster.aio.EC2Manager(options, session=session)
+```
+A taskcluster service which manages EC2 instances.  This service does not understand any taskcluster concepts intrinsically, other than using the name `workerType` to refer to a group of associated instances.  Unless you are working on building a provisioner for AWS, you almost certainly do not want to use this service.
+#### See the list of worker types which are known to be managed
+This method is only for debugging the ec2-manager
+
+
+Required [output schema](http://schemas.taskcluster.net/ec2-manager/v1/list-worker-types.json#)
+
+```python
+# Sync calls
+eC2Manager.listWorkerTypes() # -> result
+# Async call
+await asyncEC2Manager.listWorkerTypes() # -> result
+```
+
+#### Run an instance
+Request an instance of a worker type
+
+
+
+Takes the following arguments:
+
+  * `workerType`
+
+Required [input schema](http://schemas.taskcluster.net/ec2-manager/v1/run-instance-request.json#)
+
+```python
+# Sync calls
+eC2Manager.runInstance(workerType, payload) # -> None
+eC2Manager.runInstance(payload, workerType='value') # -> None
+# Async call
+await asyncEC2Manager.runInstance(workerType, payload) # -> None
+await asyncEC2Manager.runInstance(payload, workerType='value') # -> None
+```
+
+#### Terminate all resources from a worker type
+Terminate all instances for this worker type
+
+
+
+Takes the following arguments:
+
+  * `workerType`
+
+```python
+# Sync calls
+eC2Manager.terminateWorkerType(workerType) # -> None
+eC2Manager.terminateWorkerType(workerType='value') # -> None
+# Async call
+await asyncEC2Manager.terminateWorkerType(workerType) # -> None
+await asyncEC2Manager.terminateWorkerType(workerType='value') # -> None
+```
+
+#### Look up the resource stats for a workerType
+Return an object which has a generic state description. This only contains counts of instances
+
+
+
+Takes the following arguments:
+
+  * `workerType`
+
+Required [output schema](http://schemas.taskcluster.net/ec2-manager/v1/worker-type-resources.json#)
+
+```python
+# Sync calls
+eC2Manager.workerTypeStats(workerType) # -> result
+eC2Manager.workerTypeStats(workerType='value') # -> result
+# Async call
+await asyncEC2Manager.workerTypeStats(workerType) # -> result
+await asyncEC2Manager.workerTypeStats(workerType='value') # -> result
+```
+
+#### Look up the resource health for a workerType
+Return a view of the health of a given worker type
+
+
+
+Takes the following arguments:
+
+  * `workerType`
+
+Required [output schema](http://schemas.taskcluster.net/ec2-manager/v1/health.json#)
+
+```python
+# Sync calls
+eC2Manager.workerTypeHealth(workerType) # -> result
+eC2Manager.workerTypeHealth(workerType='value') # -> result
+# Async call
+await asyncEC2Manager.workerTypeHealth(workerType) # -> result
+await asyncEC2Manager.workerTypeHealth(workerType='value') # -> result
+```
+
+#### Look up the most recent errors of a workerType
+Return a list of the most recent errors encountered by a worker type
+
+
+
+Takes the following arguments:
+
+  * `workerType`
+
+Required [output schema](http://schemas.taskcluster.net/ec2-manager/v1/errors.json#)
+
+```python
+# Sync calls
+eC2Manager.workerTypeErrors(workerType) # -> result
+eC2Manager.workerTypeErrors(workerType='value') # -> result
+# Async call
+await asyncEC2Manager.workerTypeErrors(workerType) # -> result
+await asyncEC2Manager.workerTypeErrors(workerType='value') # -> result
+```
+
+#### Look up the resource state for a workerType
+Return state information for a given worker type
+
+
+
+Takes the following arguments:
+
+  * `workerType`
+
+Required [output schema](http://schemas.taskcluster.net/ec2-manager/v1/worker-type-state.json#)
+
+```python
+# Sync calls
+eC2Manager.workerTypeState(workerType) # -> result
+eC2Manager.workerTypeState(workerType='value') # -> result
+# Async call
+await asyncEC2Manager.workerTypeState(workerType) # -> result
+await asyncEC2Manager.workerTypeState(workerType='value') # -> result
+```
+
+#### Ensure a KeyPair for a given worker type exists
+Idempotently ensure that a keypair of a given name exists
+
+
+
+Takes the following arguments:
+
+  * `name`
+
+Required [input schema](http://schemas.taskcluster.net/ec2-manager/v1/create-key-pair.json#)
+
+```python
+# Sync calls
+eC2Manager.ensureKeyPair(name, payload) # -> None
+eC2Manager.ensureKeyPair(payload, name='value') # -> None
+# Async call
+await asyncEC2Manager.ensureKeyPair(name, payload) # -> None
+await asyncEC2Manager.ensureKeyPair(payload, name='value') # -> None
+```
+
+#### Ensure a KeyPair for a given worker type does not exist
+Ensure that a keypair of a given name does not exist.
+
+
+
+Takes the following arguments:
+
+  * `name`
+
+```python
+# Sync calls
+eC2Manager.removeKeyPair(name) # -> None
+eC2Manager.removeKeyPair(name='value') # -> None
+# Async call
+await asyncEC2Manager.removeKeyPair(name) # -> None
+await asyncEC2Manager.removeKeyPair(name='value') # -> None
+```
+
+#### Terminate an instance
+Terminate an instance in a specified region
+
+
+
+Takes the following arguments:
+
+  * `region`
+  * `instanceId`
+
+```python
+# Sync calls
+eC2Manager.terminateInstance(region, instanceId) # -> None
+eC2Manager.terminateInstance(region='value', instanceId='value') # -> None
+# Async call
+await asyncEC2Manager.terminateInstance(region, instanceId) # -> None
+await asyncEC2Manager.terminateInstance(region='value', instanceId='value') # -> None
+```
+
+#### Request prices for EC2
+Return a list of possible prices for EC2
+
+
+Required [output schema](http://schemas.taskcluster.net/ec2-manager/v1/prices.json#)
+
+```python
+# Sync calls
+eC2Manager.getPrices() # -> result
+# Async call
+await asyncEC2Manager.getPrices() # -> result
+```
+
+#### Request prices for EC2
+Return a list of possible prices for EC2
+
+
+Required [input schema](http://schemas.taskcluster.net/ec2-manager/v1/prices-request.json#)
+
+Required [output schema](http://schemas.taskcluster.net/ec2-manager/v1/prices.json#)
+
+```python
+# Sync calls
+eC2Manager.getSpecificPrices(payload) # -> result
+# Async call
+await asyncEC2Manager.getSpecificPrices(payload) # -> result
+```
+
+#### Get EC2 account health metrics
+Give some basic stats on the health of our EC2 account
+
+
+Required [output schema](http://schemas.taskcluster.net/ec2-manager/v1/health.json#)
+
+```python
+# Sync calls
+eC2Manager.getHealth() # -> result
+# Async call
+await asyncEC2Manager.getHealth() # -> result
+```
+
+#### Look up the most recent errors in the provisioner across all worker types
+Return a list of recent errors encountered
+
+
+Required [output schema](http://schemas.taskcluster.net/ec2-manager/v1/errors.json#)
+
+```python
+# Sync calls
+eC2Manager.getRecentErrors() # -> result
+# Async call
+await asyncEC2Manager.getRecentErrors() # -> result
+```
+
+#### See the list of regions managed by this ec2-manager
+This method is only for debugging the ec2-manager
+
+
+```python
+# Sync calls
+eC2Manager.regions() # -> None
+# Async call
+await asyncEC2Manager.regions() # -> None
+```
+
+#### See the list of AMIs and their usage
+List AMIs and their usage by returning a list of objects in the form:
+{
+  region: string
+  volumetype: string
+  lastused: timestamp
+}
+
+
+```python
+# Sync calls
+eC2Manager.amiUsage() # -> None
+# Async call
+await asyncEC2Manager.amiUsage() # -> None
+```
+
+#### See the current EBS volume usage list
+Lists current EBS volume usage by returning a list of objects
+that are uniquely defined by {region, volumetype, state} in the form:
+{
+  region: string,
+  volumetype: string,
+  state: string,
+  totalcount: integer,
+  totalgb: integer,
+  touched: timestamp (last time that information was updated),
+}
+
+
+```python
+# Sync calls
+eC2Manager.ebsUsage() # -> None
+# Async call
+await asyncEC2Manager.ebsUsage() # -> None
+```
+
+#### Statistics on the Database client pool
+This method is only for debugging the ec2-manager
+
+
+```python
+# Sync calls
+eC2Manager.dbpoolStats() # -> None
+# Async call
+await asyncEC2Manager.dbpoolStats() # -> None
+```
+
+#### List out the entire internal state
+This method is only for debugging the ec2-manager
+
+
+```python
+# Sync calls
+eC2Manager.allState() # -> None
+# Async call
+await asyncEC2Manager.allState() # -> None
+```
+
+#### Statistics on the sqs queues
+This method is only for debugging the ec2-manager
+
+
+```python
+# Sync calls
+eC2Manager.sqsStats() # -> None
+# Async call
+await asyncEC2Manager.sqsStats() # -> None
+```
+
+#### Purge the SQS queues
+This method is only for debugging the ec2-manager
+
+
+```python
+# Sync calls
+eC2Manager.purgeQueues() # -> None
+# Async call
+await asyncEC2Manager.purgeQueues() # -> None
+```
+
+#### API Reference
+Generate an API reference for this service
+
+
+```python
+# Sync calls
+eC2Manager.apiReference() # -> None
+# Async call
+await asyncEC2Manager.apiReference() # -> None
+```
+
+#### Ping Server
+Respond without doing anything.
+This endpoint is used to check that the service is up.
+
+
+```python
+# Sync calls
+eC2Manager.ping() # -> None
+# Async call
+await asyncEC2Manager.ping() # -> None
+```
+
+
+
+
+### Methods in `taskcluster.Github`
+```python
+import asyncio # Only for async 
+# Create Github client instance
+import taskcluster
+import taskcluster.aio
+
+github = taskcluster.Github(options)
+# Below only for async instances, assume already in coroutine
+loop = asyncio.get_event_loop()
+session = taskcluster.aio.createSession(loop=loop)
+asyncGithub = taskcluster.aio.Github(options, session=session)
+```
+The github service, typically available at
+`github.taskcluster.net`, is responsible for publishing pulse
+messages in response to GitHub events.
+
+This document describes the API end-point for consuming GitHub
+web hooks, as well as some useful consumer APIs.
+
+When Github forbids an action, this service returns an HTTP 403
+with code ForbiddenByGithub.
+#### Ping Server
+Respond without doing anything.
+This endpoint is used to check that the service is up.
+
+
+```python
+# Sync calls
+github.ping() # -> None
+# Async call
+await asyncGithub.ping() # -> None
+```
+
+#### Consume GitHub WebHook
+Capture a GitHub event and publish it via pulse, if it's a push,
+release or pull request.
+
+
+```python
+# Sync calls
+github.githubWebHookConsumer() # -> None
+# Async call
+await asyncGithub.githubWebHookConsumer() # -> None
+```
+
+#### List of Builds
+A paginated list of builds that have been run in
+Taskcluster. Can be filtered on various git-specific
+fields.
+
+
+Required [output schema](v1/build-list.json#)
+
+```python
+# Sync calls
+github.builds() # -> result
+# Async call
+await asyncGithub.builds() # -> result
+```
+
+#### Latest Build Status Badge
+Checks the status of the latest build of a given branch
+and returns corresponding badge svg.
+
+
+
+Takes the following arguments:
+
+  * `owner`
+  * `repo`
+  * `branch`
+
+```python
+# Sync calls
+github.badge(owner, repo, branch) # -> None
+github.badge(owner='value', repo='value', branch='value') # -> None
+# Async call
+await asyncGithub.badge(owner, repo, branch) # -> None
+await asyncGithub.badge(owner='value', repo='value', branch='value') # -> None
+```
+
+#### Get Repository Info
+Returns any repository metadata that is
+useful within Taskcluster related services.
+
+
+
+Takes the following arguments:
+
+  * `owner`
+  * `repo`
+
+Required [output schema](v1/repository.json#)
+
+```python
+# Sync calls
+github.repository(owner, repo) # -> result
+github.repository(owner='value', repo='value') # -> result
+# Async call
+await asyncGithub.repository(owner, repo) # -> result
+await asyncGithub.repository(owner='value', repo='value') # -> result
+```
+
+#### Latest Status for Branch
+For a given branch of a repository, this will always point
+to a status page for the most recent task triggered by that
+branch.
+
+Note: This is a redirect rather than a direct link.
+
+
+
+Takes the following arguments:
+
+  * `owner`
+  * `repo`
+  * `branch`
+
+```python
+# Sync calls
+github.latest(owner, repo, branch) # -> None
+github.latest(owner='value', repo='value', branch='value') # -> None
+# Async call
+await asyncGithub.latest(owner, repo, branch) # -> None
+await asyncGithub.latest(owner='value', repo='value', branch='value') # -> None
+```
+
+#### Post a status against a given changeset
+For a given changeset (SHA) of a repository, this will attach a "commit status"
+on github. These statuses are links displayed next to each revision.
+The status is either OK (green check) or FAILURE (red cross), 
+made of a custom title and link.
+
+
+
+Takes the following arguments:
+
+  * `owner`
+  * `repo`
+  * `sha`
+
+Required [input schema](v1/create-status.json#)
+
+```python
+# Sync calls
+github.createStatus(owner, repo, sha, payload) # -> None
+github.createStatus(payload, owner='value', repo='value', sha='value') # -> None
+# Async call
+await asyncGithub.createStatus(owner, repo, sha, payload) # -> None
+await asyncGithub.createStatus(payload, owner='value', repo='value', sha='value') # -> None
+```
+
+#### Post a comment on a given GitHub Issue or Pull Request
+For a given Issue or Pull Request of a repository, this will write a new message.
+
+
+
+Takes the following arguments:
+
+  * `owner`
+  * `repo`
+  * `number`
+
+Required [input schema](v1/create-comment.json#)
+
+```python
+# Sync calls
+github.createComment(owner, repo, number, payload) # -> None
+github.createComment(payload, owner='value', repo='value', number='value') # -> None
+# Async call
+await asyncGithub.createComment(owner, repo, number, payload) # -> None
+await asyncGithub.createComment(payload, owner='value', repo='value', number='value') # -> None
+```
+
+
+
+
+### Exchanges in `taskcluster.GithubEvents`
+```python
+# Create GithubEvents client instance
+import taskcluster
+githubEvents = taskcluster.GithubEvents(options)
+```
+The github service publishes a pulse
+message for supported github events, translating Github webhook
+events into pulse messages.
+
+This document describes the exchange offered by the taskcluster
+github service
+#### GitHub Pull Request Event
+ * `githubEvents.pullRequest(routingKeyPattern) -> routingKey`
+   * `routingKeyKind` is constant of `primary`  is required  Description: Identifier for the routing-key kind. This is always `"primary"` for the formalized routing key.
+   * `organization` is required  Description: The GitHub `organization` which had an event. All periods have been replaced by % - such that foo.bar becomes foo%bar - and all other special characters aside from - and _ have been stripped.
+   * `repository` is required  Description: The GitHub `repository` which had an event. All periods have been replaced by % - such that foo.bar becomes foo%bar - and all other special characters aside from - and _ have been stripped.
+   * `action` is required  Description: The GitHub `action` which triggered an event. For possible values, see the payload actions property.
+
+#### GitHub push Event
+ * `githubEvents.push(routingKeyPattern) -> routingKey`
+   * `routingKeyKind` is constant of `primary`  is required  Description: Identifier for the routing-key kind. This is always `"primary"` for the formalized routing key.
+   * `organization` is required  Description: The GitHub `organization` which had an event. All periods have been replaced by % - such that foo.bar becomes foo%bar - and all other special characters aside from - and _ have been stripped.
+   * `repository` is required  Description: The GitHub `repository` which had an event. All periods have been replaced by % - such that foo.bar becomes foo%bar - and all other special characters aside from - and _ have been stripped.
+
+#### GitHub release Event
+ * `githubEvents.release(routingKeyPattern) -> routingKey`
+   * `routingKeyKind` is constant of `primary`  is required  Description: Identifier for the routing-key kind. This is always `"primary"` for the formalized routing key.
+   * `organization` is required  Description: The GitHub `organization` which had an event. All periods have been replaced by % - such that foo.bar becomes foo%bar - and all other special characters aside from - and _ have been stripped.
+   * `repository` is required  Description: The GitHub `repository` which had an event. All periods have been replaced by % - such that foo.bar becomes foo%bar - and all other special characters aside from - and _ have been stripped.
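+
+These exchange helpers do not call any API; a typical use is to build a binding for a pulse consumer. A minimal sketch, assuming the methods accept a dict of routing-key pieces (unset fields default to `*`) and return an object with `exchange` and `routingKeyPattern` keys:
+
+```python
+# Build a binding that matches push events for one repository; the
+# organization/repository values here are purely illustrative.
+binding = githubEvents.push({'organization': 'mozilla', 'repository': 'gecko'})
+print(binding['exchange'], binding['routingKeyPattern'])
+```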
+
+
+
+
+### Methods in `taskcluster.Hooks`
+```python
+import asyncio # Only for async 
+# Create Hooks client instance
+import taskcluster
+import taskcluster.aio
+
+hooks = taskcluster.Hooks(options)
+# Below only for async instances, assume already in coroutine
+loop = asyncio.get_event_loop()
+session = taskcluster.aio.createSession(loop=loop)
+asyncHooks = taskcluster.aio.Hooks(options, session=session)
+```
+Hooks are a mechanism for creating tasks in response to events.
+
+Hooks are identified with a `hookGroupId` and a `hookId`.
+
+When an event occurs, the resulting task is automatically created.  The
+task is created using the scope `assume:hook-id:<hookGroupId>/<hookId>`,
+which must have scopes to make the createTask call, including satisfying all
+scopes in `task.scopes`.  The new task has a `taskGroupId` equal to its
+`taskId`, as is the convention for decision tasks.
+
+Hooks can have a "schedule" indicating specific times that new tasks should
+be created.  Each schedule is in a simple cron format, per 
+https://www.npmjs.com/package/cron-parser.  For example:
+ * `['0 0 1 * * *']` -- daily at 1:00 UTC
+ * `['0 0 9,21 * * 1-5', '0 0 12 * * 0,6']` -- weekdays at 9:00 and 21:00 UTC, weekends at noon
+
+The task definition is used as a JSON-e template, with a context depending on how it is fired.  See
+https://docs.taskcluster.net/reference/core/taskcluster-hooks/docs/firing-hooks
+for more information.
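+
+As a sketch, a hook with a schedule might be created as below; the field names are illustrative, and the authoritative shape is given by the `createHook` input schema documented later in this section:
+
+```python
+# Hypothetical hook definition; consult v1/create-hook-request.json# for
+# the required fields.
+hook_definition = {
+    'metadata': {
+        'name': 'nightly-build',
+        'description': 'Runs a build every day at 1:00 UTC',
+        'owner': 'user@example.com',
+        'emailOnError': True,
+    },
+    'schedule': ['0 0 1 * * *'],  # daily at 1:00 UTC
+    'task': {},  # JSON-e task template, see the firing-hooks docs
+}
+hooks.createHook('my-group', 'nightly-build', hook_definition)
+```
+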
+#### Ping Server
+Respond without doing anything.
+This endpoint is used to check that the service is up.
+
+
+```python
+# Sync calls
+hooks.ping() # -> None
+# Async call
+await asyncHooks.ping() # -> None
+```
+
+#### List hook groups
+This endpoint will return a list of all hook groups with at least one hook.
+
+
+Required [output schema](v1/list-hook-groups-response.json#)
+
+```python
+# Sync calls
+hooks.listHookGroups() # -> result
+# Async call
+await asyncHooks.listHookGroups() # -> result
+```
+
+#### List hooks in a given group
+This endpoint will return a list of all the hook definitions within a
+given hook group.
+
+
+
+Takes the following arguments:
+
+  * `hookGroupId`
+
+Required [output schema](v1/list-hooks-response.json#)
+
+```python
+# Sync calls
+hooks.listHooks(hookGroupId) # -> result
+hooks.listHooks(hookGroupId='value') # -> result
+# Async call
+await asyncHooks.listHooks(hookGroupId) # -> result
+await asyncHooks.listHooks(hookGroupId='value') # -> result
+```
+
+#### Get hook definition
+This endpoint will return the hook definition for the given `hookGroupId`
+and hookId.
+
+
+
+Takes the following arguments:
+
+  * `hookGroupId`
+  * `hookId`
+
+Required [output schema](v1/hook-definition.json#)
+
+```python
+# Sync calls
+hooks.hook(hookGroupId, hookId) # -> result
+hooks.hook(hookGroupId='value', hookId='value') # -> result
+# Async call
+await asyncHooks.hook(hookGroupId, hookId) # -> result
+await asyncHooks.hook(hookGroupId='value', hookId='value') # -> result
+```
+
+#### Get hook status
+This endpoint will return the current status of the hook.  This represents a
+snapshot in time and may vary from one call to the next.
+
+
+
+Takes the following arguments:
+
+  * `hookGroupId`
+  * `hookId`
+
+Required [output schema](v1/hook-status.json#)
+
+```python
+# Sync calls
+hooks.getHookStatus(hookGroupId, hookId) # -> result
+hooks.getHookStatus(hookGroupId='value', hookId='value') # -> result
+# Async call
+await asyncHooks.getHookStatus(hookGroupId, hookId) # -> result
+await asyncHooks.getHookStatus(hookGroupId='value', hookId='value') # -> result
+```
+
+#### Create a hook
+This endpoint will create a new hook.
+
+The caller's credentials must include the role that will be used to
+create the task.  That role must satisfy task.scopes as well as the
+necessary scopes to add the task to the queue.
+
+
+
+
+Takes the following arguments:
+
+  * `hookGroupId`
+  * `hookId`
+
+Required [input schema](v1/create-hook-request.json#)
+
+Required [output schema](v1/hook-definition.json#)
+
+```python
+# Sync calls
+hooks.createHook(hookGroupId, hookId, payload) # -> result
+hooks.createHook(payload, hookGroupId='value', hookId='value') # -> result
+# Async call
+await asyncHooks.createHook(hookGroupId, hookId, payload) # -> result
+await asyncHooks.createHook(payload, hookGroupId='value', hookId='value') # -> result
+```
+
+#### Update a hook
+This endpoint will update an existing hook.  All fields except
+`hookGroupId` and `hookId` can be modified.
+
+
+
+Takes the following arguments:
+
+  * `hookGroupId`
+  * `hookId`
+
+Required [input schema](v1/create-hook-request.json#)
+
+Required [output schema](v1/hook-definition.json#)
+
+```python
+# Sync calls
+hooks.updateHook(hookGroupId, hookId, payload) # -> result
+hooks.updateHook(payload, hookGroupId='value', hookId='value') # -> result
+# Async call
+await asyncHooks.updateHook(hookGroupId, hookId, payload) # -> result
+await asyncHooks.updateHook(payload, hookGroupId='value', hookId='value') # -> result
+```
+
+#### Delete a hook
+This endpoint will remove a hook definition.
+
+
+
+Takes the following arguments:
+
+  * `hookGroupId`
+  * `hookId`
+
+```python
+# Sync calls
+hooks.removeHook(hookGroupId, hookId) # -> None
+hooks.removeHook(hookGroupId='value', hookId='value') # -> None
+# Async call
+await asyncHooks.removeHook(hookGroupId, hookId) # -> None
+await asyncHooks.removeHook(hookGroupId='value', hookId='value') # -> None
+```
+
+#### Trigger a hook
+This endpoint will trigger the creation of a task from a hook definition.
+
+The HTTP payload must match the hook's `triggerSchema`.  If it does, it is
+provided as the `payload` property of the JSON-e context used to render the
+task template.
+
+
+
+Takes the following arguments:
+
+  * `hookGroupId`
+  * `hookId`
+
+Required [input schema](v1/trigger-hook.json#)
+
+Required [output schema](v1/task-status.json#)
+
+```python
+# Sync calls
+hooks.triggerHook(hookGroupId, hookId, payload) # -> result
+hooks.triggerHook(payload, hookGroupId='value', hookId='value') # -> result
+# Async call
+await asyncHooks.triggerHook(hookGroupId, hookId, payload) # -> result
+await asyncHooks.triggerHook(payload, hookGroupId='value', hookId='value') # -> result
+```
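+
+For example, a hook whose `triggerSchema` accepts a `branch` property could be triggered as below (hook identifiers and payload shape are illustrative):
+
+```python
+# The payload must validate against the hook's own `triggerSchema`.
+result = hooks.triggerHook('my-group', 'nightly-build', {'branch': 'default'})
+```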
+
+#### Get a trigger token
+Retrieve a unique secret token for triggering the specified hook. This
+token can be deactivated with `resetTriggerToken`.
+
+
+
+Takes the following arguments:
+
+  * `hookGroupId`
+  * `hookId`
+
+Required [output schema](v1/trigger-token-response.json#)
+
+```python
+# Sync calls
+hooks.getTriggerToken(hookGroupId, hookId) # -> result
+hooks.getTriggerToken(hookGroupId='value', hookId='value') # -> result
+# Async call
+await asyncHooks.getTriggerToken(hookGroupId, hookId) # -> result
+await asyncHooks.getTriggerToken(hookGroupId='value', hookId='value') # -> result
+```
+
+#### Reset a trigger token
+Reset the token for triggering a given hook. This invalidates any token
+that may have been issued via `getTriggerToken`, replacing it with a new token.
+
+
+
+Takes the following arguments:
+
+  * `hookGroupId`
+  * `hookId`
+
+Required [output schema](v1/trigger-token-response.json#)
+
+```python
+# Sync calls
+hooks.resetTriggerToken(hookGroupId, hookId) # -> result
+hooks.resetTriggerToken(hookGroupId='value', hookId='value') # -> result
+# Async call
+await asyncHooks.resetTriggerToken(hookGroupId, hookId) # -> result
+await asyncHooks.resetTriggerToken(hookGroupId='value', hookId='value') # -> result
+```
+
+#### Trigger a hook with a token
+This endpoint triggers a defined hook with a valid token.
+
+The HTTP payload must match the hook's `triggerSchema`.  If it does, it is
+provided as the `payload` property of the JSON-e context used to render the
+task template.
+
+
+
+Takes the following arguments:
+
+  * `hookGroupId`
+  * `hookId`
+  * `token`
+
+Required [input schema](v1/trigger-hook.json#)
+
+Required [output schema](v1/task-status.json#)
+
+```python
+# Sync calls
+hooks.triggerHookWithToken(hookGroupId, hookId, token, payload) # -> result
+hooks.triggerHookWithToken(payload, hookGroupId='value', hookId='value', token='value') # -> result
+# Async call
+await asyncHooks.triggerHookWithToken(hookGroupId, hookId, token, payload) # -> result
+await asyncHooks.triggerHookWithToken(payload, hookGroupId='value', hookId='value', token='value') # -> result
+```
+
+
+
+
+### Methods in `taskcluster.Index`
+```python
+import asyncio # Only for async 
+# Create Index client instance
+import taskcluster
+import taskcluster.aio
+
+index = taskcluster.Index(options)
+# Below only for async instances, assume already in coroutine
+loop = asyncio.get_event_loop()
+session = taskcluster.aio.createSession(loop=loop)
+asyncIndex = taskcluster.aio.Index(options, session=session)
+```
+The task index, typically available at `index.taskcluster.net`, is
+responsible for indexing tasks. The service ensures that tasks can be
+located by recency and/or arbitrary strings. Common use-cases include:
+
+ * Locate tasks by git or mercurial `<revision>`, or
+ * Locate latest task from given `<branch>`, such as a release.
+
+**Index hierarchy**, tasks are indexed in a dot (`.`) separated hierarchy
+called a namespace. For example a task could be indexed with the index path
+`some-app.<revision>.linux-64.release-build`. In this case the following
+namespaces are created:
+
+ 1. `some-app`,
+ 2. `some-app.<revision>`, and,
+ 3. `some-app.<revision>.linux-64`
+
+Inside the namespace `some-app.<revision>` you can find the namespace
+`some-app.<revision>.linux-64` inside which you can find the indexed task
+`some-app.<revision>.linux-64.release-build`. This is an example of indexing
+builds for a given platform and revision.
+
+**Task Rank**, when a task is indexed, it is assigned a `rank` (defaults
+to `0`). If another task is already indexed in the same namespace with
+lower or equal `rank`, the index for that task will be overwritten. For example
+consider index path `mozilla-central.linux-64.release-build`. In
+this case one might choose to use a UNIX timestamp or mercurial revision
+number as `rank`. This way the latest completed linux 64 bit release
+build is always available at `mozilla-central.linux-64.release-build`.
+
+Note that this does mean index paths are not immutable: the same path may
+point to a different task now than it did a moment ago.
+
+**Indexed Data**, when a task is retrieved from the index the result includes
+a `taskId` and an additional user-defined JSON blob that was indexed with
+the task.
+
+**Entry Expiration**, all indexed entries must have an expiration date.
+Typically this defaults to one year, if not specified. If you are
+indexing tasks to make it easy to find artifacts, consider using the
+artifact's expiration date.
+
+**Valid Characters**, all keys in a namespace `<key1>.<key2>` must be
+in the form `/[a-zA-Z0-9_!~*'()%-]+/`. Observe that this is URL-safe and
+that if you need to use another character you can URL-encode it.
+
+**Indexing Routes**, tasks can be indexed using the API below, but the
+most common way to index tasks is adding a custom route to `task.routes` of the
+form `index.<namespace>`. In order to add this route to a task you'll
+need the scope `queue:route:index.<namespace>`. When a task has
+this route, it will be indexed when the task is **completed successfully**.
+The task will be indexed with `rank`, `data` and `expires` as specified
+in `task.extra.index`. See the example below:
+
+```js
+{
+  payload:  { /* ... */ },
+  routes: [
+    // index.<namespace> prefixed routes; tasks CC'ed to such a route will
+    // be indexed under the given <namespace>
+    "index.mozilla-central.linux-64.release-build",
+    "index.<revision>.linux-64.release-build"
+  ],
+  extra: {
+    // Optional details for indexing service
+    index: {
+      // Ordering, this taskId will overwrite any thing that has
+      // rank <= 4000 (defaults to zero)
+      rank:       4000,
+
+      // Specify when the entries expire (Defaults to 1 year)
+      expires:          new Date().toJSON(),
+
+      // A little informal data to store along with taskId
+      // (less than 16 kb when encoded as JSON)
+      data: {
+        hgRevision:   "...",
+        commitMessage: "...",
+        // ...
+      }
+    },
+    // Extra properties for other services...
+  }
+  // Other task properties...
+}
+```
+
+**Remark**, when indexing tasks using custom routes, it's also possible
+to listen for messages about these tasks. For
+example one could bind to `route.index.some-app.*.release-build`,
+and pick up all messages about release builds. Hence, it is a
+good idea to document task index hierarchies, as these make up extension
+points in their own right.
+#### Ping Server
+Respond without doing anything.
+This endpoint is used to check that the service is up.
+
+
+```python
+# Sync calls
+index.ping() # -> None
+# Async call
+await asyncIndex.ping() # -> None
+```
+
+#### Find Indexed Task
+Find a task by index path, returning the highest-rank task with that path. If no
+task exists for the given path, this API end-point will respond with a 404 status.
+
+
+
+Takes the following arguments:
+
+  * `indexPath`
+
+Required [output schema](v1/indexed-task-response.json#)
+
+```python
+# Sync calls
+index.findTask(indexPath) # -> result
+index.findTask(indexPath='value') # -> result
+# Async call
+await asyncIndex.findTask(indexPath) # -> result
+await asyncIndex.findTask(indexPath='value') # -> result
+```
+
+#### List Namespaces
+List the namespaces immediately under a given namespace.
+
+This endpoint
+lists up to 1000 namespaces. If more namespaces are present, a
+`continuationToken` will be returned, which can be given in the next
+request. For the initial request, the payload should be an empty JSON
+object.
+
+
+
+Takes the following arguments:
+
+  * `namespace`
+
+Required [output schema](v1/list-namespaces-response.json#)
+
+```python
+# Sync calls
+index.listNamespaces(namespace) # -> result
+index.listNamespaces(namespace='value') # -> result
+# Async call
+await asyncIndex.listNamespaces(namespace) # -> result
+await asyncIndex.listNamespaces(namespace='value') # -> result
+```
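+
+A sketch of following the `continuationToken` across requests, assuming the client forwards query-string options via a `query` keyword argument and that the response carries a `namespaces` list:
+
+```python
+namespaces = []
+query = {}
+while True:
+    result = index.listNamespaces('some-app', query=query)
+    namespaces.extend(result['namespaces'])
+    token = result.get('continuationToken')
+    if not token:
+        break  # no token means all results have been seen
+    query = {'continuationToken': token}
+```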
+
+#### List Tasks
+List the tasks immediately under a given namespace.
+
+This endpoint
+lists up to 1000 tasks. If more tasks are present, a
+`continuationToken` will be returned, which can be given in the next
+request. For the initial request, the payload should be an empty JSON
+object.
+
+**Remark**, this end-point is designed for humans browsing for tasks, not
+services, as that makes little sense.
+
+
+
+Takes the following arguments:
+
+  * `namespace`
+
+Required [output schema](v1/list-tasks-response.json#)
+
+```python
+# Sync calls
+index.listTasks(namespace) # -> result
+index.listTasks(namespace='value') # -> result
+# Async call
+await asyncIndex.listTasks(namespace) # -> result
+await asyncIndex.listTasks(namespace='value') # -> result
+```
+
+#### Insert Task into Index
+Insert a task into the index.  If the new rank is less than the existing rank
+at the given index path, the task is not indexed but the response is still 200 OK.
+
+Please see the introduction above for information
+about indexing successfully completed tasks automatically using custom routes.
+
+
+
+Takes the following arguments:
+
+  * `namespace`
+
+Required [input schema](v1/insert-task-request.json#)
+
+Required [output schema](v1/indexed-task-response.json#)
+
+```python
+# Sync calls
+index.insertTask(namespace, payload) # -> result
+index.insertTask(payload, namespace='value') # -> result
+# Async call
+await asyncIndex.insertTask(namespace, payload) # -> result
+await asyncIndex.insertTask(payload, namespace='value') # -> result
+```
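+
+Tying this to the rank discussion above, a sketch of indexing a build under a revision-based path; the `taskId` and field values are illustrative, and v1/insert-task-request.json# is authoritative:
+
+```python
+import datetime
+
+expires = datetime.datetime.utcnow() + datetime.timedelta(days=365)
+index.insertTask('some-app.abc123.linux-64.release-build', {
+    'taskId': 'fN1SbArXTPSVFNUvaOlinQ',  # hypothetical taskId
+    'rank': 4000,                        # overwrites entries with rank <= 4000
+    'data': {'hgRevision': 'abc123'},
+    'expires': expires.isoformat() + 'Z',
+})
+```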
+
+#### Get Artifact From Indexed Task
+Find a task by index path and redirect to the artifact on the most recent
+run with the given `name`.
+
+Note that multiple calls to this endpoint may return artifacts from different tasks
+if a new task is inserted into the index between calls. Avoid using this method as
+a stable link to multiple, connected files if the index path does not contain a
+unique identifier.  For example, the following two links may return unrelated files:
+* https://index.taskcluster.net/task/some-app.win64.latest.installer/artifacts/public/installer.exe
+* https://index.taskcluster.net/task/some-app.win64.latest.installer/artifacts/public/debug-symbols.zip
+
+This problem can be remedied by including the revision in the index path or by bundling both
+installer and debug symbols into a single artifact.
+
+If no task exists for the given index path, this API end-point responds with 404.
+
+
+
+Takes the following arguments:
+
+  * `indexPath`
+  * `name`
+
+```python
+# Sync calls
+index.findArtifactFromTask(indexPath, name) # -> None
+index.findArtifactFromTask(indexPath='value', name='value') # -> None
+# Async call
+await asyncIndex.findArtifactFromTask(indexPath, name) # -> None
+await asyncIndex.findArtifactFromTask(indexPath='value', name='value') # -> None
+```
+
+
+
+
+### Methods in `taskcluster.Login`
+```python
+import asyncio # Only for async 
+# Create Login client instance
+import taskcluster
+import taskcluster.aio
+
+login = taskcluster.Login(options)
+# Below only for async instances, assume already in coroutine
+loop = asyncio.get_event_loop()
+session = taskcluster.aio.createSession(loop=loop)
+asyncLogin = taskcluster.aio.Login(options, session=session)
+```
+The Login service serves as the interface between external authentication
+systems and Taskcluster credentials.
+#### Get Taskcluster credentials given a suitable `access_token`
+Given an OIDC `access_token` from a trusted OpenID provider, return a
+set of Taskcluster credentials for use on behalf of the identified
+user.
+
+This method is typically not called with a Taskcluster client library
+and does not accept Hawk credentials. The `access_token` should be
+given in an `Authorization` header:
+```
+Authorization: Bearer abc.xyz
+```
+
+The `access_token` is first verified against the named
+`provider`, then passed to the provider's API to retrieve a user
+profile. That profile is then used to generate Taskcluster credentials
+appropriate to the user. Note that the resulting credentials may or may
+not include a `certificate` property. Callers should be prepared for either
+alternative.
+
+The given credentials will expire in a relatively short time. Callers should
+monitor the expiration and, when necessary, refresh the credentials by
+calling this endpoint again.
+
+
+
+Takes the following arguments:
+
+  * `provider`
+
+Required [output schema](http://schemas.taskcluster.net/login/v1/oidc-credentials-response.json)
+
+```python
+# Sync calls
+login.oidcCredentials(provider) # -> result
+login.oidcCredentials(provider='value') # -> result
+# Async call
+await asyncLogin.oidcCredentials(provider) # -> result
+await asyncLogin.oidcCredentials(provider='value') # -> result
+```
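+
+Since this method is normally called without a Taskcluster client, a plain HTTP sketch may be clearer; the exact route and the provider name below are assumptions:
+
+```python
+import requests
+
+# Hypothetical direct call with the access_token in a Bearer header,
+# per the description above.
+resp = requests.get(
+    'https://login.taskcluster.net/v1/oidc-credentials/mozilla-auth0',
+    headers={'Authorization': 'Bearer abc.xyz'},
+)
+credentials = resp.json()  # may or may not include a `certificate`
+```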
+
+#### Ping Server
+Respond without doing anything.
+This endpoint is used to check that the service is up.
+
+
+```python
+# Sync calls
+login.ping() # -> None
+# Async call
+await asyncLogin.ping() # -> None
+```
+
+
+
+
+### Methods in `taskcluster.Notify`
+```python
+import asyncio # Only for async 
+# Create Notify client instance
+import taskcluster
+import taskcluster.aio
+
+notify = taskcluster.Notify(options)
+# Below only for async instances, assume already in coroutine
+loop = asyncio.get_event_loop()
+session = taskcluster.aio.createSession(loop=loop)
+asyncNotify = taskcluster.aio.Notify(options, session=session)
+```
+The notification service, typically available at `notify.taskcluster.net`,
+listens for tasks with associated notifications and handles requests to
+send emails and post pulse messages.
+#### Ping Server
+Respond without doing anything.
+This endpoint is used to check that the service is up.
+
+
+```python
+# Sync calls
+notify.ping() # -> None
+# Async call
+await asyncNotify.ping() # -> None
+```
+
+#### Send an Email
+Send an email to `address`. The content is markdown and will be rendered
+to HTML, but both the HTML and raw markdown text will be sent in the
+email. If a link is included, it will be rendered to a nice button in the
+HTML version of the email.
+
+
+Required [input schema](v1/email-request.json#)
+
+```python
+# Sync calls
+notify.email(payload) # -> None
+# Async call
+await asyncNotify.email(payload) # -> None
+```
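+
+An illustrative request; the exact fields (including the optional `link`) are defined by v1/email-request.json#:
+
+```python
+notify.email({
+    'address': 'user@example.com',
+    'subject': 'Nightly build finished',
+    'content': 'The build **succeeded**.',  # markdown, rendered to HTML
+    'link': {'text': 'Inspect task', 'href': 'https://tools.taskcluster.net/'},
+})
+```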
+
+#### Publish a Pulse Message
+Publish a message on pulse with the given `routingKey`.
+
+
+Required [input schema](v1/pulse-request.json#)
+
+```python
+# Sync calls
+notify.pulse(payload) # -> None
+# Async call
+await asyncNotify.pulse(payload) # -> None
+```
+
+#### Post IRC Message
+Post a message on IRC to a specific channel or user, or a specific user
+on a specific channel.
+
+Success of this API method does not imply the message was successfully
+posted. This API method merely inserts the IRC message into a queue
+that will be processed by a background process.
+This allows us to re-send the message in face of connection issues.
+
+However, if the user isn't online the message will be dropped without
+error. We may improve this behavior in the future. For now, just keep
+in mind that IRC is a best-effort service.
+
+
+Required [input schema](v1/irc-request.json#)
+
+```python
+# Sync calls
+notify.irc(payload) # -> None
+# Async call
+await asyncNotify.irc(payload) # -> None
+```
+
+
+
+
+### Methods in `taskcluster.PurgeCache`
+```python
+import asyncio # Only for async 
+# Create PurgeCache client instance
+import taskcluster
+import taskcluster.aio
+
+purgeCache = taskcluster.PurgeCache(options)
+# Below only for async instances, assume already in coroutine
+loop = asyncio.get_event_loop()
+session = taskcluster.aio.createSession(loop=loop)
+asyncPurgeCache = taskcluster.aio.PurgeCache(options, session=session)
+```
+The purge-cache service, typically available at
+`purge-cache.taskcluster.net`, is responsible for publishing a pulse
+message for workers, so they can purge cache upon request.
+
+This document describes the API end-point for publishing the pulse
+message. This is mainly intended to be used by tools.
+#### Ping Server
+Respond without doing anything.
+This endpoint is used to check that the service is up.
+
+
+```python
+# Sync calls
+purgeCache.ping() # -> None
+# Async call
+await asyncPurgeCache.ping() # -> None
+```
+
+#### Purge Worker Cache
+Publish a purge-cache message to purge caches named `cacheName` with
+`provisionerId` and `workerType` in the routing-key. Workers should
+be listening for this message and purge caches when they see it.
+
+
+
+Takes the following arguments:
+
+  * `provisionerId`
+  * `workerType`
+
+Required [input schema](v1/purge-cache-request.json#)
+
+```python
+# Sync calls
+purgeCache.purgeCache(provisionerId, workerType, payload) # -> None
+purgeCache.purgeCache(payload, provisionerId='value', workerType='value') # -> None
+# Async call
+await asyncPurgeCache.purgeCache(provisionerId, workerType, payload) # -> None
+await asyncPurgeCache.purgeCache(payload, provisionerId='value', workerType='value') # -> None
+```
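+
+For example, purging one named cache for a worker type (the identifiers are illustrative; the payload shape is defined by v1/purge-cache-request.json#):
+
+```python
+purgeCache.purgeCache('aws-provisioner-v1', 'gecko-t-linux-large',
+                      {'cacheName': 'level-3-checkouts'})
+```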
+
+#### All Open Purge Requests
+This is useful mostly for administrators to view
+the set of open purge requests. It should not
+be used by workers. They should use the purgeRequests
+endpoint that is specific to their workerType and
+provisionerId.
+
+
+Required [output schema](v1/all-purge-cache-request-list.json#)
+
+```python
+# Sync calls
+purgeCache.allPurgeRequests() # -> result
+# Async call
+await asyncPurgeCache.allPurgeRequests() # -> result
+```
+
+#### Open Purge Requests for a provisionerId/workerType pair
+List of caches that need to be purged if they are from before
+a certain time. This is safe to be used in automation from
+workers.
+
+
+
+Takes the following arguments:
+
+  * `provisionerId`
+  * `workerType`
+
+Required [output schema](v1/purge-cache-request-list.json#)
+
+```python
+# Sync calls
+purgeCache.purgeRequests(provisionerId, workerType) # -> result
+purgeCache.purgeRequests(provisionerId='value', workerType='value') # -> result
+# Async call
+await asyncPurgeCache.purgeRequests(provisionerId, workerType) # -> result
+await asyncPurgeCache.purgeRequests(provisionerId='value', workerType='value') # -> result
+```
+
+
+
+
+### Exchanges in `taskcluster.PurgeCacheEvents`
+```python
+# Create PurgeCacheEvents client instance
+import taskcluster
+purgeCacheEvents = taskcluster.PurgeCacheEvents(options)
+```
+The purge-cache service, typically available at
+`purge-cache.taskcluster.net`, is responsible for publishing a pulse
+message for workers, so they can purge cache upon request.
+
+This document describes the exchange offered for workers by the
+cache-purge service.
+#### Purge Cache Messages
+ * `purgeCacheEvents.purgeCache(routingKeyPattern) -> routingKey`
+   * `routingKeyKind` is constant of `primary`  is required  Description: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key.
+   * `provisionerId` is required  Description: `provisionerId` under which to purge cache.
+   * `workerType` is required  Description: `workerType` for which to purge cache.
+
+
+
+
+### Methods in `taskcluster.Queue`
+```python
+import asyncio # Only for async 
+# Create Queue client instance
+import taskcluster
+import taskcluster.aio
+
+queue = taskcluster.Queue(options)
+# Below only for async instances, assume already in coroutine
+loop = asyncio.get_event_loop()
+session = taskcluster.aio.createSession(loop=loop)
+asyncQueue = taskcluster.aio.Queue(options, session=session)
+```
+The queue, typically available at `queue.taskcluster.net`, is responsible
+for accepting tasks and tracking their state as they are executed by
+workers, in order to ensure they are eventually resolved.
+
+This document describes the API end-points offered by the queue. These
+end-points target the following audiences:
+ * Schedulers, who create tasks to be executed,
+ * Workers, who execute tasks, and
+ * Tools that want to inspect the state of a task.
+#### Ping Server
+Respond without doing anything.
+This endpoint is used to check that the service is up.
+
+
+```python
+# Sync calls
+queue.ping() # -> None
+# Async call
+await asyncQueue.ping() # -> None
+```
+
+#### Get Task Definition
+This end-point will return the task-definition. Notice that the task
+definition may have been modified by the queue; if an optional property is
+not specified the queue may provide a default value.
+
+
+
+Takes the following arguments:
+
+  * `taskId`
+
+Required [output schema](v1/task.json#)
+
+```python
+# Sync calls
+queue.task(taskId) # -> result
+queue.task(taskId='value') # -> result
+# Async call
+await asyncQueue.task(taskId) # -> result
+await asyncQueue.task(taskId='value') # -> result
+```
+
+#### Get task status
+Get task status structure from `taskId`
+
+
+
+Takes the following arguments:
+
+  * `taskId`
+
+Required [output schema](v1/task-status-response.json#)
+
+```python
+# Sync calls
+queue.status(taskId) # -> result
+queue.status(taskId='value') # -> result
+# Async call
+await asyncQueue.status(taskId) # -> result
+await asyncQueue.status(taskId='value') # -> result
+```
+
+#### List Task Group
+List tasks sharing the same `taskGroupId`.
+
+As a task-group may contain an unbounded number of tasks, this end-point
+may return a `continuationToken`. To continue listing tasks you must call
+`listTaskGroup` again with the `continuationToken` as the
+query-string option `continuationToken`.
+
+By default this end-point will try to return up to 1000 members in one
+request. But it **may return less**, even if more tasks are available.
+It may also return a `continuationToken` even though there are no more
+results. However, you can only be sure to have seen all results if you
+keep calling `listTaskGroup` with the last `continuationToken` until you
+get a result without a `continuationToken`.
+
+If you are not interested in listing all the members at once, you may
+use the query-string option `limit` to return fewer.
+
+
+
+Takes the following arguments:
+
+  * `taskGroupId`
+
+Required [output schema](v1/list-task-group-response.json#)
+
+```python
+# Sync calls
+queue.listTaskGroup(taskGroupId) # -> result
+queue.listTaskGroup(taskGroupId='value') # -> result
+# Async call
+await asyncQueue.listTaskGroup(taskGroupId) # -> result
+await asyncQueue.listTaskGroup(taskGroupId='value') # -> result
+```
+
+#### List Dependent Tasks
+List tasks that depend on the given `taskId`.
+
+As many tasks from different task-groups may depend on a single task,
+this end-point may return a `continuationToken`. To continue listing
+tasks you must call `listDependentTasks` again with the
+`continuationToken` as the query-string option `continuationToken`.
+
+By default this end-point will try to return up to 1000 tasks in one
+request. But it **may return less**, even if more tasks are available.
+It may also return a `continuationToken` even though there are no more
+results. However, you can only be sure to have seen all results if you
+keep calling `listDependentTasks` with the last `continuationToken` until
+you get a result without a `continuationToken`.
+
+If you are not interested in listing all the tasks at once, you may
+use the query-string option `limit` to return fewer.
+
+
+
+Takes the following arguments:
+
+  * `taskId`
+
+Required [output schema](v1/list-dependent-tasks-response.json#)
+
+```python
+# Sync calls
+queue.listDependentTasks(taskId) # -> result
+queue.listDependentTasks(taskId='value') # -> result
+# Async call
+await asyncQueue.listDependentTasks(taskId) # -> result
+await asyncQueue.listDependentTasks(taskId='value') # -> result
+```
+
+#### Create New Task
+Create a new task. This is an **idempotent** operation, so repeat it if
+you get an internal server error or the network connection is dropped.
+
+**Task `deadline`**: the deadline property can be no more than 5 days
+into the future. This is to limit the amount of pending tasks not being
+taken care of. Ideally, you should use a much shorter deadline.
+
+**Task expiration**: the `expires` property must be greater than the
+task `deadline`. If not provided it will default to `deadline` + one
+year. Notice that artifacts created by the task must expire before the task.
+
+**Task specific routing-keys**: using the `task.routes` property you may
+define task specific routing-keys. If a task has a task specific 
+routing-key: `<route>`, then when the AMQP message about the task is
+published, the message will be CC'ed with the routing-key: 
+`route.<route>`. This is useful if you want another component to listen
+for completed tasks you have posted.  The caller must have scope
+`queue:route:<route>` for each route.
+
+**Dependencies**: any tasks referenced in `task.dependencies` must have
+already been created at the time of this call.
+
+**Scopes**: Note that the scopes required to complete this API call depend
+on the content of the `scopes`, `routes`, `schedulerId`, `priority`,
+`provisionerId`, and `workerType` properties of the task definition.
+
+**Legacy Scopes**: The `queue:create-task:..` scope without a priority and
+the `queue:define-task:..` and `queue:task-group-id:..` scopes are considered
+legacy and should not be used. Note that the new, non-legacy scopes require
+a `queue:scheduler-id:..` scope as well as scopes for the proper priority.
+
+
+
+Takes the following arguments:
+
+  * `taskId`
+
+Required [input schema](v1/create-task-request.json#)
+
+Required [output schema](v1/task-status-response.json#)
+
+```python
+# Sync calls
+queue.createTask(taskId, payload) # -> result
+queue.createTask(payload, taskId='value') # -> result
+# Async call
+await asyncQueue.createTask(taskId, payload) # -> result
+await asyncQueue.createTask(payload, taskId='value') # -> result
+```
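+
+A minimal sketch of a task definition, assuming `taskcluster.slugId()` is available for generating the `taskId`; the values are illustrative and v1/create-task-request.json# is authoritative:
+
+```python
+import datetime
+import taskcluster
+
+now = datetime.datetime.utcnow()
+queue.createTask(taskcluster.slugId(), {
+    'provisionerId': 'aws-provisioner-v1',  # illustrative values
+    'workerType': 'tutorial',
+    'created': now.isoformat() + 'Z',
+    'deadline': (now + datetime.timedelta(hours=3)).isoformat() + 'Z',
+    'payload': {},
+    'metadata': {
+        'name': 'Example task',
+        'description': 'Shows the shape of createTask input',
+        'owner': 'user@example.com',
+        'source': 'https://example.com/task-source',
+    },
+})
+```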
+
+#### Define Task
+**Deprecated**, this is the same as `createTask` with a **self-dependency**.
+This is only present for legacy reasons.
+
+
+
+Takes the following arguments:
+
+  * `taskId`
+
+Required [input schema](v1/create-task-request.json#)
+
+Required [output schema](v1/task-status-response.json#)
+
+```python
+# Sync calls
+queue.defineTask(taskId, payload) # -> result
+queue.defineTask(payload, taskId='value') # -> result
+# Async call
+await asyncQueue.defineTask(taskId, payload) # -> result
+await asyncQueue.defineTask(payload, taskId='value') # -> result
+```
+
+#### Schedule Defined Task
+scheduleTask will schedule a task to be executed, even if it has
+unresolved dependencies. A task would otherwise only be scheduled if
+its dependencies were resolved.
+
+This is useful if you have defined a task that depends on itself or on
+some other task that has not been resolved, but you wish the task to be
+scheduled immediately.
+
+This will announce the task as pending and workers will be allowed to
+claim it and resolve the task.
+
+**Note** this operation is **idempotent** and will not fail or complain
+if called with a `taskId` that is already scheduled, or even resolved.
+To reschedule a task previously resolved, use `rerunTask`.
+
+
+
+Takes the following arguments:
+
+  * `taskId`
+
+Required [output schema](v1/task-status-response.json#)
+
+```python
+# Sync calls
+queue.scheduleTask(taskId) # -> result
+queue.scheduleTask(taskId='value') # -> result
+# Async call
+await asyncQueue.scheduleTask(taskId) # -> result
+await asyncQueue.scheduleTask(taskId='value') # -> result
+```
+
+#### Rerun a Resolved Task
+This method _reruns_ a previously resolved task, even if it was
+_completed_. This is useful if your task completes unsuccessfully, and
+you just want to run it from scratch again. This will also reset the
+number of `retries` allowed.
+
+Remember that `retries` in the task status counts the number of runs that
+the queue has started because the worker stopped responding, for example
+because a spot node died.
+
+**Remark** this operation is idempotent: if you try to rerun a task that
+is neither `failed` nor `completed`, this operation will just return
+the current task status.
+
+
+
+Takes the following arguments:
+
+  * `taskId`
+
+Required [output schema](v1/task-status-response.json#)
+
+```python
+# Sync calls
+queue.rerunTask(taskId) # -> result
+queue.rerunTask(taskId='value') # -> result
+# Async call
+await asyncQueue.rerunTask(taskId) # -> result
+await asyncQueue.rerunTask(taskId='value') # -> result
+```
+
+#### Cancel Task
+This method will cancel a task that is either `unscheduled`, `pending` or
+`running`. It will resolve the current run as `exception` with
+`reasonResolved` set to `canceled`. If the task isn't scheduled yet, i.e.
+it doesn't have any runs, an initial run will be added and resolved as
+described above. Hence, after canceling a task, it cannot be scheduled
+with `queue.scheduleTask`, but a new run can be created with
+`queue.rerunTask`. These semantics are equivalent to calling
+`queue.scheduleTask` immediately followed by `queue.cancelTask`.
+
+**Remark** this operation is idempotent: if you try to cancel a task that
+isn't `unscheduled`, `pending` or `running`, this operation will just
+return the current task status.
+
+
+
+Takes the following arguments:
+
+  * `taskId`
+
+Required [output schema](v1/task-status-response.json#)
+
+```python
+# Sync calls
+queue.cancelTask(taskId) # -> result
+queue.cancelTask(taskId='value') # -> result
+# Async call
+await asyncQueue.cancelTask(taskId) # -> result
+await asyncQueue.cancelTask(taskId='value') # -> result
+```
+
+#### Claim Work
+Claim pending task(s) for the given `provisionerId`/`workerType` queue.
+
+If any work is available (even if fewer than the requested number of
+tasks), this will return immediately. Otherwise, it will block for tens of
+seconds waiting for work.  If no work appears, it will return an empty
+list of tasks.  Callers should sleep a short while (to avoid denial of
+service in an error condition) and call the endpoint again.  This is a
+simple implementation of "long polling".
+
+
+
+Takes the following arguments:
+
+  * `provisionerId`
+  * `workerType`
+
+Required [input schema](v1/claim-work-request.json#)
+
+Required [output schema](v1/claim-work-response.json#)
+
+```python
+# Sync calls
+queue.claimWork(provisionerId, workerType, payload) # -> result
+queue.claimWork(payload, provisionerId='value', workerType='value') # -> result
+# Async call
+await asyncQueue.claimWork(provisionerId, workerType, payload) # -> result
+await asyncQueue.claimWork(payload, provisionerId='value', workerType='value') # -> result
+```
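+
+A sketch of the long-polling loop described above; the request and response fields follow v1/claim-work-{request,response}.json#, and `run_task` is a hypothetical task runner:
+
+```python
+import time
+
+while True:
+    result = queue.claimWork('aws-provisioner-v1', 'tutorial', {
+        'workerGroup': 'my-worker-group',
+        'workerId': 'my-worker-1',
+        'tasks': 4,  # claim up to 4 tasks at once
+    })
+    if not result['tasks']:
+        time.sleep(5)  # avoid hammering the queue when no work appears
+        continue
+    for claim in result['tasks']:
+        run_task(claim)  # hypothetical: execute the claimed task
+```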
+
+#### Claim Task
+claim a task - never documented
+
+
+
+Takes the following arguments:
+
+  * `taskId`
+  * `runId`
+
+Required [input schema](v1/task-claim-request.json#)
+
+Required [output schema](v1/task-claim-response.json#)
+
+```python
+# Sync calls
+queue.claimTask(taskId, runId, payload) # -> result
+queue.claimTask(payload, taskId='value', runId='value') # -> result
+# Async call
+await asyncQueue.claimTask(taskId, runId, payload) # -> result
+await asyncQueue.claimTask(payload, taskId='value', runId='value') # -> result
+```
+
+#### Reclaim task
+Refresh the claim for a specific `runId` for given `taskId`. This updates
+the `takenUntil` property and returns a new set of temporary credentials
+for performing requests on behalf of the task. These credentials should
+be used in-place of the credentials returned by `claimWork`.
+
+The `reclaimTask` request serves to:
+ * Postpone `takenUntil` preventing the queue from resolving
+   `claim-expired`,
+ * Refresh temporary credentials used for processing the task, and
+ * Abort execution if the task/run has been resolved.
+
+If the `takenUntil` timestamp is exceeded the queue will resolve the run
+as _exception_ with reason `claim-expired`, and proceed to retry the
+task. This ensures that tasks are retried, even if workers disappear
+without warning.
+
+If the task is resolved, this end-point will return `409` reporting
+`RequestConflict`. This typically happens if the task has been canceled
+or the `task.deadline` has been exceeded. If reclaiming fails, workers
+should abort the task and forget about the given `runId`. There is no
+need to resolve the run or upload artifacts.
+
+
+
+Takes the following arguments:
+
+  * `taskId`
+  * `runId`
+
+Required [output schema](v1/task-reclaim-response.json#)
+
+```python
+# Sync calls
+queue.reclaimTask(taskId, runId) # -> result
+queue.reclaimTask(taskId='value', runId='value') # -> result
+# Async call
+await asyncQueue.reclaimTask(taskId, runId) # -> result
+await asyncQueue.reclaimTask(taskId='value', runId='value') # -> result
+```
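+
+A sketch of the reclaim-and-abort behaviour described above; the exception class and its `status_code` attribute are assumptions about this client:
+
+```python
+from taskcluster.exceptions import TaskclusterRestFailure
+
+try:
+    claim = queue.reclaimTask(taskId, runId)
+    credentials = claim['credentials']  # use in place of claimWork's
+except TaskclusterRestFailure as e:
+    if e.status_code == 409:
+        abort_task()  # hypothetical: stop work and forget this runId
+    else:
+        raise
+```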
+
+#### Report Run Completed
+Report a task completed, resolving the run as `completed`.
+
+
+
+Takes the following arguments:
+
+  * `taskId`
+  * `runId`
+
+Required [output schema](v1/task-status-response.json#)
+
+```python
+# Sync calls
+queue.reportCompleted(taskId, runId) # -> result
+queue.reportCompleted(taskId='value', runId='value') # -> result
+# Async call
+await asyncQueue.reportCompleted(taskId, runId) # -> result
+await asyncQueue.reportCompleted(taskId='value', runId='value') # -> result
+```
+
+#### Report Run Failed
+Report a run failed, resolving the run as `failed`. Use this to resolve
+a run that failed because the task specific code behaved unexpectedly.
+For example the task exited non-zero, or didn't produce expected output.
+
+Do not use this if the task couldn't be run because of a malformed
+payload, or another unexpected condition. In these cases we have a task
+exception, which should be reported with `reportException`.
+
+
+
+Takes the following arguments:
+
+  * `taskId`
+  * `runId`
+
+Required [output schema](v1/task-status-response.json#)
+
+```python
+# Sync calls
+queue.reportFailed(taskId, runId) # -> result
+queue.reportFailed(taskId='value', runId='value') # -> result
+# Async call
+await asyncQueue.reportFailed(taskId, runId) # -> result
+await asyncQueue.reportFailed(taskId='value', runId='value') # -> result
+```
+
+#### Report Task Exception
+Resolve a run as _exception_. Generally, you will want to report tasks as
+failed instead of exception. You should `reportException` if,
+
+  * The `task.payload` is invalid,
+  * Non-existent resources are referenced,
+  * Declared actions cannot be executed due to unavailable resources,
+  * The worker had to shutdown prematurely,
+  * The worker experienced an unknown error, or,
+  * The task explicitly requested a retry.
+
+Do not use this to signal that some user-specified code crashed for any
+reason specific to this code. If user-specific code hits a resource that
+is temporarily unavailable, the worker should report the task as _failed_.
+
+
+
+Takes the following arguments:
+
+  * `taskId`
+  * `runId`
+
+Required [input schema](v1/task-exception-request.json#)
+
+Required [output schema](v1/task-status-response.json#)
+
+```python
+# Sync calls
+queue.reportException(taskId, runId, payload) # -> result
+queue.reportException(payload, taskId='value', runId='value') # -> result
+# Async call
+await asyncQueue.reportException(taskId, runId, payload) # -> result
+await asyncQueue.reportException(payload, taskId='value', runId='value') # -> result
+```
+
+#### Create Artifact
+This API end-point creates an artifact for a specific run of a task. This
+should **only** be used by a worker currently operating on this task, or
+from a process running within the task (ie. on the worker).
+
+All artifacts must specify when they `expires`; the queue will
+automatically take care of deleting artifacts past their
+expiration point. This feature makes it feasible to upload large
+intermediate artifacts from data processing applications, as the
+artifacts can be set to expire a few days later.
+
+We currently support 3 different `storageType`s; each storage type has
+slightly different features and in some cases different semantics.
+We also have 2 deprecated `storageType`s which are only maintained for
+backwards compatibility and should not be used in new implementations.
+
+**Blob artifacts**, are useful for storing large files.  Currently, these
+are all stored in S3 but there are facilities for adding support for other
+backends in the future.  A call for this type of artifact must provide information
+about the file which will be uploaded.  This includes sha256 sums and sizes.
+This method will return a list of general form HTTP requests which are signed
+by AWS S3 credentials managed by the Queue.  Once these requests are completed
+the list of `ETag` values returned by the requests must be passed to the
+queue's `completeArtifact` method.
+
+**S3 artifacts** (DEPRECATED) are useful for static files which will be
+stored on S3. When creating an S3 artifact the queue will return a
+pre-signed URL to which you can do a `PUT` request to upload your
+artifact. Note that `PUT` request **must** specify the `content-length`
+header and **must** give the `content-type` header the same value as in
+the request to `createArtifact`.
+
+**Azure artifacts** (DEPRECATED) are stored in the _Azure Blob Storage_ service,
+which given the consistency guarantees and API interface offered by Azure
+is more suitable for artifacts that will be modified during the execution
+of the task. For example docker-worker has a feature that persists the
+task log to Azure Blob Storage every few seconds creating a somewhat
+live log. A request to create an Azure artifact will return a URL
+featuring a [Shared-Access-Signature](http://msdn.microsoft.com/en-us/library/azure/dn140256.aspx),
+refer to MSDN for further information on how to use these.
+**Warning: azure artifact is currently an experimental feature subject
+to changes and data-drops.**
+
+**Reference artifacts**, only consist of meta-data which the queue will
+store for you. These artifacts really only have a `url` property and
+when the artifact is requested the client will be redirected to the URL
+provided with a `303` (See Other) redirect. Please note that we cannot
+delete artifacts you upload to other services; we can only delete the
+reference to the artifact, when it expires.
+
+**Error artifacts**, only consist of meta-data which the queue will
+store for you. These artifacts are only meant to indicate that the
+worker or the task failed to generate a specific artifact that it
+would otherwise have uploaded. For example docker-worker will upload an
+error artifact, if the file it was supposed to upload doesn't exist or
+turns out to be a directory. Clients requesting an error artifact will
+get a `424` (Failed Dependency) response. This is mainly designed to
+ensure that dependent tasks can distinguish between artifacts that were
+supposed to be generated and artifacts for which the name is misspelled.
+
+**Artifact immutability**, generally speaking you cannot overwrite an
+artifact once created. But if you repeat the request with the same
+properties the request will succeed as the operation is idempotent.
+This is useful if you need to refresh a signed URL while uploading.
+Do not abuse this to overwrite artifacts created by another entity,
+such as a worker-host overwriting an artifact created by worker-code.
+
+As a special case the `url` property on _reference artifacts_ can be
+updated. You should only use this to update the `url` property for
+reference artifacts your process has created.
+
+
+
+Takes the following arguments:
+
+  * `taskId`
+  * `runId`
+  * `name`
+
+Required [input schema](v1/post-artifact-request.json#)
+
+Required [output schema](v1/post-artifact-response.json#)
+
+```python
+# Sync calls
+queue.createArtifact(taskId, runId, name, payload) # -> result
+queue.createArtifact(payload, taskId='value', runId='value', name='value') # -> result
+# Async call
+await asyncQueue.createArtifact(taskId, runId, name, payload) # -> result
+await asyncQueue.createArtifact(payload, taskId='value', runId='value', name='value') # -> result
+```
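+
+As a sketch, the simplest case is a `reference` artifact; the field values are illustrative and v1/post-artifact-request.json# is authoritative:
+
+```python
+queue.createArtifact(taskId, runId, 'public/logs/live.log', {
+    'storageType': 'reference',
+    'expires': '2019-10-31T00:00:00.000Z',
+    'contentType': 'text/plain',
+    'url': 'https://example.com/live.log',
+})
+```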
+
+#### Complete Artifact
+This endpoint finalises an upload done through the blob `storageType`.
+The queue will ensure that the task/run is still allowing artifacts
+to be uploaded.  For single-part S3 blob artifacts, this endpoint
+will simply ensure the artifact is present in S3.  For multipart S3
+artifacts, the endpoint will perform the commit step of the multipart
+upload flow.  As the final step for both multi and single part artifacts,
+the `present` entity field will be set to `true` to reflect that the
+artifact is now present and a message published to pulse.  NOTE: This
+endpoint *must* be called for all artifacts of storageType 'blob'.
+
+
+
+Takes the following arguments:
+
+  * `taskId`
+  * `runId`
+  * `name`
+
+Required [input schema](v1/put-artifact-request.json#)
+
+```python
+# Sync calls
+queue.completeArtifact(taskId, runId, name, payload) # -> None
+queue.completeArtifact(payload, taskId='value', runId='value', name='value') # -> None
+# Async call
+await asyncQueue.completeArtifact(taskId, runId, name, payload) # -> None
+await asyncQueue.completeArtifact(payload, taskId='value', runId='value', name='value') # -> None
+```
+
+#### Get Artifact from Run
+Get artifact by `<name>` from a specific run.
+
+**Public Artifacts**, in order to get an artifact you need the scope
+`queue:get-artifact:<name>`, where `<name>` is the name of the artifact.
+But if the artifact `name` starts with `public/`, authentication and
+authorization is not necessary to fetch the artifact.
+
+**API Clients**, this method will redirect you to the artifact, if it is
+stored externally. Either way, the response may not be JSON. So API
+client users might want to generate a signed URL for this end-point and
+use that URL with an HTTP client that can handle responses correctly.
+
+**Downloading artifacts**
+There are some special considerations for those http clients which download
+artifacts.  This api endpoint is designed to be compatible with an HTTP 1.1
+compliant client, but has extra features to ensure the download is valid.
+It is strongly recommended that consumers use either taskcluster-lib-artifact (JS),
+taskcluster-lib-artifact-go (Go) or the CLI written in Go to interact with
+artifacts.
+
+In order to download an artifact the following must be done:
+
+1. Obtain queue url.  Building a signed url with a taskcluster client is
+recommended
+1. Make a GET request which does not follow redirects
+1. In all cases, if specified, the
+x-taskcluster-location-{content,transfer}-{sha256,length} values must be
+validated to be equal to the Content-Length and SHA256 checksum of the
+final artifact downloaded, as well as of any intermediate redirects
+1. If this response is a 500-series error, retry using an exponential
+backoff.  No more than 5 retries should be attempted
+1. If this response is a 400-series error, treat it appropriately for
+your context.  This might be an error in responding to this request or
+an Error storage type body.  This request should not be retried.
+1. If this response is a 200-series response, the response body is the artifact.
+If the x-taskcluster-location-{content,transfer}-{sha256,length} and
+x-taskcluster-location-content-encoding are specified, they should match
+this response body
+1. If the response type is a 300-series redirect, the artifact will be at the
+location specified by the `Location` header.  There are multiple artifact storage
+types which use a 300-series redirect.
+1. For all redirects followed, the user must verify that the content-sha256, content-length,
+transfer-sha256, transfer-length and content-encoding match every further request.  The final
+artifact must also be validated against the values specified in the original queue response
+1. Caching of requests with an x-taskcluster-artifact-storage-type value of `reference`
+must not occur
+1. A request which has x-taskcluster-artifact-storage-type value of `blob` and does not
+have x-taskcluster-location-content-sha256 or x-taskcluster-location-content-length
+must be treated as an error
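+
+A minimal sketch of the flow above (retries and transfer-length checks are
+omitted for brevity; prefer taskcluster-lib-artifact in real code):
+
+```python
+# Sketch only: follow redirects manually and validate the content sha256
+# advertised by the original queue response.
+import hashlib
+import requests
+
+def download_artifact(signed_queue_url):
+    first = requests.get(signed_queue_url, allow_redirects=False)
+    expected_sha256 = first.headers.get('x-taskcluster-location-content-sha256')
+    resp = first
+    while 300 <= resp.status_code < 400:
+        # Step through each redirect ourselves so headers can be checked.
+        resp = requests.get(resp.headers['Location'], allow_redirects=False)
+    resp.raise_for_status()
+    body = resp.content
+    if expected_sha256 and hashlib.sha256(body).hexdigest() != expected_sha256:
+        raise ValueError('content sha256 mismatch')
+    return body
+```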
+
+**Headers**
+The following important headers are set on the response to this method:
+
+* location: the url of the artifact if a redirect is to be performed
+* x-taskcluster-artifact-storage-type: the storage type.  Example: blob, s3, error
+
+The following important headers are set on responses to this method for Blob artifacts
+
+* x-taskcluster-location-content-sha256: the SHA256 of the artifact
+*after* any content-encoding is undone.  Sha256 is hex encoded (e.g. [0-9A-Fa-f]{64})
+* x-taskcluster-location-content-length: the number of bytes *after* any content-encoding
+is undone
+* x-taskcluster-location-transfer-sha256: the SHA256 of the artifact
+*before* any content-encoding is undone.  This is the SHA256 of what is sent over
+the wire.  Sha256 is hex encoded (e.g. [0-9A-Fa-f]{64})
+* x-taskcluster-location-transfer-length: the number of bytes *before* any content-encoding
+is undone, i.e. the number of bytes sent over the wire
+* x-taskcluster-location-content-encoding: the content-encoding used.  It will either
+be `gzip` or `identity` right now.  This is hardcoded to a value set when the artifact
+was created and no content-negotiation occurs
+* x-taskcluster-location-content-type: the content-type of the artifact
+
+**Caching**: artifacts may be cached in data centers closer to the
+workers in order to reduce bandwidth costs. This can lead to longer
+response times. Caching can be skipped by setting the header
+`x-taskcluster-skip-cache: true`; this should only be used for resources
+where request volume is known to be low and caching is not useful.
+(This feature may be disabled in the future, so use it sparingly!)
+
+
+
+Takes the following arguments:
+
+  * `taskId`
+  * `runId`
+  * `name`
+
+```python
+# Sync calls
+queue.getArtifact(taskId, runId, name) # -> None
+queue.getArtifact(taskId='value', runId='value', name='value') # -> None
+# Async call
+await asyncQueue.getArtifact(taskId, runId, name) # -> None
+await asyncQueue.getArtifact(taskId='value', runId='value', name='value') # -> None
+```
+
+#### Get Artifact from Latest Run
+Get artifact by `<name>` from the last run of a task.
+
+**Public Artifacts**: in order to get an artifact you need the scope
+`queue:get-artifact:<name>`, where `<name>` is the name of the artifact.
+But if the artifact `name` starts with `public/`, authentication and
+authorization are not necessary to fetch the artifact.
+
+**API Clients**: this method will redirect you to the artifact if it is
+stored externally. Either way, the response may not be JSON, so API
+client users might want to generate a signed URL for this end-point and
+use that URL with a normal HTTP client.
+
+**Remark**: this end-point is slightly slower than
+`queue.getArtifact`, so prefer that if you already know the `runId` of
+the latest run. Otherwise, just use the most convenient API end-point.
+
+
+
+Takes the following arguments:
+
+  * `taskId`
+  * `name`
+
+```python
+# Sync calls
+queue.getLatestArtifact(taskId, name) # -> None
+queue.getLatestArtifact(taskId='value', name='value') # -> None
+# Async call
+await asyncQueue.getLatestArtifact(taskId, name) # -> None
+await asyncQueue.getLatestArtifact(taskId='value', name='value') # -> None
+```
+
+#### Get Artifacts from Run
+Returns a list of artifacts and associated meta-data for a given run.
+
+As a task may have many artifacts, paging may be necessary. If this
+end-point returns a `continuationToken`, you should call the end-point
+again with the `continuationToken` as the query-string option
+`continuationToken`.
+
+By default this end-point will list up to 1000 artifacts in a single page;
+you may limit this with the query-string parameter `limit`.
+
+
+
+Takes the following arguments:
+
+  * `taskId`
+  * `runId`
+
+Required [output schema](v1/list-artifacts-response.json#)
+
+```python
+# Sync calls
+queue.listArtifacts(taskId, runId) # -> result
+queue.listArtifacts(taskId='value', runId='value') # -> result
+# Async call
+await asyncQueue.listArtifacts(taskId, runId) # -> result
+await asyncQueue.listArtifacts(taskId='value', runId='value') # -> result
+```
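+
+A hedged sketch using the client's pagination keywords (`paginationHandler`
+and `paginationLimit`), which make the client follow `continuationToken`
+automatically:
+
+```python
+# Sketch only: collect every artifact of a run, 100 per page.
+# `queue`, `taskId` and `runId` as in the examples above.
+artifacts = []
+
+def handle_page(page):
+    artifacts.extend(page['artifacts'])
+
+queue.listArtifacts(taskId, runId,
+                    paginationHandler=handle_page, paginationLimit=100)
+```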
+
+#### Get Artifacts from Latest Run
+Returns a list of artifacts and associated meta-data for the latest run
+from the given task.
+
+As a task may have many artifacts, paging may be necessary. If this
+end-point returns a `continuationToken`, you should call the end-point
+again with the `continuationToken` as the query-string option
+`continuationToken`.
+
+By default this end-point will list up to 1000 artifacts in a single page;
+you may limit this with the query-string parameter `limit`.
+
+
+
+Takes the following arguments:
+
+  * `taskId`
+
+Required [output schema](v1/list-artifacts-response.json#)
+
+```python
+# Sync calls
+queue.listLatestArtifacts(taskId) # -> result
+queue.listLatestArtifacts(taskId='value') # -> result
+# Async call
+await asyncQueue.listLatestArtifacts(taskId) # -> result
+await asyncQueue.listLatestArtifacts(taskId='value') # -> result
+```
+
+#### Get a list of all active provisioners
+Get all active provisioners.
+
+The term "provisioner" is taken broadly to mean anything with a provisionerId.
+This does not necessarily mean there is an associated service performing any
+provisioning activity.
+
+The response is paged. If this end-point returns a `continuationToken`, you
+should call the end-point again with the `continuationToken` as a query-string
+option. By default this end-point will list up to 1000 provisioners in a single
+page. You may limit this with the query-string parameter `limit`.
+
+
+Required [output schema](v1/list-provisioners-response.json#)
+
+```python
+# Sync calls
+queue.listProvisioners() # -> result
+# Async call
+await asyncQueue.listProvisioners() # -> result
+```
+
+#### Get an active provisioner
+Get an active provisioner.
+
+The term "provisioner" is taken broadly to mean anything with a provisionerId.
+This does not necessarily mean there is an associated service performing any
+provisioning activity.
+
+
+
+Takes the following arguments:
+
+  * `provisionerId`
+
+Required [output schema](v1/provisioner-response.json#)
+
+```python
+# Sync calls
+queue.getProvisioner(provisionerId) # -> result
+queue.getProvisioner(provisionerId='value') # -> result
+# Async call
+await asyncQueue.getProvisioner(provisionerId) # -> result
+await asyncQueue.getProvisioner(provisionerId='value') # -> result
+```
+
+#### Update a provisioner
+Declare a provisioner, supplying some details about it.
+
+`declareProvisioner` allows updating one or more properties of a provisioner as long as the required scopes are
+possessed. For example, a request to update the `aws-provisioner-v1`
+provisioner with a body `{description: 'This provisioner is great'}` would require you to have the scope
+`queue:declare-provisioner:aws-provisioner-v1#description`.
+
+The term "provisioner" is taken broadly to mean anything with a provisionerId.
+This does not necessarily mean there is an associated service performing any
+provisioning activity.
+
+
+
+Takes the following arguments:
+
+  * `provisionerId`
+
+Required [input schema](v1/update-provisioner-request.json#)
+
+Required [output schema](v1/provisioner-response.json#)
+
+```python
+# Sync calls
+queue.declareProvisioner(provisionerId, payload) # -> result
+queue.declareProvisioner(payload, provisionerId='value') # -> result
+# Async call
+await asyncQueue.declareProvisioner(provisionerId, payload) # -> result
+await asyncQueue.declareProvisioner(payload, provisionerId='value') # -> result
+```
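+
+For example, a hedged sketch matching the scope pattern described above:
+
+```python
+# Sketch only: requires queue:declare-provisioner:aws-provisioner-v1#description.
+queue.declareProvisioner('aws-provisioner-v1',
+                         {'description': 'This provisioner is great'})
+```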
+
+#### Get Number of Pending Tasks
+Get an approximate number of pending tasks for the given `provisionerId`
+and `workerType`.
+
+The underlying Azure Storage Queues only promise to give us an estimate.
+Furthermore, we cache the result in memory for 20 seconds, so consumers
+should by no means expect this to be an accurate number.
+It is, however, a solid estimate of the number of pending tasks.
+
+
+
+Takes the following arguments:
+
+  * `provisionerId`
+  * `workerType`
+
+Required [output schema](v1/pending-tasks-response.json#)
+
+```python
+# Sync calls
+queue.pendingTasks(provisionerId, workerType) # -> result
+queue.pendingTasks(provisionerId='value', workerType='value') # -> result
+# Async call
+await asyncQueue.pendingTasks(provisionerId, workerType) # -> result
+await asyncQueue.pendingTasks(provisionerId='value', workerType='value') # -> result
+```
+
+#### Get a list of all active worker-types
+Get all active worker-types for the given provisioner.
+
+The response is paged. If this end-point returns a `continuationToken`, you
+should call the end-point again with the `continuationToken` as a query-string
+option. By default this end-point will list up to 1000 worker-types in a single
+page. You may limit this with the query-string parameter `limit`.
+
+
+
+Takes the following arguments:
+
+  * `provisionerId`
+
+Required [output schema](v1/list-workertypes-response.json#)
+
+```python
+# Sync calls
+queue.listWorkerTypes(provisionerId) # -> result
+queue.listWorkerTypes(provisionerId='value') # -> result
+# Async call
+await asyncQueue.listWorkerTypes(provisionerId) # -> result
+await asyncQueue.listWorkerTypes(provisionerId='value') # -> result
+```
+
+#### Get a worker-type
+Get a worker-type from a provisioner.
+
+
+
+Takes the following arguments:
+
+  * `provisionerId`
+  * `workerType`
+
+Required [output schema](v1/workertype-response.json#)
+
+```python
+# Sync calls
+queue.getWorkerType(provisionerId, workerType) # -> result
+queue.getWorkerType(provisionerId='value', workerType='value') # -> result
+# Async call
+await asyncQueue.getWorkerType(provisionerId, workerType) # -> result
+await asyncQueue.getWorkerType(provisionerId='value', workerType='value') # -> result
+```
+
+#### Update a worker-type
+Declare a workerType, supplying some details about it.
+
+`declareWorkerType` allows updating one or more properties of a worker-type as long as the required scopes are
+possessed. For example, a request to update the `gecko-b-1-w2008` worker-type within the `aws-provisioner-v1`
+provisioner with a body `{description: 'This worker type is great'}` would require you to have the scope
+`queue:declare-worker-type:aws-provisioner-v1/gecko-b-1-w2008#description`.
+
+
+
+Takes the following arguments:
+
+  * `provisionerId`
+  * `workerType`
+
+Required [input schema](v1/update-workertype-request.json#)
+
+Required [output schema](v1/workertype-response.json#)
+
+```python
+# Sync calls
+queue.declareWorkerType(provisionerId, workerType, payload) # -> result
+queue.declareWorkerType(payload, provisionerId='value', workerType='value') # -> result
+# Async call
+await asyncQueue.declareWorkerType(provisionerId, workerType, payload) # -> result
+await asyncQueue.declareWorkerType(payload, provisionerId='value', workerType='value') # -> result
+```
+
+#### Get a list of all active workers of a workerType
+Get a list of all active workers of a workerType.
+
+`listWorkers` allows a response to be filtered by quarantined and non-quarantined workers.
+To filter the query, call the end-point with `quarantined` as a query-string option set to
+`true` or `false`.
+
+The response is paged. If this end-point returns a `continuationToken`, you
+should call the end-point again with the `continuationToken` as a query-string
+option. By default this end-point will list up to 1000 workers in a single
+page. You may limit this with the query-string parameter `limit`.
+
+
+
+Takes the following arguments:
+
+  * `provisionerId`
+  * `workerType`
+
+Required [output schema](v1/list-workers-response.json#)
+
+```python
+# Sync calls
+queue.listWorkers(provisionerId, workerType) # -> result
+queue.listWorkers(provisionerId='value', workerType='value') # -> result
+# Async call
+await asyncQueue.listWorkers(provisionerId, workerType) # -> result
+await asyncQueue.listWorkers(provisionerId='value', workerType='value') # -> result
+```
+
+#### Get a worker
+Get a worker from a worker-type.
+
+
+
+Takes the following arguments:
+
+  * `provisionerId`
+  * `workerType`
+  * `workerGroup`
+  * `workerId`
+
+Required [output schema](v1/worker-response.json#)
+
+```python
+# Sync calls
+queue.getWorker(provisionerId, workerType, workerGroup, workerId) # -> result
+queue.getWorker(provisionerId='value', workerType='value', workerGroup='value', workerId='value') # -> result
+# Async call
+await asyncQueue.getWorker(provisionerId, workerType, workerGroup, workerId) # -> result
+await asyncQueue.getWorker(provisionerId='value', workerType='value', workerGroup='value', workerId='value') # -> result
+```
+
+#### Quarantine a worker
+Quarantine a worker.
+
+
+
+Takes the following arguments:
+
+  * `provisionerId`
+  * `workerType`
+  * `workerGroup`
+  * `workerId`
+
+Required [input schema](v1/quarantine-worker-request.json#)
+
+Required [output schema](v1/worker-response.json#)
+
+```python
+# Sync calls
+queue.quarantineWorker(provisionerId, workerType, workerGroup, workerId, payload) # -> result
+queue.quarantineWorker(payload, provisionerId='value', workerType='value', workerGroup='value', workerId='value') # -> result
+# Async call
+await asyncQueue.quarantineWorker(provisionerId, workerType, workerGroup, workerId, payload) # -> result
+await asyncQueue.quarantineWorker(payload, provisionerId='value', workerType='value', workerGroup='value', workerId='value') # -> result
+```
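+
+A hedged sketch (the `quarantineUntil` field is assumed from the
+quarantine-worker-request schema; all identifiers are placeholders):
+
+```python
+# Sketch only: quarantine a misbehaving worker for one day.
+payload = {'quarantineUntil': taskcluster.fromNowJSON('1 day')}
+queue.quarantineWorker('aws-provisioner-v1', 'gecko-b-1-w2008',
+                       'us-west-2', 'i-0123456789abcdef0', payload)
+```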
+
+#### Declare a worker
+Declare a worker, supplying some details about it.
+
+`declareWorker` allows updating one or more properties of a worker as long as the required scopes are
+possessed.
+
+
+
+Takes the following arguments:
+
+  * `provisionerId`
+  * `workerType`
+  * `workerGroup`
+  * `workerId`
+
+Required [input schema](v1/update-worker-request.json#)
+
+Required [output schema](v1/worker-response.json#)
+
+```python
+# Sync calls
+queue.declareWorker(provisionerId, workerType, workerGroup, workerId, payload) # -> result
+queue.declareWorker(payload, provisionerId='value', workerType='value', workerGroup='value', workerId='value') # -> result
+# Async call
+await asyncQueue.declareWorker(provisionerId, workerType, workerGroup, workerId, payload) # -> result
+await asyncQueue.declareWorker(payload, provisionerId='value', workerType='value', workerGroup='value', workerId='value') # -> result
+```
+
+
+
+
+### Exchanges in `taskcluster.QueueEvents`
+```python
+# Create QueueEvents client instance
+import taskcluster
+queueEvents = taskcluster.QueueEvents(options)
+```
+The queue, typically available at `queue.taskcluster.net`, is responsible
+for accepting tasks and tracking their state as they are executed by
+workers, in order to ensure they are eventually resolved.
+
+This document describes AMQP exchanges offered by the queue, which allow
+third-party listeners to monitor tasks as they progress to resolution.
+These exchanges target the following audiences:
+ * Schedulers, who take action after tasks are completed,
+ * Workers, who want to listen for new or canceled tasks (optional),
+ * Tools, which want to update their view as tasks progress.
+
+You'll notice that all the exchanges in this document share the same
+routing key pattern. This makes it very easy to bind to all messages
+about a certain kind of task.
+
+**Task specific routes**: a task can define a task-specific route using
+the `task.routes` property. See the task creation documentation for details
+on the permissions required to provide task-specific routes. If a task has
+the entry `'notify.by-email'` as a task-specific route in
+`task.routes`, all messages about this task will be CC'ed with the
+routing-key `'route.notify.by-email'`.
+
+These routes will always be prefixed `route.`, so they cannot interfere
+with the _primary_ routing key as documented here. Notice that the
+_primary_ routing key is always prefixed `primary.`. This is ensured
+in the routing key reference, so API clients will do this automatically.
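+
+As a hedged sketch, the generated client turns a routing-key dictionary into
+an exchange/pattern pair an AMQP consumer can bind with:
+
+```python
+# Sketch only: a binding for all completed tasks in one task group.
+import taskcluster
+
+queueEvents = taskcluster.QueueEvents()
+binding = queueEvents.taskCompleted({'taskGroupId': 'some-task-group-id'})
+# binding['exchange'] and binding['routingKeyPattern'] can be handed to an
+# AMQP/pulse listener; unspecified keys become '*' or '#' wildcards.
+```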
+
+Please note that the way RabbitMQ works, the message will only arrive
+in your queue once, even though you may have bound to the exchange with
+multiple routing key patterns that match more than one of the CC'ed
+routing keys.
+
+**Delivery guarantees**, most operations on the queue are idempotent,
+which means that if repeated with the same arguments then the requests
+will ensure completion of the operation and return the same response.
+This is useful if the server crashes or the TCP connection breaks, but
+when re-executing an idempotent operation, the queue will also resend
+any related AMQP messages. Hence, messages may be repeated.
+
+This shouldn't be much of a problem, as the best you can achieve using
+confirm messages with AMQP is at-least-once delivery semantics. Hence,
+this only prevents you from obtaining at-most-once delivery semantics.
+
+**Remark**: some messages generated by timeouts may be dropped if the
+server crashes at the wrong time. Ideally, we'll address this in the
+future. For now we suggest you ignore this corner case, and notify us
+if it is of concern to you.
+#### Task Defined Messages
+ * `queueEvents.taskDefined(routingKeyPattern) -> routingKey`
+   * `routingKeyKind` is constant of `primary`  is required  Description: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key.
+   * `taskId` is required  Description: `taskId` for the task this message concerns
+   * `runId` Description: `runId` of latest run for the task, `_` if no run exists for the task.
+   * `workerGroup` Description: `workerGroup` of latest run for the task, `_` if no run exists for the task.
+   * `workerId` Description: `workerId` of latest run for the task, `_` if no run exists for the task.
+   * `provisionerId` is required  Description: `provisionerId` this task is targeted at.
+   * `workerType` is required  Description: `workerType` this task must run on.
+   * `schedulerId` is required  Description: `schedulerId` this task was created by.
+   * `taskGroupId` is required  Description: `taskGroupId` this task was created in.
+   * `reserved` Description: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
+
+#### Task Pending Messages
+ * `queueEvents.taskPending(routingKeyPattern) -> routingKey`
+   * `routingKeyKind` is constant of `primary`  is required  Description: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key.
+   * `taskId` is required  Description: `taskId` for the task this message concerns
+   * `runId` is required  Description: `runId` of latest run for the task, `_` if no run exists for the task.
+   * `workerGroup` Description: `workerGroup` of latest run for the task, `_` if no run exists for the task.
+   * `workerId` Description: `workerId` of latest run for the task, `_` if no run exists for the task.
+   * `provisionerId` is required  Description: `provisionerId` this task is targeted at.
+   * `workerType` is required  Description: `workerType` this task must run on.
+   * `schedulerId` is required  Description: `schedulerId` this task was created by.
+   * `taskGroupId` is required  Description: `taskGroupId` this task was created in.
+   * `reserved` Description: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
+
+#### Task Running Messages
+ * `queueEvents.taskRunning(routingKeyPattern) -> routingKey`
+   * `routingKeyKind` is constant of `primary`  is required  Description: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key.
+   * `taskId` is required  Description: `taskId` for the task this message concerns
+   * `runId` is required  Description: `runId` of latest run for the task, `_` if no run exists for the task.
+   * `workerGroup` is required  Description: `workerGroup` of latest run for the task, `_` if no run exists for the task.
+   * `workerId` is required  Description: `workerId` of latest run for the task, `_` if no run exists for the task.
+   * `provisionerId` is required  Description: `provisionerId` this task is targeted at.
+   * `workerType` is required  Description: `workerType` this task must run on.
+   * `schedulerId` is required  Description: `schedulerId` this task was created by.
+   * `taskGroupId` is required  Description: `taskGroupId` this task was created in.
+   * `reserved` Description: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
+
+#### Artifact Creation Messages
+ * `queueEvents.artifactCreated(routingKeyPattern) -> routingKey`
+   * `routingKeyKind` is constant of `primary`  is required  Description: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key.
+   * `taskId` is required  Description: `taskId` for the task this message concerns
+   * `runId` is required  Description: `runId` of latest run for the task, `_` if no run exists for the task.
+   * `workerGroup` is required  Description: `workerGroup` of latest run for the task, `_` if no run exists for the task.
+   * `workerId` is required  Description: `workerId` of latest run for the task, `_` if no run exists for the task.
+   * `provisionerId` is required  Description: `provisionerId` this task is targeted at.
+   * `workerType` is required  Description: `workerType` this task must run on.
+   * `schedulerId` is required  Description: `schedulerId` this task was created by.
+   * `taskGroupId` is required  Description: `taskGroupId` this task was created in.
+   * `reserved` Description: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
+
+#### Task Completed Messages
+ * `queueEvents.taskCompleted(routingKeyPattern) -> routingKey`
+   * `routingKeyKind` is constant of `primary`  is required  Description: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key.
+   * `taskId` is required  Description: `taskId` for the task this message concerns
+   * `runId` is required  Description: `runId` of latest run for the task, `_` if no run exists for the task.
+   * `workerGroup` is required  Description: `workerGroup` of latest run for the task, `_` if no run exists for the task.
+   * `workerId` is required  Description: `workerId` of latest run for the task, `_` if no run exists for the task.
+   * `provisionerId` is required  Description: `provisionerId` this task is targeted at.
+   * `workerType` is required  Description: `workerType` this task must run on.
+   * `schedulerId` is required  Description: `schedulerId` this task was created by.
+   * `taskGroupId` is required  Description: `taskGroupId` this task was created in.
+   * `reserved` Description: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
+
+#### Task Failed Messages
+ * `queueEvents.taskFailed(routingKeyPattern) -> routingKey`
+   * `routingKeyKind` is constant of `primary`  is required  Description: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key.
+   * `taskId` is required  Description: `taskId` for the task this message concerns
+   * `runId` Description: `runId` of latest run for the task, `_` if no run exists for the task.
+   * `workerGroup` Description: `workerGroup` of latest run for the task, `_` if no run exists for the task.
+   * `workerId` Description: `workerId` of latest run for the task, `_` if no run exists for the task.
+   * `provisionerId` is required  Description: `provisionerId` this task is targeted at.
+   * `workerType` is required  Description: `workerType` this task must run on.
+   * `schedulerId` is required  Description: `schedulerId` this task was created by.
+   * `taskGroupId` is required  Description: `taskGroupId` this task was created in.
+   * `reserved` Description: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
+
+#### Task Exception Messages
+ * `queueEvents.taskException(routingKeyPattern) -> routingKey`
+   * `routingKeyKind` is constant of `primary`  is required  Description: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key.
+   * `taskId` is required  Description: `taskId` for the task this message concerns
+   * `runId` Description: `runId` of latest run for the task, `_` if no run exists for the task.
+   * `workerGroup` Description: `workerGroup` of latest run for the task, `_` if no run exists for the task.
+   * `workerId` Description: `workerId` of latest run for the task, `_` if no run exists for the task.
+   * `provisionerId` is required  Description: `provisionerId` this task is targeted at.
+   * `workerType` is required  Description: `workerType` this task must run on.
+   * `schedulerId` is required  Description: `schedulerId` this task was created by.
+   * `taskGroupId` is required  Description: `taskGroupId` this task was created in.
+   * `reserved` Description: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
+
+#### Task Group Resolved Messages
+ * `queueEvents.taskGroupResolved(routingKeyPattern) -> routingKey`
+   * `routingKeyKind` is constant of `primary`  is required  Description: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key.
+   * `taskGroupId` is required  Description: `taskGroupId` for the task-group this message concerns
+   * `schedulerId` is required  Description: `schedulerId` for the task-group this message concerns
+   * `reserved` Description: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
+
+
+
+
+### Methods in `taskcluster.Secrets`
+```python
+import asyncio  # Only for async
+# Create Secrets client instance
+import taskcluster
+import taskcluster.aio
+
+secrets = taskcluster.Secrets(options)
+# Below only for async instances, assume already in coroutine
+loop = asyncio.get_event_loop()
+session = taskcluster.aio.createSession(loop=loop)
+asyncSecrets = taskcluster.aio.Secrets(options, session=session)
+```
+The secrets service provides a simple key/value store for small bits of secret
+data.  Access is limited by scopes, so values can be considered secret from
+those who do not have the relevant scopes.
+
+Secrets also have an expiration date, and once a secret has expired it can no
+longer be read.  This is useful for short-term secrets such as a temporary
+service credential or a one-time signing key.
+#### Ping Server
+Respond without doing anything.
+This endpoint is used to check that the service is up.
+
+
+```python
+# Sync calls
+secrets.ping() # -> None
+# Async call
+await asyncSecrets.ping() # -> None
+```
+
+#### Set Secret
+Set the secret associated with some key.  If the secret already exists, it is
+updated instead.
+
+
+
+Takes the following arguments:
+
+  * `name`
+
+Required [input schema](v1/secret.json#)
+
+```python
+# Sync calls
+secrets.set(name, payload) # -> None
+secrets.set(payload, name='value') # -> None
+# Async call
+await asyncSecrets.set(name, payload) # -> None
+await asyncSecrets.set(payload, name='value') # -> None
+```
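+
+A hedged sketch (the payload shape follows v1/secret.json: an arbitrary JSON
+`secret` value plus an `expires` timestamp):
+
+```python
+# Sketch only: store a small secret with a one-hour expiry, then read it back.
+import taskcluster
+
+secrets = taskcluster.Secrets()  # credentials read from TASKCLUSTER_* env vars
+payload = {
+    'secret': {'token': 'hunter2'},
+    'expires': taskcluster.fromNowJSON('1 hour'),
+}
+secrets.set('project/example/my-secret', payload)
+value = secrets.get('project/example/my-secret')['secret']
+```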
+
+#### Delete Secret
+Delete the secret associated with some key.
+
+
+
+Takes the following arguments:
+
+  * `name`
+
+```python
+# Sync calls
+secrets.remove(name) # -> None
+secrets.remove(name='value') # -> None
+# Async call
+await asyncSecrets.remove(name) # -> None
+await asyncSecrets.remove(name='value') # -> None
+```
+
+#### Read Secret
+Read the secret associated with some key.  If the secret has recently
+expired, the response code 410 is returned.  If the caller lacks the
+scope necessary to get the secret, the call will fail with a 403 code
+regardless of whether the secret exists.
+
+
+
+Takes the following arguments:
+
+  * `name`
+
+Required [output schema](v1/secret.json#)
+
+```python
+# Sync calls
+secrets.get(name) # -> result
+secrets.get(name='value') # -> result
+# Async call
+await asyncSecrets.get(name) # -> result
+await asyncSecrets.get(name='value') # -> result
+```
+
+#### List Secrets
+List the names of all secrets.
+
+By default this end-point will try to return up to 1000 secret names in one
+request. But it **may return fewer**, even if more secrets are available.
+It may also return a `continuationToken` even though there are no more
+results. However, you can only be sure to have seen all results if you
+keep calling the end-point with the last `continuationToken` until you
+get a result without a `continuationToken`.
+
+If you are not interested in listing all the secrets at once, you may
+use the query-string option `limit` to return fewer.
+
+
+Required [output schema](v1/secret-list.json#)
+
+```python
+# Sync calls
+secrets.list() # -> result
+# Async call
+await asyncSecrets.list() # -> result
+```
+
+
+
+
+### Exchanges in `taskcluster.TreeherderEvents`
+```python
+# Create TreeherderEvents client instance
+import taskcluster
+treeherderEvents = taskcluster.TreeherderEvents(options)
+```
+The taskcluster-treeherder service is responsible for processing
+task events published by TaskCluster Queue and producing job messages
+that are consumable by Treeherder.
+
+This exchange provides job messages that can be consumed by any queue
+attached to the exchange.  This could be a production Treeherder instance,
+a local development environment, or a custom dashboard.
+#### Job Messages
+ * `treeherderEvents.jobs(routingKeyPattern) -> routingKey`
+   * `destination` is required  Description: destination
+   * `project` is required  Description: project
+   * `reserved` Description: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
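+
+A hedged sketch of a binding for a single project's job messages (the
+`destination` key is left unspecified and so becomes a wildcard):
+
+```python
+# Sketch only: listen for Treeherder job messages for mozilla-central.
+binding = treeherderEvents.jobs({'project': 'mozilla-central'})
+```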
+
+
+
+<!-- END OF GENERATED DOCS -->
new file mode 100644
--- /dev/null
+++ b/third_party/python/taskcluster/setup.cfg
@@ -0,0 +1,8 @@
+[nosetests]
+verbosity = 1
+detailed-errors = 1
+
+[egg_info]
+tag_build = 
+tag_date = 0
+
new file mode 100644
--- /dev/null
+++ b/third_party/python/taskcluster/setup.py
@@ -0,0 +1,88 @@
+#!/usr/bin/env python
+
+from setuptools import setup
+from setuptools.command.test import test as TestCommand
+import sys
+
+# The VERSION variable is automagically changed
+# by release.sh.  Make sure you understand how
+# that script works if you want to change this
+VERSION = '4.0.1'
+
+tests_require = [
+    'nose==1.3.7',
+    'nose-exclude==0.5.0',
+    'httmock==1.2.6',
+    'rednose==1.2.1',
+    'mock==1.0.1',
+    'setuptools-lint==0.3',
+    'flake8==2.5.0',
+    'psutil==2.1.3',
+    'hypothesis==3.6.1',
+    'tox==2.3.2',
+    'coverage==4.1b2',
+    'python-dateutil==2.6.0',
+]
+
+# requests has a policy of not breaking apis between major versions
+# http://docs.python-requests.org/en/latest/community/release-process/
+install_requires = [
+    'requests>=2.4.3,<3',
+    'mohawk>=0.3.4,<0.4',
+    'slugid>=1.0.7,<2',
+    'six>=1.10.0,<2',
+]
+
+# from http://testrun.org/tox/latest/example/basic.html
+class Tox(TestCommand):
+    user_options = [('tox-args=', 'a', "Arguments to pass to tox")]
+
+    def initialize_options(self):
+        TestCommand.initialize_options(self)
+        self.tox_args = None
+
+    def finalize_options(self):
+        TestCommand.finalize_options(self)
+        self.test_args = []
+        self.test_suite = True
+
+    def run_tests(self):
+        # import here, cause outside the eggs aren't loaded
+        import tox
+        import shlex
+        args = self.tox_args
+        if args:
+            args = shlex.split(self.tox_args)
+        errno = tox.cmdline(args=args)
+        sys.exit(errno)
+
+if sys.version_info.major == 2:
+    tests_require.extend([
+        'subprocess32==3.2.6',
+    ])
+elif sys.version_info[:2] < (3, 5):
+    raise Exception('This library does not support Python 3 versions below 3.5')
+elif sys.version_info[:2] >= (3, 5):
+    install_requires.extend([
+        'aiohttp>=2.0.0,<4',
+        'async_timeout>=2.0.0,<4',
+    ])
+
+if __name__ == '__main__':
+    setup(
+        name='taskcluster',
+        version=VERSION,
+        description='Python client for Taskcluster',
+        author='John Ford',
+        author_email='jhford@mozilla.com',
+        url='https://github.com/taskcluster/taskcluster-client.py',
+        packages=['taskcluster', 'taskcluster.aio'],
+        install_requires=install_requires,
+        test_suite="nose.collector",
+        tests_require=tests_require,
+        cmdclass={'test': Tox},
+        zip_safe=False,
+        classifiers=['Programming Language :: Python :: 2.7',
+                     'Programming Language :: Python :: 3.5',
+                     'Programming Language :: Python :: 3.6'],
+    )
new file mode 100644
--- /dev/null
+++ b/third_party/python/taskcluster/taskcluster/__init__.py
@@ -0,0 +1,17 @@
+""" Python client for Taskcluster """
+from __future__ import absolute_import, division, print_function, unicode_literals
+
+import logging
+import os
+from .client import createSession  # NOQA
+from taskcluster.utils import *  # NOQA
+from taskcluster.exceptions import *  # NOQA
+from taskcluster._client_importer import *  # NOQA
+
+log = logging.getLogger(__name__)
+
+if os.environ.get('DEBUG_TASKCLUSTER_CLIENT'):
+    log.setLevel(logging.DEBUG)
+    if len(log.handlers) == 0:
+        log.addHandler(logging.StreamHandler())
+log.addHandler(logging.NullHandler())
new file mode 100644
--- /dev/null
+++ b/third_party/python/taskcluster/taskcluster/_client_importer.py
@@ -0,0 +1,17 @@
+from .auth import Auth  # NOQA
+from .authevents import AuthEvents  # NOQA
+from .awsprovisioner import AwsProvisioner  # NOQA
+from .awsprovisionerevents import AwsProvisionerEvents  # NOQA
+from .ec2manager import EC2Manager  # NOQA
+from .github import Github  # NOQA
+from .githubevents import GithubEvents  # NOQA
+from .hooks import Hooks  # NOQA
+from .index import Index  # NOQA
+from .login import Login  # NOQA
+from .notify import Notify  # NOQA
+from .purgecache import PurgeCache  # NOQA
+from .purgecacheevents import PurgeCacheEvents  # NOQA
+from .queue import Queue  # NOQA
+from .queueevents import QueueEvents  # NOQA
+from .secrets import Secrets  # NOQA
+from .treeherderevents import TreeherderEvents  # NOQA
new file mode 100644
--- /dev/null
+++ b/third_party/python/taskcluster/taskcluster/aio/__init__.py
@@ -0,0 +1,16 @@
+""" Python client for Taskcluster """
+
+import logging
+import os
+from .asyncclient import createSession  # NOQA
+from taskcluster.utils import *  # NOQA
+from taskcluster.exceptions import *  # NOQA
+from ._client_importer import *  # NOQA
+
+log = logging.getLogger(__name__)
+
+if os.environ.get('DEBUG_TASKCLUSTER_CLIENT'):
+    log.setLevel(logging.DEBUG)
+    if len(log.handlers) == 0:
+        log.addHandler(logging.StreamHandler())
+log.addHandler(logging.NullHandler())
new file mode 100644
--- /dev/null
+++ b/third_party/python/taskcluster/taskcluster/aio/_client_importer.py
@@ -0,0 +1,17 @@
+from .auth import Auth  # NOQA
+from .authevents import AuthEvents  # NOQA
+from .awsprovisioner import AwsProvisioner  # NOQA
+from .awsprovisionerevents import AwsProvisionerEvents  # NOQA
+from .ec2manager import EC2Manager  # NOQA
+from .github import Github  # NOQA
+from .githubevents import GithubEvents  # NOQA
+from .hooks import Hooks  # NOQA
+from .index import Index  # NOQA
+from .login import Login  # NOQA
+from .notify import Notify  # NOQA
+from .purgecache import PurgeCache  # NOQA
+from .purgecacheevents import PurgeCacheEvents  # NOQA
+from .queue import Queue  # NOQA
+from .queueevents import QueueEvents  # NOQA
+from .secrets import Secrets  # NOQA
+from .treeherderevents import TreeherderEvents  # NOQA
new file mode 100644
--- /dev/null
+++ b/third_party/python/taskcluster/taskcluster/aio/asyncclient.py
@@ -0,0 +1,388 @@
+"""This module is used to interact with taskcluster rest apis"""
+
+from __future__ import absolute_import, division, print_function
+
+import os
+import logging
+import hashlib
+import hmac
+import datetime
+import calendar
+import six
+from six.moves import urllib
+
+import mohawk
+import mohawk.bewit
+import aiohttp
+import asyncio
+
+from .. import exceptions
+from .. import utils
+from ..client import BaseClient
+from . import asyncutils
+
+log = logging.getLogger(__name__)
+
+
+# Default configuration
+_defaultConfig = config = {
+    'credentials': {
+        'clientId': os.environ.get('TASKCLUSTER_CLIENT_ID'),
+        'accessToken': os.environ.get('TASKCLUSTER_ACCESS_TOKEN'),
+        'certificate': os.environ.get('TASKCLUSTER_CERTIFICATE'),
+    },
+    'maxRetries': 5,
+    'signedUrlExpiration': 15 * 60,
+}
+
+
+def createSession(*args, **kwargs):
+    """ Create a new aiohttp session.  This passes through all positional and
+    keyword arguments to the asyncutils.createSession() constructor.
+
+    It's preferred to do something like
+
+        async with createSession(...) as session:
+            queue = Queue(session=session)
+            await queue.ping()
+
+    or
+
+        async with createSession(...) as session:
+            async with Queue(session=session) as queue:
+                await queue.ping()
+
+    in the client code.
+    """
+    return asyncutils.createSession(*args, **kwargs)
+
+
+class AsyncBaseClient(BaseClient):
+    """ Base Class for API Client Classes. Each individual Client class
+    needs to set up its own methods for REST endpoints and Topic Exchange
+    routing key patterns.  The _makeApiCall() and _topicExchange() methods
+    help with this.
+    """
+
+    def __init__(self, *args, **kwargs):
+        super(AsyncBaseClient, self).__init__(*args, **kwargs)
+        self._implicitSession = False
+        if self.session is None:
+            self._implicitSession = True
+
+    def _createSession(self):
+        """ If self.session isn't set, don't create an implicit.
+
+        To avoid `session.close()` warnings at the end of tasks, and
+        various strongly-worded aiohttp warnings about using `async with`,
+        let's set `self.session` to `None` if no session is passed in to
+        `__init__`. The `asyncutils` functions will create a new session
+        per call in that case.
+        """
+        return None
+
+    async def _makeApiCall(self, entry, *args, **kwargs):
+        """ This function is used to dispatch calls to other functions
+        for a given API Reference entry"""
+
+        x = self._processArgs(entry, *args, **kwargs)
+        routeParams, payload, query, paginationHandler, paginationLimit = x
+        route = self._subArgsInRoute(entry, routeParams)
+
+        # TODO: Check for limit being in the Query of the api ref
+        if paginationLimit and 'limit' in entry.get('query', []):
+            query['limit'] = paginationLimit
+
+        if query:
+            _route = route + '?' + urllib.parse.urlencode(query)
+        else:
+            _route = route
+        response = await self._makeHttpRequest(entry['method'], _route, payload)
+
+        if paginationHandler:
+            paginationHandler(response)
+            while response.get('continuationToken'):
+                query['continuationToken'] = response['continuationToken']
+                _route = route + '?' + urllib.parse.urlencode(query)
+                response = await self._makeHttpRequest(entry['method'], _route, payload)
+                paginationHandler(response)
+        else:
+            return response
+
+    async def _makeHttpRequest(self, method, route, payload):
+        """ Make an HTTP Request for the API endpoint.  This method wraps
+        the logic about doing failure retry and passes off the actual work
+        of doing an HTTP request to another method."""
+
+        url = self._joinBaseUrlAndRoute(route)
+        log.debug('Full URL used is: %s', url)
+
+        hawkExt = self.makeHawkExt()
+
+        # Serialize payload if given
+        if payload is not None:
+            payload = utils.dumpJson(payload)
+
+        # Do a loop of retries
+        retry = -1  # we increment first in the loop, so attempt 1 is retry 0
+        retries = self.options['maxRetries']
+        while retry < retries:
+            retry += 1
+            # if this isn't the first retry then we sleep
+            if retry > 0:
+                snooze = utils.calculateSleepTime(retry)
+                log.info('Sleeping %0.2f seconds for exponential backoff', snooze)
+                await asyncio.sleep(snooze)
+            # Construct header
+            if self._hasCredentials():
+                sender = mohawk.Sender(
+                    credentials={
+                        'id': self.options['credentials']['clientId'],
+                        'key': self.options['credentials']['accessToken'],
+                        'algorithm': 'sha256',
+                    },
+                    ext=hawkExt if hawkExt else {},
+                    url=url,
+                    content=payload if payload else '',
+                    content_type='application/json' if payload else '',
+                    method=method,
+                )
+
+                headers = {'Authorization': sender.request_header}
+            else:
+                log.debug('Not using hawk!')
+                headers = {}
+            if payload:
+                # Set header for JSON if payload is given, note that we serialize
+                # outside this loop.
+                headers['Content-Type'] = 'application/json'
+
+            log.debug('Making attempt %d', retry)
+            try:
+                response = await asyncutils.makeSingleHttpRequest(
+                    method, url, payload, headers, session=self.session
+                )
+            except aiohttp.ClientError as rerr:
+                if retry < retries:
+                    log.warn('Retrying because of: %s' % rerr)
+                    continue
+                # raise a connection exception
+                raise exceptions.TaskclusterConnectionError(
+                    "Failed to establish connection",
+                    superExc=rerr
+                )
+
+            status = response.status
+            if status == 204:
+                return None
+
+            # Catch retryable errors and go to the beginning of the loop
+            # to do the retry
+            if 500 <= status and status < 600 and retry < retries:
+                log.warn('Retrying because of a %s status code' % status)
+                continue
+
+            # Throw errors for non-retryable errors
+            if status < 200 or status >= 300:
+                # Parse messages from errors
+                data = {}
+                try:
+                    data = await response.json()
+                except Exception:
+                    pass  # Ignore JSON errors in error messages
+                # Find error message
+                message = "Unknown Server Error"
+                if isinstance(data, dict):
+                    message = data.get('message')
+                else:
+                    if status == 401:
+                        message = "Authentication Error"
+                    elif status == 500:
+                        message = "Internal Server Error"
+                    else:
+                        message = "Unknown Server Error %s\n%s" % (str(status), str(data)[:1024])
+                # Raise TaskclusterAuthFailure if this is an auth issue
+                if status == 401:
+                    raise exceptions.TaskclusterAuthFailure(
+                        message,
+                        status_code=status,
+                        body=data,
+                        superExc=None
+                    )
+                # Raise TaskclusterRestFailure for all other issues
+                raise exceptions.TaskclusterRestFailure(
+                    message,
+                    status_code=status,
+                    body=data,
+                    superExc=None
+                )
+
+            # Try to load JSON
+            try:
+                await response.release()
+                return await response.json()
+            except ValueError:
+                return {"response": response}
+
+        # This code-path should be unreachable
+        assert False, "Error from last retry should have been raised!"
+
+    async def __aenter__(self):
+        if self._implicitSession and not self.session:
+            self.session = createSession()
+        return self
+
+    async def __aexit__(self, *args):
+        if self._implicitSession and self.session:
+            await self.session.close()
+            self.session = None
+
+
+def createApiClient(name, api):
+    attributes = dict(
+        name=name,
+        __doc__=api.get('description'),
+        classOptions={},
+        funcinfo={},
+    )
+
+    copiedOptions = ('baseUrl', 'exchangePrefix')
+    for opt in copiedOptions:
+        if opt in api['reference']:
+            attributes['classOptions'][opt] = api['reference'][opt]
+
+    for entry in api['reference']['entries']:
+        if entry['type'] == 'function':
+            def addApiCall(e):
+                async def apiCall(self, *args, **kwargs):
+                    return await self._makeApiCall(e, *args, **kwargs)
+                return apiCall
+            f = addApiCall(entry)
+
+            docStr = "Call the %s api's %s method.  " % (name, entry['name'])
+
+            if entry['args'] and len(entry['args']) > 0:
+                docStr += "This method takes:\n\n"
+                docStr += '\n'.join(['- ``%s``' % x for x in entry['args']])
+                docStr += '\n\n'
+            else:
+                docStr += "This method takes no arguments.  "
+
+            if 'input' in entry:
+                docStr += "This method takes input ``%s``.  " % entry['input']
+
+            if 'output' in entry:
+                docStr += "This method gives output ``%s``" % entry['output']
+
+            docStr += '\n\nThis method does a ``%s`` to ``%s``.' % (
+                entry['method'].upper(), entry['route'])
+
+            f.__doc__ = docStr
+            attributes['funcinfo'][entry['name']] = entry
+
+        elif entry['type'] == 'topic-exchange':
+            def addTopicExchange(e):
+                def topicExchange(self, *args, **kwargs):
+                    return self._makeTopicExchange(e, *args, **kwargs)
+                return topicExchange
+
+            f = addTopicExchange(entry)
+
+            docStr = 'Generate a routing key pattern for the %s exchange.  ' % entry['exchange']
+            docStr += 'This method takes a given routing key as a string or a '
+            docStr += 'dictionary.  For each given dictionary key, the corresponding '
+            docStr += 'routing key token takes its value.  For routing key tokens '
+            docStr += 'which are not specified by the dictionary, the * or # character '
+            docStr += 'is used depending on whether or not the key allows multiple words.\n\n'
+            docStr += 'This exchange takes the following keys:\n\n'
+            docStr += '\n'.join(['- ``%s``' % x['name'] for x in entry['routingKey']])
+
+            f.__doc__ = docStr
+
+        # Add whichever function we created
+        f.__name__ = str(entry['name'])
+        attributes[entry['name']] = f
+
+    return type(utils.toStr(name), (BaseClient,), attributes)
+
+
+def createTemporaryCredentials(clientId, accessToken, start, expiry, scopes, name=None):
+    """ Create a set of temporary credentials
+
+    Callers should not apply any clock skew; clock drift is accounted for by
+    auth service.
+
+    clientId: the issuing clientId
+    accessToken: the issuer's accessToken
+    start: start time of credentials, as a datetime.datetime
+    expiry: expiration time of credentials, as a datetime.datetime
+    scopes: list of scopes granted
+    name: credential name (optional)
+
+    Returns a dictionary in the form:
+        {'clientId': str, 'accessToken': str, 'certificate': str}
+    """
+
+    now = datetime.datetime.utcnow()
+    now = now - datetime.timedelta(minutes=10)  # Subtract 10 minutes for clock drift
+
+    for scope in scopes:
+        if not isinstance(scope, six.string_types):
+            raise exceptions.TaskclusterFailure('Scope must be string')
+
+    # Credentials can only be valid for 31 days.  I hope that
+    # this is validated on the server somehow...
+
+    if expiry - start > datetime.timedelta(days=31):
+        raise exceptions.TaskclusterFailure('Only 31 days allowed')
+
+    # We multiply times by 1000 because the auth service is JS and as a result
+    # uses milliseconds instead of seconds
+    cert = dict(
+        version=1,
+        scopes=scopes,
+        start=calendar.timegm(start.utctimetuple()) * 1000,
+        expiry=calendar.timegm(expiry.utctimetuple()) * 1000,
+        seed=utils.slugId() + utils.slugId(),
+    )
+
+    # if this is a named temporary credential, include the issuer in the certificate
+    if name:
+        cert['issuer'] = utils.toStr(clientId)
+
+    sig = ['version:' + utils.toStr(cert['version'])]
+    if name:
+        sig.extend([
+            'clientId:' + utils.toStr(name),
+            'issuer:' + utils.toStr(clientId),
+        ])
+    sig.extend([
+        'seed:' + utils.toStr(cert['seed']),
+        'start:' + utils.toStr(cert['start']),
+        'expiry:' + utils.toStr(cert['expiry']),
+        'scopes:'
+    ] + scopes)
+    sigStr = '\n'.join(sig).encode()
+
+    if isinstance(accessToken, six.text_type):
+        accessToken = accessToken.encode()
+    sig = hmac.new(accessToken, sigStr, hashlib.sha256).digest()
+
+    cert['signature'] = utils.encodeStringForB64Header(sig)
+
+    newToken = hmac.new(accessToken, cert['seed'], hashlib.sha256).digest()
+    newToken = utils.makeB64UrlSafe(utils.encodeStringForB64Header(newToken)).replace(b'=', b'')
+
+    return {
+        'clientId': name or clientId,
+        'accessToken': newToken,
+        'certificate': utils.dumpJson(cert),
+    }
+
+
+__all__ = [
+    'createTemporaryCredentials',
+    'config',
+    'BaseClient',
+    'createApiClient',
+]
new file mode 100644
--- /dev/null
+++ b/third_party/python/taskcluster/taskcluster/aio/asyncutils.py
@@ -0,0 +1,116 @@
+from __future__ import absolute_import, division, print_function
+import aiohttp
+import aiohttp.hdrs
+import asyncio
+import async_timeout
+import logging
+import os
+import six
+
+import taskcluster.utils as utils
+import taskcluster.exceptions as exceptions
+
+log = logging.getLogger(__name__)
+
+
+def createSession(*args, **kwargs):
+    return aiohttp.ClientSession(*args, **kwargs)
+
+
+# Useful information: https://www.blog.pythonlibrary.org/2016/07/26/python-3-an-intro-to-asyncio/
+async def makeHttpRequest(method, url, payload, headers, retries=utils.MAX_RETRIES, session=None):
+    """ Make an HTTP request and retry it until success, return request """
+    retry = -1
+    response = None
+    implicit = False
+    if session is None:
+        implicit = True
+        session = aiohttp.ClientSession()
+
+    async def cleanup():
+        # Note: this runs inside a coroutine, so close the implicitly
+        # created session with await rather than run_until_complete().
+        if implicit:
+            await session.close()
+
+    try:
+        while True:
+            retry += 1
+            # if this isn't the first retry then we sleep
+            if retry > 0:
+                snooze = float(retry * retry) / 10.0
+                log.info('Sleeping %0.2f seconds for exponential backoff', snooze)
+                await asyncio.sleep(snooze)
+
+            # Seek payload to start, if it is a file
+            if hasattr(payload, 'seek'):
+                payload.seek(0)
+
+            log.debug('Making attempt %d', retry)
+            try:
+                with async_timeout.timeout(60):
+                    response = await makeSingleHttpRequest(method, url, payload, headers, session)
+            except aiohttp.ClientError as rerr:
+                if retry < retries:
+                    log.warn('Retrying because of: %s' % rerr)
+                    continue
+                # raise a connection exception
+                raise rerr
+            except ValueError as rerr:
+                log.warn('ValueError from aiohttp: redirect to non-http or https')
+                raise rerr
+            except RuntimeError as rerr:
+                log.warn('RuntimeError from aiohttp: session closed')
+                raise rerr
+            # Handle non 2xx status code and retry if possible
+            status = response.status
+            if 500 <= status < 600:
+                if retry < retries:
+                    log.warn('Retrying because of: %d status' % status)
+                    continue
+                else:
+                    raise exceptions.TaskclusterRestFailure("Unknown Server Error", superExc=None)
+            return response
+    finally:
+        await cleanup()
+    # This code-path should be unreachable
+    assert False, "Error from last retry should have been raised!"
+
+
+async def makeSingleHttpRequest(method, url, payload, headers, session=None):
+    method = method.upper()
+    log.debug('Making a %s request to %s', method, url)
+    log.debug('HTTP Headers: %s' % str(headers))
+    log.debug('HTTP Payload: %s (limit 100 char)' % str(payload)[:100])
+    implicit = False
+    if session is None:
+        implicit = True
+        session = aiohttp.ClientSession()
+
+    skip_auto_headers = [aiohttp.hdrs.CONTENT_TYPE]
+
+    try:
+        # https://docs.aiohttp.org/en/stable/client_quickstart.html#passing-parameters-in-urls
+        # we must avoid aiohttp's helpful "requoting" functionality, as it breaks Hawk signatures
+        url = aiohttp.client.URL(url, encoded=True)
+        async with session.request(
+            method, url, data=payload, headers=headers,
+            skip_auto_headers=skip_auto_headers, compress=False
+        ) as resp:
+            response_text = await resp.text()
+            log.debug('Received HTTP Status:    %s' % resp.status)
+            log.debug('Received HTTP Headers: %s' % str(resp.headers))
+            log.debug('Received HTTP Payload: %s (limit 1024 char)' %
+                      six.text_type(response_text)[:1024])
+            return resp
+    finally:
+        if implicit:
+            await session.close()
+
+
+async def putFile(filename, url, contentType, session=None):
+    with open(filename, 'rb') as f:
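+        # fstat() on the open file descriptor gives the exact byte count, so
+        # the Content-Length header below matches the streamed payload.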
+        contentLength = os.fstat(f.fileno()).st_size
+        return await makeHttpRequest('put', url, f, headers={
+            'Content-Length': contentLength,
+            'Content-Type': contentType,
+        }, session=session)
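A minimal usage sketch for the helpers above (not part of the vendored files; the file name and upload URL are hypothetical): `putFile` is driven with an explicit `aiohttp` session so the helper does not create and close a session per call.

```python
import asyncio

import aiohttp

from taskcluster.aio.asyncutils import putFile


async def upload_artifact():
    # Reusing one session across calls avoids per-request connection setup.
    async with aiohttp.ClientSession() as session:
        resp = await putFile(
            'artifact.tar.gz',                 # hypothetical local file
            'https://example.com/signed-put',  # hypothetical signed PUT URL
            'application/gzip',
            session=session,
        )
        return resp.status


print(asyncio.get_event_loop().run_until_complete(upload_artifact()))
```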
new file mode 100644
--- /dev/null
+++ b/third_party/python/taskcluster/taskcluster/aio/auth.py
@@ -0,0 +1,866 @@
+# coding=utf-8
+#####################################################
+# THIS FILE IS AUTOMATICALLY GENERATED. DO NOT EDIT #
+#####################################################
+# noqa: E128,E201
+from .asyncclient import AsyncBaseClient
+from .asyncclient import createApiClient
+from .asyncclient import config
+from .asyncclient import createTemporaryCredentials
+from .asyncclient import createSession
+_defaultConfig = config
+
+
+class Auth(AsyncBaseClient):
+    """
+    Authentication related API end-points for Taskcluster and related
+    services. These API end-points are of interest if you wish to:
+      * Authorize a request signed with Taskcluster credentials,
+      * Manage clients and roles,
+      * Inspect or audit clients and roles,
+      * Gain access to various services guarded by this API.
+
+    Note that in this service "authentication" refers to validating the
+    correctness of the supplied credentials (that the caller possesses the
+    appropriate access token). This service does not provide any kind of user
+    authentication (identifying a particular person).
+
+    ### Clients
+    The authentication service manages _clients_, at a high-level each client
+    consists of a `clientId`, an `accessToken`, scopes, and some metadata.
+    The `clientId` and `accessToken` can be used for authentication when
+    calling Taskcluster APIs.
+
+    The client's scopes control the client's access to Taskcluster resources.
+    The scopes are *expanded* by substituting roles, as defined below.
+
+    ### Roles
+    A _role_ consists of a `roleId`, a set of scopes and a description.
+    Each role constitutes a simple _expansion rule_ that says if you have
+    the scope: `assume:<roleId>` you get the set of scopes the role has.
+    Think of the `assume:<roleId>` as a scope that allows a client to assume
+    a role.
+
+    As in scopes, the `*` Kleene star also has special meaning if it is
+    located at the end of a `roleId`. If you have a role with the following
+    `roleId`: `my-prefix*`, then any client which has a scope starting with
+    `assume:my-prefix` will be allowed to assume the role.
+
+    ### Guarded Services
+    The authentication service also has API end-points for delegating access
+    to some guarded service such as AWS S3, or Azure Table Storage.
+    Generally, we add API end-points to this server when we wish to use
+    Taskcluster credentials to grant access to a third-party service used
+    by many Taskcluster components.
+    """
+
+    classOptions = {
+        "baseUrl": "https://auth.taskcluster.net/v1/"
+    }
+
+    async def ping(self, *args, **kwargs):
+        """
+        Ping Server
+
+        Respond without doing anything.
+        This endpoint is used to check that the service is up.
+
+        This method is ``stable``
+        """
+
+        return await self._makeApiCall(self.funcinfo["ping"], *args, **kwargs)
+
+    async def listClients(self, *args, **kwargs):
+        """
+        List Clients
+
+        Get a list of all clients.  With `prefix`, only clients for which
+        it is a prefix of the clientId are returned.
+
+        By default this end-point will try to return up to 1000 clients in one
+        request. But it **may return fewer, even none**.
+        It may also return a `continuationToken` even though there are no more
+        results. However, you can only be sure to have seen all results if you
+        keep calling `listClients` with the last `continuationToken` until you
+        get a result without a `continuationToken`.
+
+        This method gives output: ``v1/list-clients-response.json#``
+
+        This method is ``stable``
+        """
+
+        return await self._makeApiCall(self.funcinfo["listClients"], *args, **kwargs)
+
+    async def client(self, *args, **kwargs):
+        """
+        Get Client
+
+        Get information about a single client.
+
+        This method gives output: ``v1/get-client-response.json#``
+
+        This method is ``stable``
+        """
+
+        return await self._makeApiCall(self.funcinfo["client"], *args, **kwargs)
+
+    async def createClient(self, *args, **kwargs):
+        """
+        Create Client
+
+        Create a new client and get the `accessToken` for this client.
+        You should store the `accessToken` from this API call as there is no
+        other way to retrieve it.
+
+        If you lose the `accessToken` you can call `resetAccessToken` to reset
+        it, and a new `accessToken` will be returned, but you cannot retrieve the
+        current `accessToken`.
+
+        If a client with the same `clientId` already exists this operation will
+        fail. Use `updateClient` if you wish to update an existing client.
+
+        The caller's scopes must satisfy `scopes`.
+
+        This method takes input: ``v1/create-client-request.json#``
+
+        This method gives output: ``v1/create-client-response.json#``
+
+        This method is ``stable``
+        """
+
+        return await self._makeApiCall(self.funcinfo["createClient"], *args, **kwargs)
+
+    async def resetAccessToken(self, *args, **kwargs):
+        """
+        Reset `accessToken`
+
+        Reset a client's `accessToken`. This will revoke the existing
+        `accessToken`, generate a new `accessToken` and return it from this
+        call.
+
+        There is no way to retrieve an existing `accessToken`, so if you lose it
+        you must reset the accessToken to acquire it again.
+
+        This method gives output: ``v1/create-client-response.json#``
+
+        This method is ``stable``
+        """
+
+        return await self._makeApiCall(self.funcinfo["resetAccessToken"], *args, **kwargs)
+
+    async def updateClient(self, *args, **kwargs):
+        """
+        Update Client
+
+        Update an existing client. The `clientId` and `accessToken` cannot be
+        updated, but `scopes` can be modified.  The caller's scopes must
+        satisfy all scopes being added to the client in the update operation.
+        If no scopes are given in the request, the client's scopes remain
+        unchanged.
+
+        This method takes input: ``v1/create-client-request.json#``
+
+        This method gives output: ``v1/get-client-response.json#``
+
+        This method is ``stable``
+        """
+
+        return await self._makeApiCall(self.funcinfo["updateClient"], *args, **kwargs)
+
+    async def enableClient(self, *args, **kwargs):
+        """
+        Enable Client
+
+        Enable a client that was disabled with `disableClient`.  If the client
+        is already enabled, this does nothing.
+
+        This is typically used by identity providers to re-enable clients that
+        had been disabled when the corresponding identity's scopes changed.
+
+        This method gives output: ``v1/get-client-response.json#``
+
+        This method is ``stable``
+        """
+
+        return await self._makeApiCall(self.funcinfo["enableClient"], *args, **kwargs)
+
+    async def disableClient(self, *args, **kwargs):
+        """
+        Disable Client
+
+        Disable a client.  If the client is already disabled, this does nothing.
+
+        This is typically used by identity providers to disable clients when the
+        corresponding identity's scopes no longer satisfy the client's scopes.
+
+        This method gives output: ``v1/get-client-response.json#``
+
+        This method is ``stable``
+        """
+
+        return await self._makeApiCall(self.funcinfo["disableClient"], *args, **kwargs)
+
+    async def deleteClient(self, *args, **kwargs):
+        """
+        Delete Client
+
+        Delete a client. Please note that any roles related to this client must
+        be deleted independently.
+
+        This method is ``stable``
+        """
+
+        return await self._makeApiCall(self.funcinfo["deleteClient"], *args, **kwargs)
+
+    async def listRoles(self, *args, **kwargs):
+        """
+        List Roles
+
+        Get a list of all roles, each role object also includes the list of
+        scopes it expands to.
+
+        This method gives output: ``v1/list-roles-response.json#``
+
+        This method is ``stable``
+        """
+
+        return await self._makeApiCall(self.funcinfo["listRoles"], *args, **kwargs)
+
+    async def role(self, *args, **kwargs):
+        """
+        Get Role
+
+        Get information about a single role, including the set of scopes that the
+        role expands to.
+
+        This method gives output: ``v1/get-role-response.json#``
+
+        This method is ``stable``
+        """
+
+        return await self._makeApiCall(self.funcinfo["role"], *args, **kwargs)
+
+    async def createRole(self, *args, **kwargs):
+        """
+        Create Role
+
+        Create a new role.
+
+        The caller's scopes must satisfy the new role's scopes.
+
+        If there already exists a role with the same `roleId` this operation
+        will fail. Use `updateRole` to modify an existing role.
+
+        Creation of a role that will generate an infinite expansion will result
+        in an error response.
+
+        This method takes input: ``v1/create-role-request.json#``
+
+        This method gives output: ``v1/get-role-response.json#``
+
+        This method is ``stable``
+        """
+
+        return await self._makeApiCall(self.funcinfo["createRole"], *args, **kwargs)
+
+    async def updateRole(self, *args, **kwargs):
+        """
+        Update Role
+
+        Update an existing role.
+
+        The caller's scopes must satisfy all of the new scopes being added, but
+        need not satisfy all of the client's existing scopes.
+
+        An update of a role that will generate an infinite expansion will result
+        in an error response.
+
+        This method takes input: ``v1/create-role-request.json#``
+
+        This method gives output: ``v1/get-role-response.json#``
+
+        This method is ``stable``
+        """
+
+        return await self._makeApiCall(self.funcinfo["updateRole"], *args, **kwargs)
+
+    async def deleteRole(self, *args, **kwargs):
+        """
+        Delete Role
+
+        Delete a role. This operation will succeed regardless of whether or not
+        the role exists.
+
+        This method is ``stable``
+        """
+
+        return await self._makeApiCall(self.funcinfo["deleteRole"], *args, **kwargs)
+
+    async def expandScopesGet(self, *args, **kwargs):
+        """
+        Expand Scopes
+
+        Return an expanded copy of the given scopeset, with scopes implied by any
+        roles included.
+
+        This call uses the GET method with an HTTP body.  It remains only for
+        backward compatibility.
+
+        This method takes input: ``v1/scopeset.json#``
+
+        This method gives output: ``v1/scopeset.json#``
+
+        This method is ``deprecated``
+        """
+
+        return await self._makeApiCall(self.funcinfo["expandScopesGet"], *args, **kwargs)
+
+    async def expandScopes(self, *args, **kwargs):
+        """
+        Expand Scopes
+
+        Return an expanded copy of the given scopeset, with scopes implied by any
+        roles included.
+
+        This method takes input: ``v1/scopeset.json#``
+
+        This method gives output: ``v1/scopeset.json#``
+
+        This method is ``stable``
+        """
+
+        return await self._makeApiCall(self.funcinfo["expandScopes"], *args, **kwargs)
+
+    async def currentScopes(self, *args, **kwargs):
+        """
+        Get Current Scopes
+
+        Return the expanded scopes available in the request, taking into account all sources
+        of scopes and scope restrictions (temporary credentials, assumeScopes, client scopes,
+        and roles).
+
+        This method gives output: ``v1/scopeset.json#``
+
+        This method is ``stable``
+        """
+
+        return await self._makeApiCall(self.funcinfo["currentScopes"], *args, **kwargs)
+
+    async def awsS3Credentials(self, *args, **kwargs):
+        """
+        Get Temporary Read/Write Credentials S3
+
+        Get temporary AWS credentials for `read-write` or `read-only` access to
+        a given `bucket` and `prefix` within that bucket.
+        The `level` parameter can be `read-write` or `read-only` and determines
+        which type of credentials are returned. Please note that the `level`
+        parameter is required in the scope guarding access.  The bucket name must
+        not contain `.`, as recommended by Amazon.
+
+        This method can only allow access to a whitelisted set of buckets.  To add
+        a bucket to that whitelist, contact the Taskcluster team, who will add it to
+        the appropriate IAM policy.  If the bucket is in a different AWS account, you
+        will also need to add a bucket policy allowing access from the Taskcluster
+        account.  That policy should look like this:
+
+        ```
+        {
+          "Version": "2012-10-17",
+          "Statement": [
+            {
+              "Sid": "allow-taskcluster-auth-to-delegate-access",
+              "Effect": "Allow",
+              "Principal": {
+                "AWS": "arn:aws:iam::692406183521:root"
+              },
+              "Action": [
+                "s3:ListBucket",
+                "s3:GetObject",
+                "s3:PutObject",
+                "s3:DeleteObject",
+                "s3:GetBucketLocation"
+              ],
+              "Resource": [
+                "arn:aws:s3:::<bucket>",
+                "arn:aws:s3:::<bucket>/*"
+              ]
+            }
+          ]
+        }
+        ```
+
+        The credentials are set to expire after an hour, but this behavior is
+        subject to change. Hence, you should always read the `expires` property
+        from the response, if you intend to maintain active credentials in your
+        application.
+
+        Please note that your `prefix` may not start with slash `/`. Such a prefix
+        is allowed on S3, but we forbid it here to discourage bad behavior.
+
+        Also note that if your `prefix` doesn't end in a slash `/`, the STS
+        credentials may allow access to unexpected keys, as S3 does not treat
+        slashes specially.  For example, a prefix of `my-folder` will allow
+        access to `my-folder/file.txt` as expected, but also to `my-folder.txt`,
+        which may not be intended.
+
+        Finally, note that the `PutObjectAcl` call is not allowed.  Passing a canned
+        ACL other than `private` to `PutObject` is treated as a `PutObjectAcl` call, and
+        will result in an access-denied error from AWS.  This limitation is due to a
+        security flaw in Amazon S3 which might otherwise allow indefinite access to
+        uploaded objects.
+
+        **EC2 metadata compatibility**, if the querystring parameter
+        `?format=iam-role-compat` is given, the response will be compatible
+        with the JSON exposed by the EC2 metadata service. This aims to ease
+        compatibility for libraries and tools built to auto-refresh credentials.
+        For details on the format returned by EC2 metadata service see:
+        [EC2 User Guide](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#instance-metadata-security-credentials).
+
+        This method gives output: ``v1/aws-s3-credentials-response.json#``
+
+        This method is ``stable``
+        """
+
+        return await self._makeApiCall(self.funcinfo["awsS3Credentials"], *args, **kwargs)
+
+    async def azureAccounts(self, *args, **kwargs):
+        """
+        List Accounts Managed by Auth
+
+        Retrieve a list of all Azure accounts managed by Taskcluster Auth.
+
+        This method gives output: ``v1/azure-account-list-response.json#``
+
+        This method is ``stable``
+        """
+
+        return await self._makeApiCall(self.funcinfo["azureAccounts"], *args, **kwargs)
+
+    async def azureTables(self, *args, **kwargs):
+        """
+        List Tables in an Account Managed by Auth
+
+        Retrieve a list of all tables in an account.
+
+        This method gives output: ``v1/azure-table-list-response.json#``
+
+        This method is ``stable``
+        """
+
+        return await self._makeApiCall(self.funcinfo["azureTables"], *args, **kwargs)
+
+    async def azureTableSAS(self, *args, **kwargs):
+        """
+        Get Shared-Access-Signature for Azure Table
+
+        Get a shared access signature (SAS) string for use with a specific Azure
+        Table Storage table.
+
+        The `level` parameter can be `read-write` or `read-only` and determines
+        which type of credentials are returned.  If level is read-write, it will create the
+        table if it doesn't already exist.
+
+        This method gives output: ``v1/azure-table-access-response.json#``
+
+        This method is ``stable``
+        """
+
+        return await self._makeApiCall(self.funcinfo["azureTableSAS"], *args, **kwargs)
+
+    async def azureContainers(self, *args, **kwargs):
+        """
+        List containers in an Account Managed by Auth
+
+        Retrieve a list of all containers in an account.
+
+        This method gives output: ``v1/azure-container-list-response.json#``
+
+        This method is ``stable``
+        """
+
+        return await self._makeApiCall(self.funcinfo["azureContainers"], *args, **kwargs)
+
+    async def azureContainerSAS(self, *args, **kwargs):
+        """
+        Get Shared-Access-Signature for Azure Container
+
+        Get a shared access signature (SAS) string for use with a specific Azure
+        Blob Storage container.
+
+        The `level` parameter can be `read-write` or `read-only` and determines
+        which type of credentials are returned.  If level is read-write, it will create the
+        container if it doesn't already exist.
+
+        This method gives output: ``v1/azure-container-response.json#``
+
+        This method is ``stable``
+        """
+
+        return await self._makeApiCall(self.funcinfo["azureContainerSAS"], *args, **kwargs)
+
+    async def sentryDSN(self, *args, **kwargs):
+        """
+        Get DSN for Sentry Project
+
+        Get temporary DSN (access credentials) for a sentry project.
+        The credentials returned can be used with any Sentry client for up to
+        24 hours, after which the credentials will be automatically disabled.
+
+        If the project doesn't exist it will be created, and assigned to the
+        initial team configured for this component. Contact a Sentry admin
+        to have the project transferred to a team you have access to, if needed.
+
+        This method gives output: ``v1/sentry-dsn-response.json#``
+
+        This method is ``stable``
+        """
+
+        return await self._makeApiCall(self.funcinfo["sentryDSN"], *args, **kwargs)
+
+    async def statsumToken(self, *args, **kwargs):
+        """
+        Get Token for Statsum Project
+
+        Get temporary `token` and `baseUrl` for sending metrics to statsum.
+
+        The token is valid for 24 hours; clients should refresh after expiration.
+
+        This method gives output: ``v1/statsum-token-response.json#``
+
+        This method is ``stable``
+        """
+
+        return await self._makeApiCall(self.funcinfo["statsumToken"], *args, **kwargs)
+
+    async def webhooktunnelToken(self, *args, **kwargs):
+        """
+        Get Token for Webhooktunnel Proxy
+
+        Get temporary `token` and `id` for connecting to webhooktunnel.
+        The token is valid for 96 hours; clients should refresh after expiration.
+
+        This method gives output: ``v1/webhooktunnel-token-response.json#``
+
+        This method is ``stable``
+        """
+
+        return await self._makeApiCall(self.funcinfo["webhooktunnelToken"], *args, **kwargs)
+
+    async def authenticateHawk(self, *args, **kwargs):
+        """
+        Authenticate Hawk Request
+
+        Validate the request signature given on input and return list of scopes
+        that the authenticating client has.
+
+        This method is used by other services that wish to rely on Taskcluster
+        credentials for authentication. This way we can use Hawk without having
+        the secret credentials leave this service.
+
+        This method takes input: ``v1/authenticate-hawk-request.json#``
+
+        This method gives output: ``v1/authenticate-hawk-response.json#``
+
+        This method is ``stable``
+        """
+
+        return await self._makeApiCall(self.funcinfo["authenticateHawk"], *args, **kwargs)
+
+    async def testAuthenticate(self, *args, **kwargs):
+        """
+        Test Authentication
+
+        Utility method to test client implementations of Taskcluster
+        authentication.
+
+        Rather than using real credentials, this endpoint accepts requests with
+        clientId `tester` and accessToken `no-secret`. That client's scopes are
+        based on `clientScopes` in the request body.
+
+        The request is validated, with any certificate, authorizedScopes, etc.
+        applied, and the resulting scopes are checked against `requiredScopes`
+        from the request body. On success, the response contains the clientId
+        and scopes as seen by the API method.
+
+        This method takes input: ``v1/test-authenticate-request.json#``
+
+        This method gives output: ``v1/test-authenticate-response.json#``
+
+        This method is ``stable``
+        """
+
+        return await self._makeApiCall(self.funcinfo["testAuthenticate"], *args, **kwargs)
+
+    async def testAuthenticateGet(self, *args, **kwargs):
+        """
+        Test Authentication (GET)
+
+        Utility method similar to `testAuthenticate`, but with the GET method,
+        so it can be used with signed URLs (bewits).
+
+        Rather than using real credentials, this endpoint accepts requests with
+        clientId `tester` and accessToken `no-secret`. That client's scopes are
+        `['test:*', 'auth:create-client:test:*']`.  The call fails if the
+        `test:authenticate-get` scope is not available.
+
+        The request is validated, with any certificate, authorizedScopes, etc.
+        applied, and the resulting scopes are checked, just like any API call.
+        On success, the response contains the clientId and scopes as seen by
+        the API method.
+
+        This method may later be extended to allow specification of client and
+        required scopes via query arguments.
+
+        This method gives output: ``v1/test-authenticate-response.json#``
+
+        This method is ``stable``
+        """
+
+        return await self._makeApiCall(self.funcinfo["testAuthenticateGet"], *args, **kwargs)
+
+    funcinfo = {
+        "authenticateHawk": {
+            'args': [],
+            'input': 'v1/authenticate-hawk-request.json#',
+            'method': 'post',
+            'name': 'authenticateHawk',
+            'output': 'v1/authenticate-hawk-response.json#',
+            'route': '/authenticate-hawk',
+            'stability': 'stable',
+        },
+        "awsS3Credentials": {
+            'args': ['level', 'bucket', 'prefix'],
+            'method': 'get',
+            'name': 'awsS3Credentials',
+            'output': 'v1/aws-s3-credentials-response.json#',
+            'query': ['format'],
+            'route': '/aws/s3/<level>/<bucket>/<prefix>',
+            'stability': 'stable',
+        },
+        "azureAccounts": {
+            'args': [],
+            'method': 'get',
+            'name': 'azureAccounts',
+            'output': 'v1/azure-account-list-response.json#',
+            'route': '/azure/accounts',
+            'stability': 'stable',
+        },
+        "azureContainerSAS": {
+            'args': ['account', 'container', 'level'],
+            'method': 'get',
+            'name': 'azureContainerSAS',
+            'output': 'v1/azure-container-response.json#',
+            'route': '/azure/<account>/containers/<container>/<level>',
+            'stability': 'stable',
+        },
+        "azureContainers": {
+            'args': ['account'],
+            'method': 'get',
+            'name': 'azureContainers',
+            'output': 'v1/azure-container-list-response.json#',
+            'query': ['continuationToken'],
+            'route': '/azure/<account>/containers',
+            'stability': 'stable',
+        },
+        "azureTableSAS": {
+            'args': ['account', 'table', 'level'],
+            'method': 'get',
+            'name': 'azureTableSAS',
+            'output': 'v1/azure-table-access-response.json#',
+            'route': '/azure/<account>/table/<table>/<level>',
+            'stability': 'stable',
+        },
+        "azureTables": {
+            'args': ['account'],
+            'method': 'get',
+            'name': 'azureTables',
+            'output': 'v1/azure-table-list-response.json#',
+            'query': ['continuationToken'],
+            'route': '/azure/<account>/tables',
+            'stability': 'stable',
+        },
+        "client": {
+            'args': ['clientId'],
+            'method': 'get',
+            'name': 'client',
+            'output': 'v1/get-client-response.json#',
+            'route': '/clients/<clientId>',
+            'stability': 'stable',
+        },
+        "createClient": {
+            'args': ['clientId'],
+            'input': 'v1/create-client-request.json#',
+            'method': 'put',
+            'name': 'createClient',
+            'output': 'v1/create-client-response.json#',
+            'route': '/clients/<clientId>',
+            'stability': 'stable',
+        },
+        "createRole": {
+            'args': ['roleId'],
+            'input': 'v1/create-role-request.json#',
+            'method': 'put',
+            'name': 'createRole',
+            'output': 'v1/get-role-response.json#',
+            'route': '/roles/<roleId>',
+            'stability': 'stable',
+        },
+        "currentScopes": {
+            'args': [],
+            'method': 'get',
+            'name': 'currentScopes',
+            'output': 'v1/scopeset.json#',
+            'route': '/scopes/current',
+            'stability': 'stable',
+        },
+        "deleteClient": {
+            'args': ['clientId'],
+            'method': 'delete',
+            'name': 'deleteClient',
+            'route': '/clients/<clientId>',
+            'stability': 'stable',
+        },
+        "deleteRole": {
+            'args': ['roleId'],
+            'method': 'delete',
+            'name': 'deleteRole',
+            'route': '/roles/<roleId>',
+            'stability': 'stable',
+        },
+        "disableClient": {
+            'args': ['clientId'],
+            'method': 'post',
+            'name': 'disableClient',
+            'output': 'v1/get-client-response.json#',
+            'route': '/clients/<clientId>/disable',
+            'stability': 'stable',
+        },
+        "enableClient": {
+            'args': ['clientId'],
+            'method': 'post',
+            'name': 'enableClient',
+            'output': 'v1/get-client-response.json#',
+            'route': '/clients/<clientId>/enable',
+            'stability': 'stable',
+        },
+        "expandScopes": {
+            'args': [],
+            'input': 'v1/scopeset.json#',
+            'method': 'post',
+            'name': 'expandScopes',
+            'output': 'v1/scopeset.json#',
+            'route': '/scopes/expand',
+            'stability': 'stable',
+        },
+        "expandScopesGet": {
+            'args': [],
+            'input': 'v1/scopeset.json#',
+            'method': 'get',
+            'name': 'expandScopesGet',
+            'output': 'v1/scopeset.json#',
+            'route': '/scopes/expand',
+            'stability': 'deprecated',
+        },
+        "listClients": {
+            'args': [],
+            'method': 'get',
+            'name': 'listClients',
+            'output': 'v1/list-clients-response.json#',
+            'query': ['prefix', 'continuationToken', 'limit'],
+            'route': '/clients/',
+            'stability': 'stable',
+        },
+        "listRoles": {
+            'args': [],
+            'method': 'get',
+            'name': 'listRoles',
+            'output': 'v1/list-roles-response.json#',
+            'route': '/roles/',
+            'stability': 'stable',
+        },
+        "ping": {
+            'args': [],
+            'method': 'get',
+            'name': 'ping',
+            'route': '/ping',
+            'stability': 'stable',
+        },
+        "resetAccessToken": {
+            'args': ['clientId'],
+            'method': 'post',
+            'name': 'resetAccessToken',
+            'output': 'v1/create-client-response.json#',
+            'route': '/clients/<clientId>/reset',
+            'stability': 'stable',
+        },
+        "role": {
+            'args': ['roleId'],
+            'method': 'get',
+            'name': 'role',
+            'output': 'v1/get-role-response.json#',
+            'route': '/roles/<roleId>',
+            'stability': 'stable',
+        },
+        "sentryDSN": {
+            'args': ['project'],
+            'method': 'get',
+            'name': 'sentryDSN',
+            'output': 'v1/sentry-dsn-response.json#',
+            'route': '/sentry/<project>/dsn',
+            'stability': 'stable',
+        },
+        "statsumToken": {
+            'args': ['project'],
+            'method': 'get',
+            'name': 'statsumToken',
+            'output': 'v1/statsum-token-response.json#',
+            'route': '/statsum/<project>/token',
+            'stability': 'stable',
+        },
+        "testAuthenticate": {
+            'args': [],
+            'input': 'v1/test-authenticate-request.json#',
+            'method': 'post',
+            'name': 'testAuthenticate',
+            'output': 'v1/test-authenticate-response.json#',
+            'route': '/test-authenticate',
+            'stability': 'stable',
+        },
+        "testAuthenticateGet": {
+            'args': [],
+            'method': 'get',
+            'name': 'testAuthenticateGet',
+            'output': 'v1/test-authenticate-response.json#',
+            'route': '/test-authenticate-get/',
+            'stability': 'stable',
+        },
+        "updateClient": {
+            'args': ['clientId'],
+            'input': 'v1/create-client-request.json#',
+            'method': 'post',
+            'name': 'updateClient',
+            'output': 'v1/get-client-response.json#',
+            'route': '/clients/<clientId>',
+            'stability': 'stable',
+        },
+        "updateRole": {
+            'args': ['roleId'],
+            'input': 'v1/create-role-request.json#',
+            'method': 'post',
+            'name': 'updateRole',
+            'output': 'v1/get-role-response.json#',
+            'route': '/roles/<roleId>',
+            'stability': 'stable',
+        },
+        "webhooktunnelToken": {
+            'args': [],
+            'method': 'get',
+            'name': 'webhooktunnelToken',
+            'output': 'v1/webhooktunnel-token-response.json#',
+            'route': '/webhooktunnel',
+            'stability': 'stable',
+        },
+    }
+
+
+__all__ = ['createTemporaryCredentials', 'config', '_defaultConfig', 'createApiClient', 'createSession', 'Auth']
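The `listClients` docstring above prescribes following `continuationToken` until a page omits it; here is a sketch of that loop (assuming credentials are supplied via the client options and that the response carries a `clients` array, per `v1/list-clients-response.json#`):

```python
import asyncio

from taskcluster.aio import Auth


async def all_clients(options):
    # options is e.g. {'credentials': {'clientId': ..., 'accessToken': ...}}
    auth = Auth(options)
    clients, token = [], None
    while True:
        # Query-string parameters are passed via the `query` keyword.
        query = {'continuationToken': token} if token else {}
        page = await auth.listClients(query=query)
        clients.extend(page['clients'])
        token = page.get('continuationToken')
        if token is None:
            return clients


# asyncio.get_event_loop().run_until_complete(all_clients({...}))
```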
new file mode 100644
--- /dev/null
+++ b/third_party/python/taskcluster/taskcluster/aio/authevents.py
@@ -0,0 +1,178 @@
+# coding=utf-8
+#####################################################
+# THIS FILE IS AUTOMATICALLY GENERATED. DO NOT EDIT #
+#####################################################
+# noqa: E128,E201
+from .asyncclient import AsyncBaseClient
+from .asyncclient import createApiClient
+from .asyncclient import config
+from .asyncclient import createTemporaryCredentials
+from .asyncclient import createSession
+_defaultConfig = config
+
+
+class AuthEvents(AsyncBaseClient):
+    """
+    The auth service, typically available at `auth.taskcluster.net`,
+    is responsible for storing credentials, managing assignment of scopes,
+    and validation of request signatures from other services.
+
+    These exchanges provide notifications when credentials or roles are
+    updated. This is mostly so that multiple instances of the auth service
+    can purge their caches and synchronize state. But you are of course
+    welcome to use these for other purposes, such as monitoring changes.
+    """
+
+    classOptions = {
+        "exchangePrefix": "exchange/taskcluster-auth/v1/"
+    }
+
+    def clientCreated(self, *args, **kwargs):
+        """
+        Client Created Messages
+
+        Message that a new client has been created.
+
+        This exchange outputs: ``v1/client-message.json#``. This exchange takes the following keys:
+
+         * reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as is done automatically by our tooling if not specified.
+        """
+
+        ref = {
+            'exchange': 'client-created',
+            'name': 'clientCreated',
+            'routingKey': [
+                {
+                    'multipleWords': True,
+                    'name': 'reserved',
+                },
+            ],
+            'schema': 'v1/client-message.json#',
+        }
+        return self._makeTopicExchange(ref, *args, **kwargs)
+
+    def clientUpdated(self, *args, **kwargs):
+        """
+        Client Updated Messages
+
+        Message that a client has been updated.
+
+        This exchange outputs: ``v1/client-message.json#``. This exchange takes the following keys:
+
+         * reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as is done automatically by our tooling if not specified.
+        """
+
+        ref = {
+            'exchange': 'client-updated',
+            'name': 'clientUpdated',
+            'routingKey': [
+                {
+                    'multipleWords': True,
+                    'name': 'reserved',
+                },
+            ],
+            'schema': 'v1/client-message.json#',
+        }
+        return self._makeTopicExchange(ref, *args, **kwargs)
+
+    def clientDeleted(self, *args, **kwargs):
+        """
+        Client Deleted Messages
+
+        Message that a client has been deleted.
+
+        This exchange outputs: ``v1/client-message.json#``. This exchange takes the following keys:
+
+         * reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as is done automatically by our tooling if not specified.
+        """
+
+        ref = {
+            'exchange': 'client-deleted',
+            'name': 'clientDeleted',
+            'routingKey': [
+                {
+                    'multipleWords': True,
+                    'name': 'reserved',
+                },
+            ],
+            'schema': 'v1/client-message.json#',
+        }
+        return self._makeTopicExchange(ref, *args, **kwargs)
+
+    def roleCreated(self, *args, **kwargs):
+        """
+        Role Created Messages
+
+        Message that a new role has been created.
+
+        This exchange outputs: ``v1/role-message.json#``. This exchange takes the following keys:
+
+         * reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as is done automatically by our tooling if not specified.
+        """
+
+        ref = {
+            'exchange': 'role-created',
+            'name': 'roleCreated',
+            'routingKey': [
+                {
+                    'multipleWords': True,
+                    'name': 'reserved',
+                },
+            ],
+            'schema': 'v1/role-message.json#',
+        }
+        return self._makeTopicExchange(ref, *args, **kwargs)
+
+    def roleUpdated(self, *args, **kwargs):
+        """
+        Role Updated Messages
+
+        Message that a role has been updated.
+
+        This exchange outputs: ``v1/role-message.json#``. This exchange takes the following keys:
+
+         * reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as is done automatically by our tooling if not specified.
+        """
+
+        ref = {
+            'exchange': 'role-updated',
+            'name': 'roleUpdated',
+            'routingKey': [
+                {
+                    'multipleWords': True,
+                    'name': 'reserved',
+                },
+            ],
+            'schema': 'v1/role-message.json#',
+        }
+        return self._makeTopicExchange(ref, *args, **kwargs)
+
+    def roleDeleted(self, *args, **kwargs):
+        """
+        Role Deleted Messages
+
+        Message that a role has been deleted.
+
+        This exchange outputs: ``v1/role-message.json#``. This exchange takes the following keys:
+
+         * reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as is done automatically by our tooling if not specified.
+        """
+
+        ref = {
+            'exchange': 'role-deleted',
+            'name': 'roleDeleted',
+            'routingKey': [
+                {
+                    'multipleWords': True,
+                    'name': 'reserved',
+                },
+            ],
+            'schema': 'v1/role-message.json#',
+        }
+        return self._makeTopicExchange(ref, *args, **kwargs)
+
+    funcinfo = {
+    }
+
+
+__all__ = ['createTemporaryCredentials', 'config', '_defaultConfig', 'createApiClient', 'createSession', 'AuthEvents']
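Unlike the `Auth` methods, these exchange helpers do no I/O: each returns the exchange name and routing-key pattern needed to bind a Pulse queue. A sketch (assuming, as in the synchronous client, that routing-key entries can be passed as keyword arguments and that the result carries `exchange` and `routingKeyPattern`):

```python
from taskcluster.aio import AuthEvents

events = AuthEvents()
# Match everything; the `reserved` key's own docs recommend `#`.
binding = events.clientCreated(reserved='#')
print(binding['exchange'])           # exchange/taskcluster-auth/v1/client-created
print(binding['routingKeyPattern'])  # '#'
```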
new file mode 100644
--- /dev/null
+++ b/third_party/python/taskcluster/taskcluster/aio/awsprovisioner.py
@@ -0,0 +1,449 @@
+# coding=utf-8
+#####################################################
+# THIS FILE IS AUTOMATICALLY GENERATED. DO NOT EDIT #
+#####################################################
+# noqa: E128,E201
+from .asyncclient import AsyncBaseClient
+from .asyncclient import createApiClient
+from .asyncclient import config
+from .asyncclient import createTemporaryCredentials
+from .asyncclient import createSession
+_defaultConfig = config
+
+
+class AwsProvisioner(AsyncBaseClient):
+    """
+    The AWS Provisioner is responsible for provisioning instances on EC2 for use in
+    Taskcluster.  The provisioner maintains a set of worker configurations which
+    can be managed with an API that is typically available at
+    aws-provisioner.taskcluster.net/v1.  This API can also perform basic instance
+    management tasks in addition to maintaining the internal state of worker type
+    configuration information.
+
+    The Provisioner runs at a configurable interval.  Each iteration of the
+    provisioner fetches a current copy of the state that the AWS EC2 API reports.  In
+    each iteration, we ask the Queue how many tasks are pending for that worker
+    type.  Based on the number of tasks pending and the scaling ratio, we may
+    submit requests for new instances.  We use pricing information, capacity and
+    utility factor information to decide which instance type in which region would
+    be the optimal configuration.
+
+    Each EC2 instance type will declare a capacity and utility factor.  Capacity is
+    the number of tasks that a given machine is capable of running concurrently.
+    Utility factor is a relative measure of performance between two instance types.
+    We multiply the utility factor by the spot price to compare instance types and
+    regions when making the bidding choices.
+
+    When a new EC2 instance is instantiated, its user data contains a token in
+    `securityToken` that can be used with the `getSecret` method to retrieve
+    the worker's credentials and any needed passwords or other restricted
+    information.  The worker is responsible for deleting the secret after
+    retrieving it, to prevent dissemination of the secret to other processes
+    which can read the instance user data.
+
+    """
+
+    classOptions = {
+        "baseUrl": "https://aws-provisioner.taskcluster.net/v1"
+    }
+
+    async def listWorkerTypeSummaries(self, *args, **kwargs):
+        """
+        List worker types with details
+
+        Return a list of worker types, including some summary information about
+        current capacity for each.  While this list includes all defined worker types,
+        there may be running EC2 instances for deleted worker types that are not
+        included here.  The list is unordered.
+
+        This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/list-worker-types-summaries-response.json#``
+
+        This method is ``stable``
+        """
+
+        return await self._makeApiCall(self.funcinfo["listWorkerTypeSummaries"], *args, **kwargs)
+
+    async def createWorkerType(self, *args, **kwargs):
+        """
+        Create new Worker Type
+
+        Create a worker type.  A worker type contains all the configuration
+        needed for the provisioner to manage the instances.  Each worker type
+        knows which regions and which instance types are allowed for that
+        worker type.  Remember that Capacity is the number of concurrent tasks
+        that can be run on a given EC2 resource and that Utility is the relative
+        performance rate between different instance types.  There is no way to
+        configure different regions to have different sets of instance types,
+        so ensure that all instance types are available in all regions.
+        This function is idempotent.
+
+        Once a worker type is in the provisioner, a background process will
+        begin creating instances for it based on its capacity bounds and its
+        pending task count from the Queue.  It is the worker's responsibility
+        to shut itself down.  The provisioner has a limit (currently 96 hours)
+        for all instances to prevent zombie instances from running indefinitely.
+
+        The provisioner will ensure that all instances created are tagged with
+        aws resource tags containing the provisioner id and the worker type.
+
+        If provided, the secrets in the global, region and instance type sections
+        are available using the secrets api.  If specified, the scopes provided
+        will be used to generate a set of temporary credentials available with
+        the other secrets.
+
+        This method takes input: ``http://schemas.taskcluster.net/aws-provisioner/v1/create-worker-type-request.json#``
+
+        This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-response.json#``
+
+        This method is ``stable``
+        """
+
+        return await self._makeApiCall(self.funcinfo["createWorkerType"], *args, **kwargs)
+
+    async def updateWorkerType(self, *args, **kwargs):
+        """
+        Update Worker Type
+
+        Provide a new copy of a worker type to replace the existing one.
+        This will overwrite the existing worker type definition if there
+        is already a worker type of that name.  This method will return a
+        200 response along with a copy of the worker type definition created
+        Note that if you are using the result of a GET on the worker-type
+        end point that you will need to delete the lastModified and workerType
+        keys from the object returned, since those fields are not allowed
+        the request body for this method
+
+        Otherwise, all input requirements and actions are the same as the
+        create method.
+
+        This method takes input: ``http://schemas.taskcluster.net/aws-provisioner/v1/create-worker-type-request.json#``
+
+        This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-response.json#``
+
+        This method is ``stable``
+        """
+
+        return await self._makeApiCall(self.funcinfo["updateWorkerType"], *args, **kwargs)
+
+    async def workerTypeLastModified(self, *args, **kwargs):
+        """
+        Get Worker Type Last Modified Time
+
+        This method is provided to allow workers to see when they were
+        last modified.  The value provided through UserData can be
+        compared against this value to see if changes ha