/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 4 -*-
 * vim: set ts=8 sts=4 et sw=4 tw=99:
 * This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */

/*
 * [SMDOC] Garbage Collector
 *
 * This code implements an incremental mark-and-sweep garbage collector, with
 * most sweeping carried out in the background on a parallel thread.
 *
 * Full vs. zone GC
 * ----------------
 *
 * The collector can collect all zones at once, or a subset. These types of
 * collection are referred to as a full GC and a zone GC respectively.
 *
 * It is possible for an incremental collection that started out as a full GC to
 * become a zone GC if new zones are created during the course of the
 * collection.
 *
 * Incremental collection
 * ----------------------
 *
 * For a collection to be carried out incrementally the following conditions
 * must be met:
 *  - the collection must be run by calling js::GCSlice() rather than js::GC()
 *  - the GC mode must have been set to JSGC_MODE_INCREMENTAL with
 *    JS_SetGCParameter()
 *  - no thread may have an AutoKeepAtoms instance on the stack
 *
 * The last condition is an engine-internal mechanism to ensure that incremental
 * collection is not carried out without the correct barriers being implemented.
 * For more information see 'Incremental marking' below.
 *
 * If the collection is not incremental, all foreground activity happens inside
 * a single call to GC() or GCSlice(). However, the collection is not complete
 * until the background sweeping activity has finished.
 *
 * An incremental collection proceeds as a series of slices, interleaved with
 * mutator activity, i.e. running JavaScript code. Slices are limited by a time
 * budget. The slice finishes as soon as possible after the requested time has
 * passed.
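 *
 * As a rough sketch of how an embedder might drive this (the exact
 * js::GCSlice() signature is elided here; the public entry points live in
 * jsapi.h):
 *
 *   // Condition 2 above: enable incremental collection.
 *   JS_SetGCParameter(cx, JSGC_MODE, JSGC_MODE_INCREMENTAL);
 *
 *   // Run one slice with a ~10 ms time budget; the collector yields back
 *   // to the mutator as soon as possible after the budget expires.
 *   js::SliceBudget budget(js::TimeBudget(10));
 *   js::GCSlice(cx, ...);  // illustrative only; see the real declaration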
 *
 * Collector states
 * ----------------
 *
 * The collector proceeds through the following states, the current state being
 * held in JSRuntime::gcIncrementalState:
 *
 *  - MarkRoots  - marks the stack and other roots
 *  - Mark       - incrementally marks reachable things
 *  - Sweep      - sweeps zones in groups and continues marking unswept zones
 *  - Finalize   - performs background finalization, concurrent with mutator
 *  - Compact    - incrementally compacts by zone
 *  - Decommit   - performs background decommit and chunk removal
 *
 * The MarkRoots activity always takes place in the first slice. The next two
 * states can take place over one or more slices.
 *
 * In other words an incremental collection proceeds like this:
 *
 * Slice 1:   MarkRoots:  Roots pushed onto the mark stack.
 *            Mark:       The mark stack is processed by popping an element,
 *                        marking it, and pushing its children.
 *
 *          ... JS code runs ...
 *
 * Slice 2:   Mark:       More mark stack processing.
 *
 *          ... JS code runs ...
 *
 * Slice n-1: Mark:       More mark stack processing.
 *
 *          ... JS code runs ...
 *
 * Slice n:   Mark:       Mark stack is completely drained.
 *            Sweep:      Select first group of zones to sweep and sweep them.
 *
 *          ... JS code runs ...
 *
 * Slice n+1: Sweep:      Mark objects in unswept zones that were newly
 *                        identified as alive (see below). Then sweep more zone
 *                        sweep groups.
 *
 *          ... JS code runs ...
 *
 * Slice n+2: Sweep:      Mark objects in unswept zones that were newly
 *                        identified as alive. Then sweep more zones.
 *
 *          ... JS code runs ...
 *
 * Slice m:   Sweep:      Sweeping is finished, and background sweeping
 *                        started on the helper thread.
 *
 *          ... JS code runs, remaining sweeping done on background thread ...
 *
 * When background sweeping finishes the GC is complete.
 *
 * Incremental marking
 * -------------------
 *
 * Incremental collection requires close collaboration with the mutator (i.e.,
 * JS code) to guarantee correctness.
 *
 *  - During an incremental GC, if a memory location (except a root) is written
 *    to, then the value it previously held must be marked. Write barriers
 *    ensure this.
 *
 *  - Any object that is allocated during incremental GC must start out marked.
 *
 *  - Roots are marked in the first slice and hence don't need write barriers.
 *    Roots are things like the C stack and the VM stack.
 *
 * The problem that write barriers solve is that between slices the mutator can
 * change the object graph. We must ensure that it cannot do this in such a way
 * that makes us fail to mark a reachable object (marking an unreachable object
 * is tolerable).
 *
 * We use a snapshot-at-the-beginning algorithm to do this. This means that we
 * promise to mark at least everything that is reachable at the beginning of
 * collection. To implement it we mark the old contents of every non-root memory
 * location written to by the mutator while the collection is in progress, using
 * write barriers. This is described in gc/Barrier.h.
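 *
 * As a minimal illustration (HeapPtrField, its zone() accessor and
 * markPreviousValue() are hypothetical names; the real barrier classes live
 * in gc/Barrier.h):
 *
 *   void HeapPtrField::set(JSObject* newValue) {
 *       // Pre-barrier: while an incremental GC is in progress, mark the
 *       // value this location used to hold so that the snapshot taken at
 *       // the start of the collection remains valid.
 *       if (zone()->needsIncrementalBarrier())
 *           markPreviousValue(value_);
 *       value_ = newValue;
 *   }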
 *
 * Incremental sweeping
 * --------------------
 *
 * Sweeping is difficult to do incrementally because object finalizers must be
 * run at the start of sweeping, before any mutator code runs. The reason is
 * that some objects use their finalizers to remove themselves from caches. If
 * mutator code was allowed to run after the start of sweeping, it could observe
 * the state of the cache and create a new reference to an object that was just
 * about to be destroyed.
 *
 * Sweeping all finalizable objects in one go would introduce long pauses, so
 * instead sweeping is broken up into groups of zones. Zones which are not yet
 * being swept are still marked, so the issue above does not apply.
 *
 * The order of sweeping is restricted by cross compartment pointers - for
 * example say that object |a| from zone A points to object |b| in zone B and
 * neither object was marked when we transitioned to the Sweep phase. Imagine we
 * sweep B first and then return to the mutator. It's possible that the mutator
 * could cause |a| to become alive through a read barrier (perhaps it was a
 * shape that was accessed via a shape table). Then we would need to mark |b|,
 * which |a| points to, but |b| has already been swept.
 *
 * So if there is such a pointer then marking of zone B must not finish before
 * marking of zone A.  Pointers which form a cycle between zones therefore
 * restrict those zones to being swept at the same time, and these are found
 * using Tarjan's algorithm for finding the strongly connected components of a
 * graph.
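 *
 * For example, if |a| points to |b| and some object in zone B also points
 * back into zone A, the two zones form a strongly connected component and
 * are placed in the same sweep group, so they finish marking and start
 * sweeping together.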
 *
 * GC things without finalizers, and things with finalizers that are able to run
 * in the background, are swept on the background thread. This accounts for most
 * of the sweeping work.
 *
 * Reset
 * -----
 *
 * During incremental collection it is possible, although unlikely, for
 * conditions to change such that incremental collection is no longer safe. In
 * this case, the collection is 'reset' by ResetIncrementalGC(). If we are in
 * the mark state, this just stops marking, but if we have started sweeping
 * already, we continue until we have swept the current sweep group. Following a
 * reset, a new non-incremental collection is started.
 *
 * Compacting GC
 * -------------
 *
 * Compacting GC happens at the end of a major GC as part of the last slice.
 * There are three parts:
 *
 *  - Arenas are selected for compaction.
 *  - The contents of those arenas are moved to new arenas.
 *  - All references to moved things are updated.
 *
 * Collecting Atoms
 * ----------------
 *
 * Atoms are collected differently from other GC things. They are contained in
 * a special zone and things in other zones may have pointers to them that are
 * not recorded in the cross compartment pointer map. Each zone holds a bitmap
 * with the atoms it might be keeping alive, and atoms are only collected if
 * they are not included in any zone's atom bitmap. See AtomMarking.cpp for how
 * this bitmap is managed.
 */

#include "gc/GC-inl.h"

#include "mozilla/ArrayUtils.h"
#include "mozilla/DebugOnly.h"
#include "mozilla/MacroForEach.h"
#include "mozilla/MemoryReporting.h"
#include "mozilla/Move.h"
#include "mozilla/Range.h"
#include "mozilla/ScopeExit.h"
#include "mozilla/TimeStamp.h"
#include "mozilla/TypeTraits.h"
#include "mozilla/Unused.h"

#include <ctype.h>
#include <initializer_list>
#include <string.h>
#ifndef XP_WIN
# include <sys/mman.h>
# include <unistd.h>
#endif

#include "jsapi.h"
#include "jsfriendapi.h"
#include "jstypes.h"
#include "jsutil.h"

#include "gc/FindSCCs.h"
#include "gc/FreeOp.h"
#include "gc/GCInternals.h"
#include "gc/GCTrace.h"
#include "gc/Memory.h"
#include "gc/Policy.h"
#include "gc/WeakMap.h"
#include "jit/BaselineJIT.h"
#include "jit/IonCode.h"
#include "jit/JitcodeMap.h"
#include "jit/JitRealm.h"
#include "jit/MacroAssembler.h"
#include "js/SliceBudget.h"
#include "proxy/DeadObjectProxy.h"
#include "util/Windows.h"
#ifdef ENABLE_BIGINT
#include "vm/BigIntType.h"
#endif
#include "vm/Debugger.h"
#include "vm/GeckoProfiler.h"
#include "vm/JSAtom.h"
#include "vm/JSContext.h"
#include "vm/JSObject.h"
#include "vm/JSScript.h"
#include "vm/Printer.h"
#include "vm/ProxyObject.h"
#include "vm/Realm.h"
#include "vm/Shape.h"
#include "vm/StringType.h"
#include "vm/SymbolType.h"
#include "vm/Time.h"
#include "vm/TraceLogging.h"
#include "vm/WrapperObject.h"

#include "gc/Heap-inl.h"
#include "gc/Marking-inl.h"
#include "gc/Nursery-inl.h"
#include "gc/PrivateIterators-inl.h"
#include "gc/Zone-inl.h"
#include "vm/GeckoProfiler-inl.h"
#include "vm/JSObject-inl.h"
#include "vm/JSScript-inl.h"
#include "vm/Stack-inl.h"
#include "vm/StringType-inl.h"

using namespace js;
using namespace js::gc;

using mozilla::ArrayLength;
using mozilla::Maybe;
using mozilla::Swap;
using mozilla::TimeStamp;
using mozilla::TimeDuration;

using JS::AutoGCRooter;

/*
 * Default settings for tuning the GC.  Some of these can be set at runtime.
 * This list is not complete; some tuning parameters are not listed here.
 *
 * If you change the values here, please also consider changing them in
 * modules/libpref/init/all.js where they are duplicated for the Firefox
 * preferences.
 */
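
/*
 * A minimal sketch of overriding one of these defaults at runtime (parameter
 * keys are declared in jsapi.h; the accepted units vary per key, see
 * GCRuntime::setParameter() below):
 *
 *   // Allow up to 50 empty chunks to be retained between GCs.
 *   JS_SetGCParameter(cx, JSGC_MAX_EMPTY_CHUNK_COUNT, 50);
 */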
namespace js {
namespace gc {
namespace TuningDefaults {

    /* JSGC_ALLOCATION_THRESHOLD */
    static const size_t GCZoneAllocThresholdBase = 30 * 1024 * 1024;

    /* JSGC_MAX_MALLOC_BYTES */
    static const size_t MaxMallocBytes = 128 * 1024 * 1024;

    /* JSGC_ALLOCATION_THRESHOLD_FACTOR */
    static const float AllocThresholdFactor = 0.9f;

    /* JSGC_ALLOCATION_THRESHOLD_FACTOR_AVOID_INTERRUPT */
    static const float AllocThresholdFactorAvoidInterrupt = 0.9f;

    /* no parameter */
    static const float MallocThresholdGrowFactor = 1.5f;

    /* no parameter */
    static const float MallocThresholdShrinkFactor = 0.9f;

    /* no parameter */
    static const size_t MallocThresholdLimit = 1024 * 1024 * 1024;

    /* no parameter */
    static const size_t ZoneAllocDelayBytes = 1024 * 1024;

    /* JSGC_DYNAMIC_HEAP_GROWTH */
    static const bool DynamicHeapGrowthEnabled = false;

    /* JSGC_HIGH_FREQUENCY_TIME_LIMIT */
    static const auto HighFrequencyThreshold = 1; // in seconds

    /* JSGC_HIGH_FREQUENCY_LOW_LIMIT */
    static const size_t HighFrequencyLowLimitBytes = 100 * 1024 * 1024;

    /* JSGC_HIGH_FREQUENCY_HIGH_LIMIT */
    static const size_t HighFrequencyHighLimitBytes = 500 * 1024 * 1024;

    /* JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MAX */
    static const float HighFrequencyHeapGrowthMax = 3.0f;

    /* JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MIN */
    static const float HighFrequencyHeapGrowthMin = 1.5f;

    /* JSGC_LOW_FREQUENCY_HEAP_GROWTH */
    static const float LowFrequencyHeapGrowth = 1.5f;

    /* JSGC_DYNAMIC_MARK_SLICE */
    static const bool DynamicMarkSliceEnabled = false;

    /* JSGC_MIN_EMPTY_CHUNK_COUNT */
    static const uint32_t MinEmptyChunkCount = 1;

    /* JSGC_MAX_EMPTY_CHUNK_COUNT */
    static const uint32_t MaxEmptyChunkCount = 30;

    /* JSGC_SLICE_TIME_BUDGET */
    static const int64_t DefaultTimeBudget = SliceBudget::UnlimitedTimeBudget;

    /* JSGC_MODE */
    static const JSGCMode Mode = JSGC_MODE_INCREMENTAL;

    /* JSGC_COMPACTING_ENABLED */
    static const bool CompactingEnabled = true;

    /* JSGC_NURSERY_FREE_THRESHOLD_FOR_IDLE_COLLECTION */
    static const uint32_t NurseryFreeThresholdForIdleCollection =
        Nursery::NurseryChunkUsableSize / 4;

}}} // namespace js::gc::TuningDefaults

/*
 * We start an incremental collection for a zone when a proportion of its
 * threshold is reached. This is configured by the
 * JSGC_ALLOCATION_THRESHOLD_FACTOR and
 * JSGC_ALLOCATION_THRESHOLD_FACTOR_AVOID_INTERRUPT parameters.
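 *
 * For example, with the default 30 MB allocation threshold and a factor of
 * 0.9 (a worked example using the defaults above), an incremental collection
 * for a zone begins once its GC heap reaches roughly 27 MB, ahead of the full
 * 30 MB trigger threshold.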
 */
static const float MinAllocationThresholdFactor = 0.9f;

/*
 * We may start to collect a zone before its trigger threshold is reached if
 * GCRuntime::maybeGC() is called for that zone or we start collecting other
 * zones. These eager threshold factors are not configurable.
 */
static const float HighFrequencyEagerAllocTriggerFactor = 0.85f;
static const float LowFrequencyEagerAllocTriggerFactor = 0.9f;

/*
 * Don't allow heap growth factors to be set so low that collections could
 * reduce the trigger threshold.
 */
static const float MinHighFrequencyHeapGrowthFactor =
    1.0f / Min(HighFrequencyEagerAllocTriggerFactor, MinAllocationThresholdFactor);
static const float MinLowFrequencyHeapGrowthFactor =
    1.0f / Min(LowFrequencyEagerAllocTriggerFactor, MinAllocationThresholdFactor);
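
/*
 * Worked example: in high-frequency mode the eager trigger fires at 0.85 of a
 * zone's threshold, so the growth factor must be at least 1 / 0.85 (about
 * 1.18). With a smaller factor, the post-GC threshold (growth factor times
 * the retained heap size) multiplied by 0.85 could fall below the retained
 * heap size itself, re-triggering a collection immediately.
 */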

/* Increase the IGC marking slice time if we are in highFrequencyGC mode. */
static const int IGC_MARK_SLICE_MULTIPLIER = 2;

const AllocKind gc::slotsToThingKind[] = {
    // clang-format off
    /*  0 */ AllocKind::OBJECT0,  AllocKind::OBJECT2,  AllocKind::OBJECT2,  AllocKind::OBJECT4,
    /*  4 */ AllocKind::OBJECT4,  AllocKind::OBJECT8,  AllocKind::OBJECT8,  AllocKind::OBJECT8,
    /*  8 */ AllocKind::OBJECT8,  AllocKind::OBJECT12, AllocKind::OBJECT12, AllocKind::OBJECT12,
    /* 12 */ AllocKind::OBJECT12, AllocKind::OBJECT16, AllocKind::OBJECT16, AllocKind::OBJECT16,
    /* 16 */ AllocKind::OBJECT16
    // clang-format on
};
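
// For example, slotsToThingKind[3] == AllocKind::OBJECT4: an object that
// needs three fixed slots is allocated from the four-slot size class.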

// Check that reserved bits of a Cell are compatible with our typical allocators
// since most derived classes will store a pointer in the first word.
static_assert(js::detail::LIFO_ALLOC_ALIGN > JS_BITMASK(Cell::ReservedBits),
              "Cell::ReservedBits should support LifoAlloc");
static_assert(CellAlignBytes > JS_BITMASK(Cell::ReservedBits),
              "Cell::ReservedBits should support gc::Cell");
static_assert(sizeof(uintptr_t) > JS_BITMASK(Cell::ReservedBits),
              "Cell::ReservedBits should support small malloc / aligned globals");
static_assert(js::jit::CodeAlignment > JS_BITMASK(Cell::ReservedBits),
              "Cell::ReservedBits should support JIT code");

static_assert(mozilla::ArrayLength(slotsToThingKind) == SLOTS_TO_THING_KIND_LIMIT,
              "We have defined a slot count for each kind.");

#define CHECK_THING_SIZE(allocKind, traceKind, type, sizedType, bgFinal, nursery, compact) \
    static_assert(sizeof(sizedType) >= SortedArenaList::MinThingSize, \
                  #sizedType " is smaller than SortedArenaList::MinThingSize!"); \
    static_assert(sizeof(sizedType) >= sizeof(FreeSpan), \
                  #sizedType " is smaller than FreeSpan"); \
    static_assert(sizeof(sizedType) % CellAlignBytes == 0, \
                  "Size of " #sizedType " is not a multiple of CellAlignBytes"); \
    static_assert(sizeof(sizedType) >= MinCellSize, \
                  "Size of " #sizedType " is smaller than the minimum size");
FOR_EACH_ALLOCKIND(CHECK_THING_SIZE);
#undef CHECK_THING_SIZE

const uint32_t Arena::ThingSizes[] = {
#define EXPAND_THING_SIZE(allocKind, traceKind, type, sizedType, bgFinal, nursery, compact) \
    sizeof(sizedType),
FOR_EACH_ALLOCKIND(EXPAND_THING_SIZE)
#undef EXPAND_THING_SIZE
};

FreeSpan FreeLists::emptySentinel;

#undef CHECK_THING_SIZE_INNER
#undef CHECK_THING_SIZE

#define OFFSET(type) uint32_t(ArenaHeaderSize + (ArenaSize - ArenaHeaderSize) % sizeof(type))

const uint32_t Arena::FirstThingOffsets[] = {
#define EXPAND_FIRST_THING_OFFSET(allocKind, traceKind, type, sizedType, bgFinal, nursery, compact) \
    OFFSET(sizedType),
FOR_EACH_ALLOCKIND(EXPAND_FIRST_THING_OFFSET)
#undef EXPAND_FIRST_THING_OFFSET
};

#undef OFFSET

#define COUNT(type) uint32_t((ArenaSize - ArenaHeaderSize) / sizeof(type))

const uint32_t Arena::ThingsPerArena[] = {
#define EXPAND_THINGS_PER_ARENA(allocKind, traceKind, type, sizedType, bgFinal, nursery, compact) \
    COUNT(sizedType),
FOR_EACH_ALLOCKIND(EXPAND_THINGS_PER_ARENA)
#undef EXPAND_THINGS_PER_ARENA
};

#undef COUNT
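
// Worked example for the OFFSET and COUNT macros above, using illustrative
// numbers (a 4096-byte arena with a 16-byte header and a 96-byte sizedType):
// the usable space is 4096 - 16 = 4080 bytes, COUNT gives 4080 / 96 = 42
// things per arena, and OFFSET places the first thing at 16 + (4080 % 96) =
// 16 + 48 = 64, so the leftover space becomes padding at the start of the
// arena rather than at the end.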

struct js::gc::FinalizePhase
{
    gcstats::PhaseKind statsPhase;
    AllocKinds kinds;
};

/*
 * Finalization order for objects swept incrementally on the main thread.
 */
static const FinalizePhase ForegroundObjectFinalizePhase = {
    gcstats::PhaseKind::SWEEP_OBJECT, {
        AllocKind::OBJECT0,
        AllocKind::OBJECT2,
        AllocKind::OBJECT4,
        AllocKind::OBJECT8,
        AllocKind::OBJECT12,
        AllocKind::OBJECT16
    }
};

/*
 * Finalization order for GC things swept incrementally on the main thread.
 */
static const FinalizePhase ForegroundNonObjectFinalizePhase = {
    gcstats::PhaseKind::SWEEP_SCRIPT, {
        AllocKind::SCRIPT,
        AllocKind::JITCODE
    }
};

/*
 * Finalization order for GC things swept on the background thread.
 */
static const FinalizePhase BackgroundFinalizePhases[] = {
    {
        gcstats::PhaseKind::SWEEP_SCRIPT, {
            AllocKind::LAZY_SCRIPT
        }
    },
    {
        gcstats::PhaseKind::SWEEP_OBJECT, {
            AllocKind::FUNCTION,
            AllocKind::FUNCTION_EXTENDED,
            AllocKind::OBJECT0_BACKGROUND,
            AllocKind::OBJECT2_BACKGROUND,
            AllocKind::OBJECT4_BACKGROUND,
            AllocKind::OBJECT8_BACKGROUND,
            AllocKind::OBJECT12_BACKGROUND,
            AllocKind::OBJECT16_BACKGROUND
        }
    },
    {
        gcstats::PhaseKind::SWEEP_SCOPE, {
            AllocKind::SCOPE,
        }
    },
    {
        gcstats::PhaseKind::SWEEP_REGEXP_SHARED, {
            AllocKind::REGEXP_SHARED,
        }
    },
    {
        gcstats::PhaseKind::SWEEP_STRING, {
            AllocKind::FAT_INLINE_STRING,
            AllocKind::STRING,
            AllocKind::EXTERNAL_STRING,
            AllocKind::FAT_INLINE_ATOM,
            AllocKind::ATOM,
            AllocKind::SYMBOL,
#ifdef ENABLE_BIGINT
            AllocKind::BIGINT
#endif
        }
    },
    {
        gcstats::PhaseKind::SWEEP_SHAPE, {
            AllocKind::SHAPE,
            AllocKind::ACCESSOR_SHAPE,
            AllocKind::BASE_SHAPE,
            AllocKind::OBJECT_GROUP
        }
    }
};

template<>
JSObject*
ArenaCellIterImpl::get<JSObject>() const
{
    MOZ_ASSERT(!done());
    return reinterpret_cast<JSObject*>(getCell());
}

void
Arena::unmarkAll()
{
    uintptr_t* word = chunk()->bitmap.arenaBits(this);
    memset(word, 0, ArenaBitmapWords * sizeof(uintptr_t));
}

void
Arena::unmarkPreMarkedFreeCells()
{
    for (ArenaFreeCellIter iter(this); !iter.done(); iter.next()) {
        TenuredCell* cell = iter.getCell();
        MOZ_ASSERT(cell->isMarkedBlack());
        cell->unmark();
    }
}

#ifdef DEBUG
void
Arena::checkNoMarkedFreeCells()
{
    for (ArenaFreeCellIter iter(this); !iter.done(); iter.next()) {
        MOZ_ASSERT(!iter.getCell()->isMarkedAny());
    }
}
#endif

/* static */ void
Arena::staticAsserts()
{
    static_assert(size_t(AllocKind::LIMIT) <= 255,
                  "We must be able to fit the allockind into uint8_t.");
    static_assert(mozilla::ArrayLength(ThingSizes) == size_t(AllocKind::LIMIT),
                  "We haven't defined all thing sizes.");
    static_assert(mozilla::ArrayLength(FirstThingOffsets) == size_t(AllocKind::LIMIT),
                  "We haven't defined all offsets.");
    static_assert(mozilla::ArrayLength(ThingsPerArena) == size_t(AllocKind::LIMIT),
                  "We haven't defined all counts.");
}

template<typename T>
inline size_t
Arena::finalize(FreeOp* fop, AllocKind thingKind, size_t thingSize)
{
    /* Enforce requirements on size of T. */
    MOZ_ASSERT(thingSize % CellAlignBytes == 0);
    MOZ_ASSERT(thingSize >= MinCellSize);
    MOZ_ASSERT(thingSize <= 255);

    MOZ_ASSERT(allocated());
    MOZ_ASSERT(thingKind == getAllocKind());
    MOZ_ASSERT(thingSize == getThingSize());
    MOZ_ASSERT(!hasDelayedMarking);
    MOZ_ASSERT(!markOverflow);

    uint_fast16_t firstThing = firstThingOffset(thingKind);
    uint_fast16_t firstThingOrSuccessorOfLastMarkedThing = firstThing;
    uint_fast16_t lastThing = ArenaSize - thingSize;

    FreeSpan newListHead;
    FreeSpan* newListTail = &newListHead;
    size_t nmarked = 0;

    for (ArenaCellIterUnderFinalize i(this); !i.done(); i.next()) {
        T* t = i.get<T>();
        if (t->asTenured().isMarkedAny()) {
            uint_fast16_t thing = uintptr_t(t) & ArenaMask;
            if (thing != firstThingOrSuccessorOfLastMarkedThing) {
                // We just finished passing over one or more free things,
                // so record a new FreeSpan.
                newListTail->initBounds(firstThingOrSuccessorOfLastMarkedThing,
                                        thing - thingSize, this);
                newListTail = newListTail->nextSpanUnchecked(this);
            }
            firstThingOrSuccessorOfLastMarkedThing = thing + thingSize;
            nmarked++;
        } else {
            t->finalize(fop);
            JS_POISON(t, JS_SWEPT_TENURED_PATTERN, thingSize, MemCheckKind::MakeUndefined);
            gcTracer.traceTenuredFinalize(t);
        }
    }

    if (nmarked == 0) {
        // Do nothing. The caller will update the arena appropriately.
        MOZ_ASSERT(newListTail == &newListHead);
        JS_EXTRA_POISON(data, JS_SWEPT_TENURED_PATTERN, sizeof(data), MemCheckKind::MakeUndefined);
        return nmarked;
    }

    MOZ_ASSERT(firstThingOrSuccessorOfLastMarkedThing != firstThing);
    uint_fast16_t lastMarkedThing = firstThingOrSuccessorOfLastMarkedThing - thingSize;
    if (lastThing == lastMarkedThing) {
        // If the last thing was marked, we will have already set the bounds of
        // the final span, and we just need to terminate the list.
        newListTail->initAsEmpty();
    } else {
        // Otherwise, end the list with a span that covers the final stretch of free things.
        newListTail->initFinal(firstThingOrSuccessorOfLastMarkedThing, lastThing, this);
    }

    firstFreeSpan = newListHead;
#ifdef DEBUG
    size_t nfree = numFreeThings(thingSize);
    MOZ_ASSERT(nfree + nmarked == thingsPerArena(thingKind));
#endif
    return nmarked;
}

// Finalize arenas from the src list, releasing empty arenas unless keepArenas
// is ArenaLists::KEEP_ARENAS, and inserting the others into the appropriate
// destination size bins.
template<typename T>
static inline bool
FinalizeTypedArenas(FreeOp* fop,
                    Arena** src,
                    SortedArenaList& dest,
                    AllocKind thingKind,
                    SliceBudget& budget,
                    ArenaLists::KeepArenasEnum keepArenas)
{
    // When operating in the foreground, take the lock at the top.
    Maybe<AutoLockGC> maybeLock;
    if (fop->onMainThread()) {
        maybeLock.emplace(fop->runtime());
    }

    // During background sweeping free arenas are released later on in
    // sweepBackgroundThings().
    MOZ_ASSERT_IF(!fop->onMainThread(), keepArenas == ArenaLists::KEEP_ARENAS);

    size_t thingSize = Arena::thingSize(thingKind);
    size_t thingsPerArena = Arena::thingsPerArena(thingKind);

    while (Arena* arena = *src) {
        *src = arena->next;
        size_t nmarked = arena->finalize<T>(fop, thingKind, thingSize);
        size_t nfree = thingsPerArena - nmarked;

        if (nmarked) {
            dest.insertAt(arena, nfree);
        } else if (keepArenas == ArenaLists::KEEP_ARENAS) {
            arena->chunk()->recycleArena(arena, dest, thingsPerArena);
        } else {
            fop->runtime()->gc.releaseArena(arena, maybeLock.ref());
        }

        budget.step(thingsPerArena);
        if (budget.isOverBudget()) {
            return false;
        }
    }

    return true;
}

/*
 * Finalize the arenas on the |src| list, inserting swept arenas into the
 * |dest| sorted list. Returns false if the slice budget was exhausted before
 * all arenas were finalized.
 */
static bool
FinalizeArenas(FreeOp* fop,
               Arena** src,
               SortedArenaList& dest,
               AllocKind thingKind,
               SliceBudget& budget,
               ArenaLists::KeepArenasEnum keepArenas)
{
    switch (thingKind) {
#define EXPAND_CASE(allocKind, traceKind, type, sizedType, bgFinal, nursery, compact) \
      case AllocKind::allocKind: \
        return FinalizeTypedArenas<type>(fop, src, dest, thingKind, budget, keepArenas);
FOR_EACH_ALLOCKIND(EXPAND_CASE)
#undef EXPAND_CASE

      default:
        MOZ_CRASH("Invalid alloc kind");
    }
}

Chunk*
ChunkPool::pop()
{
    MOZ_ASSERT(bool(head_) == bool(count_));
    if (!count_) {
        return nullptr;
    }
    return remove(head_);
}

void
ChunkPool::push(Chunk* chunk)
{
    MOZ_ASSERT(!chunk->info.next);
    MOZ_ASSERT(!chunk->info.prev);

    chunk->info.next = head_;
    if (head_) {
        head_->info.prev = chunk;
    }
    head_ = chunk;
    ++count_;
}

Chunk*
ChunkPool::remove(Chunk* chunk)
{
    MOZ_ASSERT(count_ > 0);
    MOZ_ASSERT(contains(chunk));

    if (head_ == chunk) {
        head_ = chunk->info.next;
    }
    if (chunk->info.prev) {
        chunk->info.prev->info.next = chunk->info.next;
    }
    if (chunk->info.next) {
        chunk->info.next->info.prev = chunk->info.prev;
    }
    chunk->info.next = chunk->info.prev = nullptr;
    --count_;

    return chunk;
}

#ifdef DEBUG
bool
ChunkPool::contains(Chunk* chunk) const
{
    verify();
    for (Chunk* cursor = head_; cursor; cursor = cursor->info.next) {
        if (cursor == chunk) {
            return true;
        }
    }
    return false;
}

bool
ChunkPool::verify() const
{
    MOZ_ASSERT(bool(head_) == bool(count_));
    uint32_t count = 0;
    for (Chunk* cursor = head_; cursor; cursor = cursor->info.next, ++count) {
        MOZ_ASSERT_IF(cursor->info.prev, cursor->info.prev->info.next == cursor);
        MOZ_ASSERT_IF(cursor->info.next, cursor->info.next->info.prev == cursor);
    }
    MOZ_ASSERT(count_ == count);
    return true;
}
#endif

void
ChunkPool::Iter::next()
{
    MOZ_ASSERT(!done());
    current_ = current_->info.next;
}

ChunkPool
GCRuntime::expireEmptyChunkPool(const AutoLockGC& lock)
{
    MOZ_ASSERT(emptyChunks(lock).verify());
    MOZ_ASSERT(tunables.minEmptyChunkCount(lock) <= tunables.maxEmptyChunkCount());

    ChunkPool expired;
    while (emptyChunks(lock).count() > tunables.minEmptyChunkCount(lock)) {
        Chunk* chunk = emptyChunks(lock).pop();
        prepareToFreeChunk(chunk->info);
        expired.push(chunk);
    }

    MOZ_ASSERT(expired.verify());
    MOZ_ASSERT(emptyChunks(lock).verify());
    MOZ_ASSERT(emptyChunks(lock).count() <= tunables.maxEmptyChunkCount());
    MOZ_ASSERT(emptyChunks(lock).count() <= tunables.minEmptyChunkCount(lock));
    return expired;
}

static void
FreeChunkPool(ChunkPool& pool)
{
    for (ChunkPool::Iter iter(pool); !iter.done();) {
        Chunk* chunk = iter.get();
        iter.next();
        pool.remove(chunk);
        MOZ_ASSERT(!chunk->info.numArenasFreeCommitted);
        UnmapPages(static_cast<void*>(chunk), ChunkSize);
    }
    MOZ_ASSERT(pool.count() == 0);
}

void
GCRuntime::freeEmptyChunks(const AutoLockGC& lock)
{
    FreeChunkPool(emptyChunks(lock));
}

inline void
GCRuntime::prepareToFreeChunk(ChunkInfo& info)
{
    MOZ_ASSERT(numArenasFreeCommitted >= info.numArenasFreeCommitted);
    numArenasFreeCommitted -= info.numArenasFreeCommitted;
    stats().count(gcstats::COUNT_DESTROY_CHUNK);
#ifdef DEBUG
    /*
     * Let FreeChunkPool detect a missing prepareToFreeChunk call before it
     * frees the chunk.
     */
    info.numArenasFreeCommitted = 0;
#endif
}

inline void
GCRuntime::updateOnArenaFree()
{
    ++numArenasFreeCommitted;
}

void
Chunk::addArenaToFreeList(JSRuntime* rt, Arena* arena)
{
    MOZ_ASSERT(!arena->allocated());
    arena->next = info.freeArenasHead;
    info.freeArenasHead = arena;
    ++info.numArenasFreeCommitted;
    ++info.numArenasFree;
    rt->gc.updateOnArenaFree();
}

void
Chunk::addArenaToDecommittedList(const Arena* arena)
{
    ++info.numArenasFree;
    decommittedArenas.set(Chunk::arenaIndex(arena->address()));
}

void
Chunk::recycleArena(Arena* arena, SortedArenaList& dest, size_t thingsPerArena)
{
    arena->setAsFullyUnused();
    dest.insertAt(arena, thingsPerArena);
}

void
Chunk::releaseArena(JSRuntime* rt, Arena* arena, const AutoLockGC& lock)
{
    MOZ_ASSERT(arena->allocated());
    MOZ_ASSERT(!arena->hasDelayedMarking);

    arena->release(lock);
    addArenaToFreeList(rt, arena);
    updateChunkListAfterFree(rt, lock);
}

bool
Chunk::decommitOneFreeArena(JSRuntime* rt, AutoLockGC& lock)
{
    MOZ_ASSERT(info.numArenasFreeCommitted > 0);
    Arena* arena = fetchNextFreeArena(rt);
    updateChunkListAfterAlloc(rt, lock);

    bool ok;
    {
        AutoUnlockGC unlock(lock);
        ok = MarkPagesUnused(arena, ArenaSize);
    }

    if (ok) {
        addArenaToDecommittedList(arena);
    } else {
        addArenaToFreeList(rt, arena);
    }
    updateChunkListAfterFree(rt, lock);

    return ok;
}

void
Chunk::decommitAllArenasWithoutUnlocking(const AutoLockGC& lock)
{
    for (size_t i = 0; i < ArenasPerChunk; ++i) {
        if (decommittedArenas.get(i) || arenas[i].allocated()) {
            continue;
        }

        if (MarkPagesUnused(&arenas[i], ArenaSize)) {
            info.numArenasFreeCommitted--;
            decommittedArenas.set(i);
        }
    }
}

void
Chunk::updateChunkListAfterAlloc(JSRuntime* rt, const AutoLockGC& lock)
{
    if (MOZ_UNLIKELY(!hasAvailableArenas())) {
        rt->gc.availableChunks(lock).remove(this);
        rt->gc.fullChunks(lock).push(this);
    }
}

void
Chunk::updateChunkListAfterFree(JSRuntime* rt, const AutoLockGC& lock)
{
    if (info.numArenasFree == 1) {
        rt->gc.fullChunks(lock).remove(this);
        rt->gc.availableChunks(lock).push(this);
    } else if (!unused()) {
        MOZ_ASSERT(rt->gc.availableChunks(lock).contains(this));
    } else {
        MOZ_ASSERT(unused());
        rt->gc.availableChunks(lock).remove(this);
        decommitAllArenas();
        MOZ_ASSERT(info.numArenasFreeCommitted == 0);
        rt->gc.recycleChunk(this, lock);
    }
}

void
GCRuntime::releaseArena(Arena* arena, const AutoLockGC& lock)
{
    arena->zone->usage.removeGCArena();
    arena->chunk()->releaseArena(rt, arena, lock);
}

GCRuntime::GCRuntime(JSRuntime* rt) :
    rt(rt),
    systemZone(nullptr),
    atomsZone(nullptr),
    stats_(rt),
    marker(rt),
    usage(nullptr),
    rootsHash(256),
    nextCellUniqueId_(LargestTaggedNullCellPointer + 1), // Ensure disjoint from null tagged pointers.
    numArenasFreeCommitted(0),
    verifyPreData(nullptr),
    chunkAllocationSinceLastGC(false),
    lastGCTime(ReallyNow()),
    mode(TuningDefaults::Mode),
    numActiveZoneIters(0),
    cleanUpEverything(false),
    grayBufferState(GCRuntime::GrayBufferState::Unused),
    grayBitsValid(false),
    majorGCTriggerReason(JS::gcreason::NO_REASON),
    fullGCForAtomsRequested_(false),
    minorGCNumber(0),
    majorGCNumber(0),
    number(0),
    isFull(false),
    incrementalState(gc::State::NotActive),
    initialState(gc::State::NotActive),
#ifdef JS_GC_ZEAL
    useZeal(false),
#endif
    lastMarkSlice(false),
    safeToYield(true),
    sweepOnBackgroundThread(false),
    blocksToFreeAfterSweeping((size_t) JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE),
    sweepGroupIndex(0),
    sweepGroups(nullptr),
    currentSweepGroup(nullptr),
    sweepZone(nullptr),
    abortSweepAfterCurrentGroup(false),
    startedCompacting(false),
    relocatedArenasToRelease(nullptr),
#ifdef JS_GC_ZEAL
    markingValidator(nullptr),
#endif
    defaultTimeBudget_(TuningDefaults::DefaultTimeBudget),
    incrementalAllowed(true),
    compactingEnabled(TuningDefaults::CompactingEnabled),
    rootsRemoved(false),
#ifdef JS_GC_ZEAL
    zealModeBits(0),
    zealFrequency(0),
    nextScheduled(0),
    deterministicOnly(false),
    incrementalLimit(0),
#endif
    fullCompartmentChecks(false),
    gcCallbackDepth(0),
    alwaysPreserveCode(false),
#ifdef DEBUG
    arenasEmptyAtShutdown(true),
#endif
    lock(mutexid::GCLock),
    allocTask(rt, emptyChunks_.ref()),
    sweepTask(rt),
    decommitTask(rt),
    nursery_(rt),
    storeBuffer_(rt, nursery()),
    blocksToFreeAfterMinorGC((size_t) JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE)
{
    setGCMode(JSGC_MODE_GLOBAL);
}

#ifdef JS_GC_ZEAL

void
GCRuntime::getZealBits(uint32_t* zealBits, uint32_t* frequency, uint32_t* scheduled)
{
    *zealBits = zealModeBits;
    *frequency = zealFrequency;
    *scheduled = nextScheduled;
}

const char gc::ZealModeHelpText[] =
    "  Specifies how zealous the garbage collector should be. Some of these modes can\n"
    "  be set simultaneously, by passing multiple level options, e.g. \"2;4\" will activate\n"
    "  both modes 2 and 4. Modes can be specified by name or number.\n"
    "  \n"
    "  Values:\n"
    "    0:  (None) Normal amount of collection (resets all modes)\n"
    "    1:  (RootsChange) Collect when roots are added or removed\n"
    "    2:  (Alloc) Collect when every N allocations (default: 100)\n"
    "    4:  (VerifierPre) Verify pre write barriers between instructions\n"
    "    7:  (GenerationalGC) Collect the nursery every N nursery allocations\n"
    "    8:  (YieldBeforeMarking) Incremental GC in two slices that yields between\n"
    "        the root marking and marking phases\n"
    "    9:  (YieldBeforeSweeping) Incremental GC in two slices that yields between\n"
    "        the marking and sweeping phases\n"
    "    10: (IncrementalMultipleSlices) Incremental GC in many slices\n"
    "    11: (IncrementalMarkingValidator) Verify incremental marking\n"
    "    12: (ElementsBarrier) Use the individual element post-write barrier\n"
    "        regardless of elements size\n"
    "    13: (CheckHashTablesOnMinorGC) Check internal hashtables on minor GC\n"
    "    14: (Compact) Perform a shrinking collection every N allocations\n"
    "    15: (CheckHeapAfterGC) Walk the heap to check its integrity after every GC\n"
    "    16: (CheckNursery) Check nursery integrity on minor GC\n"
    "    17: (YieldBeforeSweepingAtoms) Incremental GC in two slices that yields\n"
    "        before sweeping the atoms table\n"
    "    18: (CheckGrayMarking) Check gray marking invariants after every GC\n"
    "    19: (YieldBeforeSweepingCaches) Incremental GC in two slices that yields\n"
    "        before sweeping weak caches\n"
    "    20: (YieldBeforeSweepingTypes) Incremental GC in two slices that yields\n"
    "        before sweeping type information\n"
    "    21: (YieldBeforeSweepingObjects) Incremental GC in two slices that yields\n"
    "        before sweeping foreground finalized objects\n"
    "    22: (YieldBeforeSweepingNonObjects) Incremental GC in two slices that yields\n"
    "        before sweeping non-object GC things\n"
    "    23: (YieldBeforeSweepingShapeTrees) Incremental GC in two slices that yields\n"
    "        before sweeping shape trees\n";

// The set of zeal modes that control incremental slices. These modes are
// mutually exclusive.
static const mozilla::EnumSet<ZealMode> IncrementalSliceZealModes = {
    ZealMode::YieldBeforeMarking,
    ZealMode::YieldBeforeSweeping,
    ZealMode::IncrementalMultipleSlices,
    ZealMode::YieldBeforeSweepingAtoms,
    ZealMode::YieldBeforeSweepingCaches,
    ZealMode::YieldBeforeSweepingTypes,
    ZealMode::YieldBeforeSweepingObjects,
    ZealMode::YieldBeforeSweepingNonObjects,
    ZealMode::YieldBeforeSweepingShapeTrees
};

void
GCRuntime::setZeal(uint8_t zeal, uint32_t frequency)
{
    MOZ_ASSERT(zeal <= unsigned(ZealMode::Limit));

    if (temporaryAbortIfWasmGc(rt->mainContextFromOwnThread())) {
        return;
    }

    if (verifyPreData) {
        VerifyBarriers(rt, PreBarrierVerifier);
    }

    if (zeal == 0) {
        if (hasZealMode(ZealMode::GenerationalGC)) {
            evictNursery(JS::gcreason::DEBUG_GC);
            nursery().leaveZealMode();
        }

        if (isIncrementalGCInProgress()) {
            finishGC(JS::gcreason::DEBUG_GC);
        }
    }

    ZealMode zealMode = ZealMode(zeal);
    if (zealMode == ZealMode::GenerationalGC) {
        nursery().enterZealMode();
    }

    // Some modes are mutually exclusive. If we're setting one of those, we
    // first reset all of them.
    if (IncrementalSliceZealModes.contains(zealMode)) {
        for (auto mode : IncrementalSliceZealModes) {
            clearZealMode(mode);
        }
    }

    bool schedule = zealMode >= ZealMode::Alloc;
    if (zeal != 0) {
        zealModeBits |= 1 << unsigned(zeal);
    } else {
        zealModeBits = 0;
    }
    zealFrequency = frequency;
    nextScheduled = schedule ? frequency : 0;
}

void
GCRuntime::unsetZeal(uint8_t zeal)
{
    MOZ_ASSERT(zeal <= unsigned(ZealMode::Limit));
    ZealMode zealMode = ZealMode(zeal);

    if (temporaryAbortIfWasmGc(rt->mainContextFromOwnThread())) {
        return;
    }

    if (!hasZealMode(zealMode)) {
        return;
    }

    if (verifyPreData) {
        VerifyBarriers(rt, PreBarrierVerifier);
    }

    if (zealMode == ZealMode::GenerationalGC) {
        evictNursery(JS::gcreason::DEBUG_GC);
        nursery().leaveZealMode();
    }

    clearZealMode(zealMode);

    if (zealModeBits == 0) {
        if (isIncrementalGCInProgress()) {
            finishGC(JS::gcreason::DEBUG_GC);
        }

        zealFrequency = 0;
        nextScheduled = 0;
    }
}

void
GCRuntime::setNextScheduled(uint32_t count)
{
    nextScheduled = count;
}

using CharRange = mozilla::Range<const char>;
using CharRangeVector = Vector<CharRange, 0, SystemAllocPolicy>;

static bool
ParseZealModeName(CharRange text, uint32_t* modeOut)
{
    struct ModeInfo
    {
        const char* name;
        size_t length;
        uint32_t value;
    };

    static const ModeInfo zealModes[] = {
        {"None", 0},
#define ZEAL_MODE(name, value) {#name, strlen(#name), value},
        JS_FOR_EACH_ZEAL_MODE(ZEAL_MODE)
#undef ZEAL_MODE
    };

    for (auto mode : zealModes) {
        if (text.length() == mode.length &&
            memcmp(text.begin().get(), mode.name, mode.length) == 0)
        {
            *modeOut = mode.value;
            return true;
        }
    }

    return false;
}

static bool
ParseZealModeNumericParam(CharRange text, uint32_t* paramOut)
{
    if (text.length() == 0) {
        return false;
    }

    for (auto c : text) {
        if (!isdigit(c)) {
            return false;
        }
    }

    *paramOut = atoi(text.begin().get());
    return true;
}

static bool
SplitStringBy(CharRange text, char delimiter, CharRangeVector* result)
{
    auto start = text.begin();
    for (auto ptr = start; ptr != text.end(); ptr++) {
        if (*ptr == delimiter) {
            if (!result->emplaceBack(start, ptr)) {
                return false;
            }
            start = ptr + 1;
        }
    }

    return result->emplaceBack(start, text.end());
}

static bool
PrintZealHelpAndFail()
{
    fprintf(stderr, "Format: JS_GC_ZEAL=level(;level)*[,N]\n");
    fputs(ZealModeHelpText, stderr);
    return false;
}

bool
GCRuntime::parseAndSetZeal(const char* str)
{
    // Set the zeal mode from a string consisting of one or more mode specifiers
    // separated by ';', optionally followed by a ',' and the trigger frequency.
    // The mode specifiers can be a mode name or its number.
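    //
    // For example (see ZealModeHelpText above), "14" schedules a shrinking
    // collection every JS_DEFAULT_ZEAL_FREQ allocations, while
    // "YieldBeforeMarking;Alloc,50" enables modes 8 and 2 with a trigger
    // frequency of 50.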

    auto text = CharRange(str, strlen(str));

    CharRangeVector parts;
    if (!SplitStringBy(text, ',', &parts)) {
        return false;
    }

    if (parts.length() == 0 || parts.length() > 2) {
        return PrintZealHelpAndFail();
    }

    uint32_t frequency = JS_DEFAULT_ZEAL_FREQ;
    if (parts.length() == 2 && !ParseZealModeNumericParam(parts[1], &frequency)) {
        return PrintZealHelpAndFail();
    }

    CharRangeVector modes;
    if (!SplitStringBy(parts[0], ';', &modes)) {
        return false;
    }

    for (const auto& descr : modes) {
        uint32_t mode;
        if (!ParseZealModeName(descr, &mode) && !ParseZealModeNumericParam(descr, &mode)) {
            return PrintZealHelpAndFail();
        }

        setZeal(mode, frequency);
    }

    return true;
}

const char*
js::gc::AllocKindName(AllocKind kind)
{
    static const char* const names[] = {
#define EXPAND_THING_NAME(allocKind, _1, _2, _3, _4, _5, _6) \
        #allocKind,
FOR_EACH_ALLOCKIND(EXPAND_THING_NAME)
#undef EXPAND_THING_NAME
    };
    static_assert(ArrayLength(names) == size_t(AllocKind::LIMIT),
                  "names array should have an entry for every AllocKind");

    size_t i = size_t(kind);
    MOZ_ASSERT(i < ArrayLength(names));
    return names[i];
}

void
js::gc::DumpArenaInfo()
{
    fprintf(stderr, "Arena header size: %zu\n\n", ArenaHeaderSize);

    fprintf(stderr, "GC thing kinds:\n");
    fprintf(stderr, "%25s %8s %8s %8s\n", "AllocKind:", "Size:", "Count:", "Padding:");
    for (auto kind : AllAllocKinds()) {
        fprintf(stderr,
                "%25s %8zu %8zu %8zu\n",
                AllocKindName(kind),
                Arena::thingSize(kind),
                Arena::thingsPerArena(kind),
                Arena::firstThingOffset(kind) - ArenaHeaderSize);
    }
}

#endif // JS_GC_ZEAL

bool
GCRuntime::init(uint32_t maxbytes, uint32_t maxNurseryBytes)
{
    MOZ_ASSERT(SystemPageSize());

    {
        AutoLockGCBgAlloc lock(rt);

        MOZ_ALWAYS_TRUE(tunables.setParameter(JSGC_MAX_BYTES, maxbytes, lock));
        MOZ_ALWAYS_TRUE(tunables.setParameter(JSGC_MAX_NURSERY_BYTES, maxNurseryBytes, lock));
        setMaxMallocBytes(TuningDefaults::MaxMallocBytes, lock);

        const char* size = getenv("JSGC_MARK_STACK_LIMIT");
        if (size) {
            setMarkStackLimit(atoi(size), lock);
        }

        if (!nursery().init(maxNurseryBytes, lock)) {
            return false;
        }
    }

#ifdef JS_GC_ZEAL
    const char* zealSpec = getenv("JS_GC_ZEAL");
    if (zealSpec && zealSpec[0] && !parseAndSetZeal(zealSpec)) {
        return false;
    }
#endif

    if (!gcTracer.initTrace(*this)) {
        return false;
    }

    if (!marker.init(mode)) {
        return false;
    }

    if (!initSweepActions()) {
        return false;
    }

    return true;
}

void
GCRuntime::finish()
{
    // Wait for nursery background free to end and disable it to release memory.
    if (nursery().isEnabled()) {
        nursery().waitBackgroundFreeEnd();
        nursery().disable();
    }

    // Wait until the background finalization and allocation stops and the
    // helper thread shuts down before we forcefully release any remaining GC
    // memory.
    sweepTask.join();
    allocTask.cancelAndWait();
    decommitTask.cancelAndWait();

#ifdef JS_GC_ZEAL
    // Free memory associated with GC verification.
    finishVerifier();
#endif

    // Delete all remaining zones.
    if (rt->gcInitialized) {
        AutoSetThreadIsSweeping threadIsSweeping;
        for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
            for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) {
                for (RealmsInCompartmentIter realm(comp); !realm.done(); realm.next()) {
                    js_delete(realm.get());
                }
                comp->realms().clear();
                js_delete(comp.get());
            }
            zone->compartments().clear();
            js_delete(zone.get());
        }
    }

    zones().clear();

    FreeChunkPool(fullChunks_.ref());
    FreeChunkPool(availableChunks_.ref());
    FreeChunkPool(emptyChunks_.ref());

    gcTracer.finishTrace();

    nursery().printTotalProfileTimes();
    stats().printTotalProfileTimes();
}

bool
GCRuntime::setParameter(JSGCParamKey key, uint32_t value, AutoLockGC& lock)
{
    switch (key) {
      case JSGC_MAX_MALLOC_BYTES:
        setMaxMallocBytes(value, lock);
        break;
      case JSGC_SLICE_TIME_BUDGET:
        defaultTimeBudget_ = value ? value : SliceBudget::UnlimitedTimeBudget;
        break;
      case JSGC_MARK_STACK_LIMIT:
        if (value == 0) {
            return false;
        }
        setMarkStackLimit(value, lock);
        break;
      case JSGC_MODE:
        if (value != JSGC_MODE_GLOBAL &&
            value != JSGC_MODE_ZONE &&
            value != JSGC_MODE_INCREMENTAL)
        {
            return false;
        }
        mode = JSGCMode(value);
        break;
      case JSGC_COMPACTING_ENABLED:
        compactingEnabled = value != 0;
        break;
      default:
        if (!tunables.setParameter(key, value, lock)) {
            return false;
        }
        for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
            zone->threshold.updateAfterGC(zone->usage.gcBytes(), GC_NORMAL, tunables,
                                          schedulingState, lock);
        }
    }

    return true;
}

bool
GCSchedulingTunables::setParameter(JSGCParamKey key, uint32_t value, const AutoLockGC& lock)
{
    // Limit the heap growth factor to at most one hundred times the current
    // heap size.
    const float MaxHeapGrowthFactor = 100;

    switch(key) {
      case JSGC_MAX_BYTES:
        gcMaxBytes_ = value;
        break;
      case JSGC_MAX_NURSERY_BYTES:
        gcMaxNurseryBytes_ = value;
        break;
      case JSGC_HIGH_FREQUENCY_TIME_LIMIT:
        highFrequencyThreshold_ = TimeDuration::FromMilliseconds(value);
        break;
      case JSGC_HIGH_FREQUENCY_LOW_LIMIT: {
        CheckedInt<size_t> newLimit = CheckedInt<size_t>(value) * 1024 * 1024;
        if (!newLimit.isValid()) {
            return false;
        }
        setHighFrequencyLowLimit(newLimit.value());
        break;
      }
      case JSGC_HIGH_FREQUENCY_HIGH_LIMIT: {
        size_t newLimit = (size_t)value * 1024 * 1024;
        if (newLimit == 0) {
            return false;
        }
        setHighFrequencyHighLimit(newLimit);
        break;
      }
      case JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MAX: {
        float newGrowth = value / 100.0f;
        if (newGrowth < MinHighFrequencyHeapGrowthFactor || newGrowth > MaxHeapGrowthFactor) {
            return false;
        }
        setHighFrequencyHeapGrowthMax(newGrowth);
        break;
      }
      case JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MIN: {
        float newGrowth = value / 100.0f;
        if (newGrowth < MinHighFrequencyHeapGrowthFactor || newGrowth > MaxHeapGrowthFactor) {
            return false;
        }
        setHighFrequencyHeapGrowthMin(newGrowth);
        break;
      }
      case JSGC_LOW_FREQUENCY_HEAP_GROWTH: {
        float newGrowth = value / 100.0f;
        if (newGrowth < MinLowFrequencyHeapGrowthFactor || newGrowth > MaxHeapGrowthFactor) {
            return false;
        }
        setLowFrequencyHeapGrowth(newGrowth);
        break;
      }
      case JSGC_DYNAMIC_HEAP_GROWTH:
        dynamicHeapGrowthEnabled_ = value != 0;
        break;
      case JSGC_DYNAMIC_MARK_SLICE:
        dynamicMarkSliceEnabled_ = value != 0;
        break;
      case JSGC_ALLOCATION_THRESHOLD:
        gcZoneAllocThresholdBase_ = value * 1024 * 1024;
        break;
      case JSGC_ALLOCATION_THRESHOLD_FACTOR: {
        float newFactor = value / 100.0f;
        if (newFactor < MinAllocationThresholdFactor || newFactor > 1.0f) {
            return false;
        }
        allocThresholdFactor_ = newFactor;
        break;
      }
      case JSGC_ALLOCATION_THRESHOLD_FACTOR_AVOID_INTERRUPT: {
        float newFactor = value / 100.0f;
        if (newFactor < MinAllocationThresholdFactor || newFactor > 1.0f) {
            return false;
        }
        allocThresholdFactorAvoidInterrupt_ = newFactor;
        break;
      }
      case JSGC_MIN_EMPTY_CHUNK_COUNT:
        setMinEmptyChunkCount(value);
        break;
      case JSGC_MAX_EMPTY_CHUNK_COUNT:
        setMaxEmptyChunkCount(value);
        break;
      case JSGC_NURSERY_FREE_THRESHOLD_FOR_IDLE_COLLECTION:
        if (value > gcMaxNurseryBytes()) {
            value = gcMaxNurseryBytes();
        }
        nurseryFreeThresholdForIdleCollection_ = value;
        break;
      default:
        MOZ_CRASH("Unknown GC parameter.");
    }

    return true;
}

void
GCSchedulingTunables::setMaxMallocBytes(size_t value)
{
    maxMallocBytes_ = std::min(value, TuningDefaults::MallocThresholdLimit);
}

void
GCSchedulingTunables::setHighFrequencyLowLimit(size_t newLimit)
{
    highFrequencyLowLimitBytes_ = newLimit;
    if (highFrequencyLowLimitBytes_ >= highFrequencyHighLimitBytes_) {
        highFrequencyHighLimitBytes_ = highFrequencyLowLimitBytes_ + 1;
    }
    MOZ_ASSERT(highFrequencyHighLimitBytes_ > highFrequencyLowLimitBytes_);
}

void
GCSchedulingTunables::setHighFrequencyHighLimit(size_t newLimit)
{
    highFrequencyHighLimitBytes_ = newLimit;
    if (highFrequencyHighLimitBytes_ <= highFrequencyLowLimitBytes_) {
        highFrequencyLowLimitBytes_ = highFrequencyHighLimitBytes_ - 1;
    }
    MOZ_ASSERT(highFrequencyHighLimitBytes_ > highFrequencyLowLimitBytes_);
}

void
GCSchedulingTunables::setHighFrequencyHeapGrowthMin(float value)
{
    highFrequencyHeapGrowthMin_ = value;
    if (highFrequencyHeapGrowthMin_ > highFrequencyHeapGrowthMax_) {
        highFrequencyHeapGrowthMax_ = highFrequencyHeapGrowthMin_;
    }
    MOZ_ASSERT(highFrequencyHeapGrowthMin_ >= MinHighFrequencyHeapGrowthFactor);
    MOZ_ASSERT(highFrequencyHeapGrowthMin_ <= highFrequencyHeapGrowthMax_);
}

void
GCSchedulingTunables::setHighFrequencyHeapGrowthMax(float value)
{
    highFrequencyHeapGrowthMax_ = value;
    if (highFrequencyHeapGrowthMax_ < highFrequencyHeapGrowthMin_) {
        highFrequencyHeapGrowthMin_ = highFrequencyHeapGrowthMax_;
    }
    MOZ_ASSERT(highFrequencyHeapGrowthMin_ >= MinHighFrequencyHeapGrowthFactor);
    MOZ_ASSERT(highFrequencyHeapGrowthMin_ <= highFrequencyHeapGrowthMax_);
}

void
GCSchedulingTunables::setLowFrequencyHeapGrowth(float value)
{
    lowFrequencyHeapGrowth_ = value;
    MOZ_ASSERT(lowFrequencyHeapGrowth_ >= MinLowFrequencyHeapGrowthFactor);
}

void
GCSchedulingTunables::setMinEmptyChunkCount(uint32_t value)
{
    minEmptyChunkCount_ = value;
    if (minEmptyChunkCount_ > maxEmptyChunkCount_) {
        maxEmptyChunkCount_ = minEmptyChunkCount_;
    }
    MOZ_ASSERT(maxEmptyChunkCount_ >= minEmptyChunkCount_);
}

void
GCSchedulingTunables::setMaxEmptyChunkCount(uint32_t value)
{
    maxEmptyChunkCount_ = value;
    if (minEmptyChunkCount_ > maxEmptyChunkCount_) {
        minEmptyChunkCount_ = maxEmptyChunkCount_;
    }
    MOZ_ASSERT(maxEmptyChunkCount_ >= minEmptyChunkCount_);
}

GCSchedulingTunables::GCSchedulingTunables()
  : gcMaxBytes_(0),
    maxMallocBytes_(TuningDefaults::MaxMallocBytes),
    gcMaxNurseryBytes_(0),
    gcZoneAllocThresholdBase_(TuningDefaults::GCZoneAllocThresholdBase),
    allocThresholdFactor_(TuningDefaults::AllocThresholdFactor),
    allocThresholdFactorAvoidInterrupt_(TuningDefaults::AllocThresholdFactorAvoidInterrupt),
    zoneAllocDelayBytes_(TuningDefaults::ZoneAllocDelayBytes),
    dynamicHeapGrowthEnabled_(TuningDefaults::DynamicHeapGrowthEnabled),
    highFrequencyThreshold_(TimeDuration::FromSeconds(TuningDefaults::HighFrequencyThreshold)),
    highFrequencyLowLimitBytes_(TuningDefaults::HighFrequencyLowLimitBytes),
    highFrequencyHighLimitBytes_(TuningDefaults::HighFrequencyHighLimitBytes),
    highFrequencyHeapGrowthMax_(TuningDefaults::HighFrequencyHeapGrowthMax),
    highFrequencyHeapGrowthMin_(TuningDefaults::HighFrequencyHeapGrowthMin),
    lowFrequencyHeapGrowth_(TuningDefaults::LowFrequencyHeapGrowth),
    dynamicMarkSliceEnabled_(TuningDefaults::DynamicMarkSliceEnabled),
    minEmptyChunkCount_(TuningDefaults::MinEmptyChunkCount),
    maxEmptyChunkCount_(TuningDefaults::MaxEmptyChunkCount),
    nurseryFreeThresholdForIdleCollection_(TuningDefaults::NurseryFreeThresholdForIdleCollection)
{}

void
GCRuntime::resetParameter(JSGCParamKey key, AutoLockGC& lock)
{
    switch (key) {
      case JSGC_MAX_MALLOC_BYTES:
        setMaxMallocBytes(TuningDefaults::MaxMallocBytes, lock);
        break;
      case JSGC_SLICE_TIME_BUDGET:
        defaultTimeBudget_ = TuningDefaults::DefaultTimeBudget;
        break;
      case JSGC_MARK_STACK_LIMIT:
        setMarkStackLimit(MarkStack::DefaultCapacity, lock);
        break;
      case JSGC_MODE:
        mode = TuningDefaults::Mode;
        break;
      case JSGC_COMPACTING_ENABLED:
        compactingEnabled = TuningDefaults::CompactingEnabled;
        break;
      default:
        tunables.resetParameter(key, lock);
        for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
            zone->threshold.updateAfterGC(zone->usage.gcBytes(), GC_NORMAL,
                tunables, schedulingState, lock);
        }
    }
}

void
GCSchedulingTunables::resetParameter(JSGCParamKey key, const AutoLockGC& lock)
{
    switch(key) {
      case JSGC_MAX_BYTES:
        gcMaxBytes_ = 0xffffffff;
        break;
      case JSGC_MAX_NURSERY_BYTES:
        gcMaxNurseryBytes_ = JS::DefaultNurseryBytes;
        break;
      case JSGC_HIGH_FREQUENCY_TIME_LIMIT:
        highFrequencyThreshold_ =
            TimeDuration::FromSeconds(TuningDefaults::HighFrequencyThreshold);
        break;
      case JSGC_HIGH_FREQUENCY_LOW_LIMIT:
        setHighFrequencyLowLimit(TuningDefaults::HighFrequencyLowLimitBytes);
        break;
      case JSGC_HIGH_FREQUENCY_HIGH_LIMIT:
        setHighFrequencyHighLimit(TuningDefaults::HighFrequencyHighLimitBytes);
        break;
      case JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MAX:
        setHighFrequencyHeapGrowthMax(TuningDefaults::HighFrequencyHeapGrowthMax);
        break;
      case JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MIN:
        setHighFrequencyHeapGrowthMin(TuningDefaults::HighFrequencyHeapGrowthMin);
        break;
      case JSGC_LOW_FREQUENCY_HEAP_GROWTH:
        setLowFrequencyHeapGrowth(TuningDefaults::LowFrequencyHeapGrowth);
        break;
      case JSGC_DYNAMIC_HEAP_GROWTH:
        dynamicHeapGrowthEnabled_ = TuningDefaults::DynamicHeapGrowthEnabled;
        break;
      case JSGC_DYNAMIC_MARK_SLICE:
        dynamicMarkSliceEnabled_ = TuningDefaults::DynamicMarkSliceEnabled;
        break;
      case JSGC_ALLOCATION_THRESHOLD:
        gcZoneAllocThresholdBase_ = TuningDefaults::GCZoneAllocThresholdBase;
        break;
      case JSGC_ALLOCATION_THRESHOLD_FACTOR:
        allocThresholdFactor_ = TuningDefaults::AllocThresholdFactor;
        break;
      case JSGC_ALLOCATION_THRESHOLD_FACTOR_AVOID_INTERRUPT:
        allocThresholdFactorAvoidInterrupt_ = TuningDefaults::AllocThresholdFactorAvoidInterrupt;
        break;
      case JSGC_MIN_EMPTY_CHUNK_COUNT:
        setMinEmptyChunkCount(TuningDefaults::MinEmptyChunkCount);
        break;
      case JSGC_MAX_EMPTY_CHUNK_COUNT:
        setMaxEmptyChunkCount(TuningDefaults::MaxEmptyChunkCount);
        break;
      case JSGC_NURSERY_FREE_THRESHOLD_FOR_IDLE_COLLECTION:
        nurseryFreeThresholdForIdleCollection_ =
            TuningDefaults::NurseryFreeThresholdForIdleCollection;
        break;
      default:
        MOZ_CRASH("Unknown GC parameter.");
    }
}

uint32_t
GCRuntime::getParameter(JSGCParamKey key, const AutoLockGC& lock)
{
    switch (key) {
      case JSGC_MAX_BYTES:
        return uint32_t(tunables.gcMaxBytes());
      case JSGC_MAX_MALLOC_BYTES:
        return mallocCounter.maxBytes();
      case JSGC_BYTES:
        return uint32_t(usage.gcBytes());
      case JSGC_MODE:
        return uint32_t(mode);
      case JSGC_UNUSED_CHUNKS:
        return uint32_t(emptyChunks(lock).count());
      case JSGC_TOTAL_CHUNKS:
        return uint32_t(fullChunks(lock).count() +
                        availableChunks(lock).count() +
                        emptyChunks(lock).count());
      case JSGC_SLICE_TIME_BUDGET:
        if (defaultTimeBudget_.ref() == SliceBudget::UnlimitedTimeBudget) {
            return 0;
        } else {
            MOZ_RELEASE_ASSERT(defaultTimeBudget_ >= 0);
            MOZ_RELEASE_ASSERT(defaultTimeBudget_ <= UINT32_MAX);
            return uint32_t(defaultTimeBudget_);
        }
      case JSGC_MARK_STACK_LIMIT:
        return marker.maxCapacity();
      case JSGC_HIGH_FREQUENCY_TIME_LIMIT:
        return tunables.highFrequencyThreshold().ToMilliseconds();
      case JSGC_HIGH_FREQUENCY_LOW_LIMIT:
        return tunables.highFrequencyLowLimitBytes() / 1024 / 1024;
      case JSGC_HIGH_FREQUENCY_HIGH_LIMIT:
        return tunables.highFrequencyHighLimitBytes() / 1024 / 1024;
      case JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MAX:
        return uint32_t(tunables.highFrequencyHeapGrowthMax() * 100);
      case JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MIN:
        return uint32_t(tunables.highFrequencyHeapGrowthMin() * 100);
      case JSGC_LOW_FREQUENCY_HEAP_GROWTH:
        return uint32_t(tunables.lowFrequencyHeapGrowth() * 100);
      case JSGC_DYNAMIC_HEAP_GROWTH:
        return tunables.isDynamicHeapGrowthEnabled();
      case JSGC_DYNAMIC_MARK_SLICE:
        return tunables.isDynamicMarkSliceEnabled();
      case JSGC_ALLOCATION_THRESHOLD:
        return tunables.gcZoneAllocThresholdBase() / 1024 / 1024;
      case JSGC_ALLOCATION_THRESHOLD_FACTOR:
        return uint32_t(tunables.allocThresholdFactor() * 100);
      case JSGC_ALLOCATION_THRESHOLD_FACTOR_AVOID_INTERRUPT:
        return uint32_t(tunables.allocThresholdFactorAvoidInterrupt() * 100);
      case JSGC_MIN_EMPTY_CHUNK_COUNT:
        return tunables.minEmptyChunkCount(lock);
      case JSGC_MAX_EMPTY_CHUNK_COUNT:
        return tunables.maxEmptyChunkCount();
      case JSGC_COMPACTING_ENABLED:
        return compactingEnabled;
      default:
        MOZ_ASSERT(key == JSGC_NUMBER);
        return uint32_t(number);
    }
}

void
GCRuntime::setMarkStackLimit(size_t limit, AutoLockGC& lock)
{
    MOZ_ASSERT(!JS::RuntimeHeapIsBusy());
    AutoUnlockGC unlock(lock);
    AutoStopVerifyingBarriers pauseVerification(rt, false);
    marker.setMaxCapacity(limit);
}

bool
GCRuntime::addBlackRootsTracer(JSTraceDataOp traceOp, void* data)
{
    AssertHeapIsIdle();
    return !!blackRootTracers.ref().append(Callback<JSTraceDataOp>(traceOp, data));
}

void
GCRuntime::removeBlackRootsTracer(JSTraceDataOp traceOp, void* data)
{
    // Can be called from finalizers
    for (size_t i = 0; i < blackRootTracers.ref().length(); i++) {
        Callback<JSTraceDataOp>* e = &blackRootTracers.ref()[i];
        if (e->op == traceOp && e->data == data) {
            blackRootTracers.ref().erase(e);
        }
    }
}

void
GCRuntime::setGrayRootsTracer(JSTraceDataOp traceOp, void* data)
{
    AssertHeapIsIdle();
    grayRootTracer.op = traceOp;
    grayRootTracer.data = data;
}

void
GCRuntime::setGCCallback(JSGCCallback callback, void* data)
{
    gcCallback.op = callback;
    gcCallback.data = data;
}

void
GCRuntime::callGCCallback(JSGCStatus status) const
{
    MOZ_ASSERT(gcCallback.op);
    gcCallback.op(rt->mainContextFromOwnThread(), status, gcCallback.data);
}

void
GCRuntime::setObjectsTenuredCallback(JSObjectsTenuredCallback callback,
                                     void* data)
{
    tenuredCallback.op = callback;
    tenuredCallback.data = data;
}

void
GCRuntime::callObjectsTenuredCallback()
{
    JS::AutoSuppressGCAnalysis nogc;
    if (tenuredCallback.op) {
        tenuredCallback.op(rt->mainContextFromOwnThread(), tenuredCallback.data);
    }
}

bool
GCRuntime::addFinalizeCallback(JSFinalizeCallback callback, void* data)
{
    return finalizeCallbacks.ref().append(Callback<JSFinalizeCallback>(callback, data));
}

void
GCRuntime::removeFinalizeCallback(JSFinalizeCallback callback)
{
    for (Callback<JSFinalizeCallback>* p = finalizeCallbacks.ref().begin();
         p < finalizeCallbacks.ref().end(); p++)
    {
        if (p->op == callback) {
            finalizeCallbacks.ref().erase(p);
            break;
        }
    }
}

void
GCRuntime::callFinalizeCallbacks(FreeOp* fop, JSFinalizeStatus status) const
{
    for (auto& p : finalizeCallbacks.ref()) {
        p.op(fop, status, p.data);
    }
}

bool
GCRuntime::addWeakPointerZonesCallback(JSWeakPointerZonesCallback callback, void* data)
{
    return updateWeakPointerZonesCallbacks.ref().append(
            Callback<JSWeakPointerZonesCallback>(callback, data));
}

void
GCRuntime::removeWeakPointerZonesCallback(JSWeakPointerZonesCallback callback)
{
    for (auto& p : updateWeakPointerZonesCallbacks.ref()) {
        if (p.op == callback) {
            updateWeakPointerZonesCallbacks.ref().erase(&p);
            break;
        }
    }
}

void
GCRuntime::callWeakPointerZonesCallbacks() const
{
    JSContext* cx = rt->mainContextFromOwnThread();
    for (auto const& p : updateWeakPointerZonesCallbacks.ref()) {
        p.op(cx, p.data);
    }
}

bool
GCRuntime::addWeakPointerCompartmentCallback(JSWeakPointerCompartmentCallback callback, void* data)
{
    return updateWeakPointerCompartmentCallbacks.ref().append(
            Callback<JSWeakPointerCompartmentCallback>(callback, data));
}

void
GCRuntime::removeWeakPointerCompartmentCallback(JSWeakPointerCompartmentCallback callback)
{
    for (auto& p : updateWeakPointerCompartmentCallbacks.ref()) {
        if (p.op == callback) {
            updateWeakPointerCompartmentCallbacks.ref().erase(&p);
            break;
        }
    }
}

void
GCRuntime::callWeakPointerCompartmentCallbacks(JS::Compartment* comp) const
{
    JSContext* cx = rt->mainContextFromOwnThread();
    for (auto const& p : updateWeakPointerCompartmentCallbacks.ref()) {
        p.op(cx, comp, p.data);
    }
}

JS::GCSliceCallback
GCRuntime::setSliceCallback(JS::GCSliceCallback callback) {
    return stats().setSliceCallback(callback);
}

JS::GCNurseryCollectionCallback
GCRuntime::setNurseryCollectionCallback(JS::GCNurseryCollectionCallback callback) {
    return stats().setNurseryCollectionCallback(callback);
}

JS::DoCycleCollectionCallback
GCRuntime::setDoCycleCollectionCallback(JS::DoCycleCollectionCallback callback)
{
    auto prior = gcDoCycleCollectionCallback;
    gcDoCycleCollectionCallback = Callback<JS::DoCycleCollectionCallback>(callback, nullptr);
    return prior.op;
}

void
GCRuntime::callDoCycleCollectionCallback(JSContext* cx)
{
    if (gcDoCycleCollectionCallback.op) {
        gcDoCycleCollectionCallback.op(cx);
    }
}

bool
GCRuntime::addRoot(Value* vp, const char* name)
{
    /*
     * Sometimes Firefox will hold weak references to objects and then convert
     * them to strong references by calling AddRoot (e.g., via PreserveWrapper,
     * or ModifyBusyCount in workers). We need a read barrier to cover these
     * cases.
     */
    if (isIncrementalGCInProgress()) {
        GCPtrValue::writeBarrierPre(*vp);
    }

    return rootsHash.ref().put(vp, name);
}

void
GCRuntime::removeRoot(Value* vp)
{
    rootsHash.ref().remove(vp);
    notifyRootsRemoved();
}

extern JS_FRIEND_API bool
js::AddRawValueRoot(JSContext* cx, Value* vp, const char* name)
{
    MOZ_ASSERT(vp);
    MOZ_ASSERT(name);
    bool ok = cx->runtime()->gc.addRoot(vp, name);
    if (!ok) {
        JS_ReportOutOfMemory(cx);
    }
    return ok;
}

extern JS_FRIEND_API void
js::RemoveRawValueRoot(JSContext* cx, Value* vp)
{
    cx->runtime()->gc.removeRoot(vp);
}

void
GCRuntime::setMaxMallocBytes(size_t value, const AutoLockGC& lock)
{
    tunables.setMaxMallocBytes(value);
    mallocCounter.setMax(value, lock);
    for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
        zone->setGCMaxMallocBytes(value, lock);
    }
}

float
ZoneHeapThreshold::eagerAllocTrigger(bool highFrequencyGC) const
{
    float eagerTriggerFactor = highFrequencyGC ? HighFrequencyEagerAllocTriggerFactor
                                               : LowFrequencyEagerAllocTriggerFactor;
    return eagerTriggerFactor * gcTriggerBytes();
}

/* static */ float
ZoneHeapThreshold::computeZoneHeapGrowthFactorForHeapSize(size_t lastBytes,
                                                          const GCSchedulingTunables& tunables,
                                                          const GCSchedulingState& state)
{
    if (!tunables.isDynamicHeapGrowthEnabled()) {
        return 3.0f;
    }

    // For small zones, our collection heuristics do not matter much: favor
    // something simple in this case.
    if (lastBytes < 1 * 1024 * 1024) {
        return tunables.lowFrequencyHeapGrowth();
    }

    // If GCs are not triggering in rapid succession, use a lower threshold so
    // that we will collect garbage sooner.
    if (!state.inHighFrequencyGCMode()) {
        return tunables.lowFrequencyHeapGrowth();
    }

    // The heap growth factor depends on the heap size after a GC and the GC
    // frequency. For low frequency GCs (more than 1sec between GCs) we let
    // the heap grow to 150%. For high frequency GCs we let the heap grow
    // depending on the heap size:
    //   lastBytes < highFrequencyLowLimit: 300%
    //   lastBytes > highFrequencyHighLimit: 150%
    //   otherwise: linear interpolation between 300% and 150% based on lastBytes
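    //
    // For example (illustrative numbers only): with minRatio = 1.5, maxRatio =
    // 3.0, lowLimit = 100MB and highLimit = 500MB, a zone with lastBytes =
    // 300MB gets a growth factor of 3.0 - (3.0 - 1.5) * (200MB / 400MB) =
    // 2.25, i.e. its trigger is set to roughly 225% of its post-GC heap size.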

    float minRatio = tunables.highFrequencyHeapGrowthMin();
    float maxRatio = tunables.highFrequencyHeapGrowthMax();
    size_t lowLimit = tunables.highFrequencyLowLimitBytes();
    size_t highLimit = tunables.highFrequencyHighLimitBytes();

    MOZ_ASSERT(minRatio <= maxRatio);
    MOZ_ASSERT(lowLimit < highLimit);

    if (lastBytes <= lowLimit) {
        return maxRatio;
    }

    if (lastBytes >= highLimit) {
        return minRatio;
    }

    float factor = maxRatio - ((maxRatio - minRatio) * (float(lastBytes - lowLimit) /
                                                        float(highLimit - lowLimit)));

    MOZ_ASSERT(factor >= minRatio);
    MOZ_ASSERT(factor <= maxRatio);
    return factor;
}

/* static */ size_t
ZoneHeapThreshold::computeZoneTriggerBytes(float growthFactor, size_t lastBytes,
                                           JSGCInvocationKind gckind,
                                           const GCSchedulingTunables& tunables,
                                           const AutoLockGC& lock)
{
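    // The trigger is the post-GC heap size (bounded below by a kind-dependent
    // minimum) scaled by the growth factor and clamped to gcMaxBytes.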
    size_t base = gckind == GC_SHRINK
                ? Max(lastBytes, tunables.minEmptyChunkCount(lock) * ChunkSize)
                : Max(lastBytes, tunables.gcZoneAllocThresholdBase());
    float trigger = float(base) * growthFactor;
    return size_t(Min(float(tunables.gcMaxBytes()), trigger));
}

void
ZoneHeapThreshold::updateAfterGC(size_t lastBytes, JSGCInvocationKind gckind,
                                 const GCSchedulingTunables& tunables,
                                 const GCSchedulingState& state, const AutoLockGC& lock)
{
    gcHeapGrowthFactor_ = computeZoneHeapGrowthFactorForHeapSize(lastBytes, tunables, state);
    gcTriggerBytes_ = computeZoneTriggerBytes(gcHeapGrowthFactor_, lastBytes, gckind, tunables,
                                              lock);
}

void
ZoneHeapThreshold::updateForRemovedArena(const GCSchedulingTunables& tunables)
{
    size_t amount = ArenaSize * gcHeapGrowthFactor_;
    MOZ_ASSERT(amount > 0);

    if ((gcTriggerBytes_ < amount) ||
        (gcTriggerBytes_ - amount < tunables.gcZoneAllocThresholdBase() * gcHeapGrowthFactor_))
    {
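        // Removing this amount would underflow the trigger or take it below
        // the scaled base allocation threshold, so leave it unchanged.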
        return;
    }

    gcTriggerBytes_ -= amount;
}

MemoryCounter::MemoryCounter()
  : bytes_(0),
    maxBytes_(0),
    triggered_(NoTrigger)
{}

void
MemoryCounter::updateOnGCStart()
{
    // Record the current byte count at the start of GC.
    bytesAtStartOfGC_ = bytes_;
}

void
MemoryCounter::updateOnGCEnd(const GCSchedulingTunables& tunables, const AutoLockGC& lock)
{
    // Update the trigger threshold at the end of GC and adjust the current
    // byte count to reflect bytes allocated since the start of GC.
    MOZ_ASSERT(bytes_ >= bytesAtStartOfGC_);
    if (shouldTriggerGC(tunables)) {
        maxBytes_ = std::min(TuningDefaults::MallocThresholdLimit,
                             size_t(maxBytes_ * TuningDefaults::MallocThresholdGrowFactor));
    } else {
        maxBytes_ = std::max(tunables.maxMallocBytes(),
                             size_t(maxBytes_ * TuningDefaults::MallocThresholdShrinkFactor));
    }
    bytes_ -= bytesAtStartOfGC_;
    triggered_ = NoTrigger;
}

void
MemoryCounter::setMax(size_t newMax, const AutoLockGC& lock)
{
    maxBytes_ = newMax;
}

void
MemoryCounter::adopt(MemoryCounter& other)
{
    update(other.bytes());
    other.bytes_ = 0;
    other.triggered_ = NoTrigger;
}

void
MemoryCounter::recordTrigger(TriggerKind trigger)
{
    MOZ_ASSERT(trigger > triggered_);
    triggered_ = trigger;
}

void
GCMarker::delayMarkingArena(Arena* arena)
{
    if (arena->hasDelayedMarking) {
        /* Arena already scheduled to be marked later */
        return;
    }
    arena->setNextDelayedMarking(unmarkedArenaStackTop);
    unmarkedArenaStackTop = arena;
#ifdef DEBUG
    markLaterArenas++;
#endif
}

void
GCMarker::delayMarkingChildren(const void* thing)
{
    const TenuredCell* cell = TenuredCell::fromPointer(thing);
    cell->arena()->markOverflow = 1;
    delayMarkingArena(cell->arena());
}

/* Compacting GC */

bool
GCRuntime::shouldCompact()
{
    // Compact on shrinking GC if enabled.  Skip compacting in incremental GCs
    // if we are currently animating, unless the user is inactive or we're
    // responding to memory pressure.

    static const auto oneSecond = TimeDuration::FromSeconds(1);

    if (invocationKind != GC_SHRINK || !isCompactingGCEnabled()) {
        return false;
    }

    if (initialReason == JS::gcreason::USER_INACTIVE ||
        initialReason == JS::gcreason::MEM_PRESSURE)
    {
        return true;
    }

    const auto& lastAnimationTime = rt->lastAnimationTime.ref();
    return !isIncremental
        || lastAnimationTime.IsNull()
        || lastAnimationTime + oneSecond < TimeStamp::Now();
}

bool
GCRuntime::isCompactingGCEnabled() const
{
    return compactingEnabled
        && rt->mainContextFromOwnThread()->compactingDisabledCount == 0
        && !mozilla::recordreplay::IsRecordingOrReplaying();
}

AutoDisableCompactingGC::AutoDisableCompactingGC(JSContext* cx)
  : cx(cx)
{
    ++cx->compactingDisabledCount;
    if (cx->runtime()->gc.isIncrementalGCInProgress() && cx->runtime()->gc.isCompactingGc()) {
        FinishGC(cx);
    }
}

AutoDisableCompactingGC::~AutoDisableCompactingGC()
{
    MOZ_ASSERT(cx->compactingDisabledCount > 0);
    --cx->compactingDisabledCount;
}

static bool
CanRelocateZone(Zone* zone)
{
    return !zone->isAtomsZone() && !zone->isSelfHostingZone();
}

Arena*
ArenaList::removeRemainingArenas(Arena** arenap)
{
    // This is only ever called to remove arenas that are after the cursor, so
    // we don't need to update it.
#ifdef DEBUG
    for (Arena* arena = *arenap; arena; arena = arena->next) {
        MOZ_ASSERT(cursorp_ != &arena->next);
    }
#endif
    Arena* remainingArenas = *arenap;
    *arenap = nullptr;
    check();
    return remainingArenas;
}

static bool
ShouldRelocateAllArenas(JS::gcreason::Reason reason)
{
    return reason == JS::gcreason::DEBUG_GC;
}

/*
 * Choose which arenas to relocate all cells from. Return an arena cursor that
 * can be passed to removeRemainingArenas().
 */
Arena**
ArenaList::pickArenasToRelocate(size_t& arenaTotalOut, size_t& relocTotalOut)
{
    // Relocate the greatest number of arenas such that the number of used cells
    // in relocated arenas is less than or equal to the number of free cells in
    // unrelocated arenas. In other words we only relocate cells we can move
    // into existing arenas, and we choose the least full arenas to relocate.
    //
    // This is made easier by the fact that the arena list has been sorted in
    // descending order of number of used cells, so we will always relocate a
    // tail of the arena list. All we need to do is find the point at which to
    // start relocating.

    check();

    if (isCursorAtEnd()) {
        return nullptr;
    }

    Arena** arenap = cursorp_;     // Next arena to consider for relocation.
    size_t previousFreeCells = 0;  // Count of free cells before arenap.
    size_t followingUsedCells = 0; // Count of used cells after arenap.
    size_t fullArenaCount = 0;     // Number of full arenas (not relocated).
    size_t nonFullArenaCount = 0;  // Number of non-full arenas (considered for relocation).
    size_t arenaIndex = 0;         // Index of the next arena to consider.

    for (Arena* arena = head_; arena != *cursorp_; arena = arena->next) {
        fullArenaCount++;
    }

    for (Arena* arena = *cursorp_; arena; arena = arena->next) {
        followingUsedCells += arena->countUsedCells();
        nonFullArenaCount++;
    }

    mozilla::DebugOnly<size_t> lastFreeCells(0);
    size_t cellsPerArena = Arena::thingsPerArena((*arenap)->getAllocKind());

    while (*arenap) {
        Arena* arena = *arenap;
        if (followingUsedCells <= previousFreeCells) {
            break;
        }

        size_t freeCells = arena->countFreeCells();
        size_t usedCells = cellsPerArena - freeCells;
        followingUsedCells -= usedCells;
#ifdef DEBUG
        MOZ_ASSERT(freeCells >= lastFreeCells);
        lastFreeCells = freeCells;
#endif
        previousFreeCells += freeCells;
        arenap = &arena->next;
        arenaIndex++;
    }

    size_t relocCount = nonFullArenaCount - arenaIndex;
    MOZ_ASSERT(relocCount < nonFullArenaCount);
    MOZ_ASSERT((relocCount == 0) == (!*arenap));
    arenaTotalOut += fullArenaCount + nonFullArenaCount;
    relocTotalOut += relocCount;

    return arenap;
}

#ifdef DEBUG
inline bool
PtrIsInRange(const void* ptr, const void* start, size_t length)
{
    return uintptr_t(ptr) - uintptr_t(start) < length;
}
#endif

static void
RelocateCell(Zone* zone, TenuredCell* src, AllocKind thingKind, size_t thingSize)
{
    JS::AutoSuppressGCAnalysis nogc(TlsContext.get());

    // Allocate a new cell.
    MOZ_ASSERT(zone == src->zone());
    TenuredCell* dst = AllocateCellInGC(zone, thingKind);

    // Copy source cell contents to destination.
    memcpy(dst, src, thingSize);

    // Move any uid attached to the object.
    src->zone()->transferUniqueId(dst, src);

    if (IsObjectAllocKind(thingKind)) {
        JSObject* srcObj = static_cast<JSObject*>(static_cast<Cell*>(src));
        JSObject* dstObj = static_cast<JSObject*>(static_cast<Cell*>(dst));

        if (srcObj->isNative()) {
            NativeObject* srcNative = &srcObj->as<NativeObject>();
            NativeObject* dstNative = &dstObj->as<NativeObject>();

            // Fixup the pointer to inline object elements if necessary.
            if (srcNative->hasFixedElements()) {
                uint32_t numShifted = srcNative->getElementsHeader()->numShiftedElements();
                dstNative->setFixedElements(numShifted);
            }

            // For copy-on-write objects that own their elements, fix up the
            // owner pointer to point to the relocated object.
            if (srcNative->denseElementsAreCopyOnWrite()) {
                GCPtrNativeObject& owner = dstNative->getElementsHeader()->ownerObject();
                if (owner == srcNative) {
                    owner = dstNative;
                }
            }
        } else if (srcObj->is<ProxyObject>()) {
            if (srcObj->as<ProxyObject>().usingInlineValueArray()) {
                dstObj->as<ProxyObject>().setInlineValueArray();
            }
        }

        // Call object moved hook if present.
        if (JSObjectMovedOp op = srcObj->getClass()->extObjectMovedOp()) {
            op(dstObj, srcObj);
        }

        MOZ_ASSERT_IF(dstObj->isNative(),
                      !PtrIsInRange((const Value*)dstObj->as<NativeObject>().getDenseElements(),
                                    src, thingSize));
    }

    // Copy the mark bits.
    dst->copyMarkBitsFrom(src);

    // Mark source cell as forwarded and leave a pointer to the destination.
    RelocationOverlay* overlay = RelocationOverlay::fromCell(src);
    overlay->forwardTo(dst);
}

static void
RelocateArena(Arena* arena, SliceBudget& sliceBudget)
{
    MOZ_ASSERT(arena->allocated());
    MOZ_ASSERT(!arena->hasDelayedMarking);
    MOZ_ASSERT(!arena->markOverflow);
    MOZ_ASSERT(arena->bufferedCells()->isEmpty());

    Zone* zone = arena->zone;

    AllocKind thingKind = arena->getAllocKind();
    size_t thingSize = arena->getThingSize();

    for (ArenaCellIterUnderGC i(arena); !i.done(); i.next()) {
        RelocateCell(zone, i.getCell(), thingKind, thingSize);
        sliceBudget.step();
    }

#ifdef DEBUG
    for (ArenaCellIterUnderGC i(arena); !i.done(); i.next()) {
        TenuredCell* src = i.getCell();
        MOZ_ASSERT(src->isForwarded());
        TenuredCell* dest = Forwarded(src);
        MOZ_ASSERT(src->isMarkedBlack() == dest->isMarkedBlack());
        MOZ_ASSERT(src->isMarkedGray() == dest->isMarkedGray());
    }
#endif
}

static inline bool
CanProtectArenas()
{
    // On some systems the page size is larger than the size of an arena so we
    // can't change the mapping permissions per arena.
    return SystemPageSize() <= ArenaSize;
}

static inline bool
ShouldProtectRelocatedArenas(JS::gcreason::Reason reason)
{
    // For zeal mode collections we don't release the relocated arenas
    // immediately. Instead we protect them and keep them around until the next
    // collection so we can catch any stray accesses to them.
#ifdef DEBUG
    return reason == JS::gcreason::DEBUG_GC && CanProtectArenas();
#else
    return false;
#endif
}

/*
 * Relocate all arenas identified by pickArenasToRelocate: for each arena,
 * relocate each cell within it, then add it to a list of relocated arenas.
 */
Arena*
ArenaList::relocateArenas(Arena* toRelocate, Arena* relocated, SliceBudget& sliceBudget,
                          gcstats::Statistics& stats)
{
    check();

    while (Arena* arena = toRelocate) {
        toRelocate = arena->next;
        RelocateArena(arena, sliceBudget);
        // Prepend to list of relocated arenas
        arena->next = relocated;
        relocated = arena;
        stats.count(gcstats::COUNT_ARENA_RELOCATED);
    }

    check();

    return relocated;
}

// Skip compacting zones unless we can free a certain proportion of their GC
// heap memory.
static const float MIN_ZONE_RECLAIM_PERCENT = 2.0;

static bool
ShouldRelocateZone(size_t arenaCount, size_t relocCount, JS::gcreason::Reason reason)
{
    if (relocCount == 0) {
        return false;
    }

    if (IsOOMReason(reason)) {
        return true;
    }

    return (relocCount * 100.0f) / arenaCount >= MIN_ZONE_RECLAIM_PERCENT;
}

static AllocKinds
CompactingAllocKinds()
{
    AllocKinds result;
    for (AllocKind kind : AllAllocKinds()) {
        if (IsCompactingKind(kind)) {
            result += kind;
        }
    }
    return result;
}

bool
ArenaLists::relocateArenas(Arena*& relocatedListOut, JS::gcreason::Reason reason,
                           SliceBudget& sliceBudget, gcstats::Statistics& stats)
{
    // This is only called from the main thread while we are doing a GC, so
    // there is no need to lock.
    MOZ_ASSERT(CurrentThreadCanAccessRuntime(runtime()));
    MOZ_ASSERT(runtime()->gc.isHeapCompacting());
    MOZ_ASSERT(!runtime()->gc.isBackgroundSweeping());

    // Relocate all compatible kinds
    AllocKinds allocKindsToRelocate = CompactingAllocKinds();

    // Clear all the free lists.
    clearFreeLists();

    if (ShouldRelocateAllArenas(reason)) {
        zone_->prepareForCompacting();
        for (auto kind : allocKindsToRelocate) {
            ArenaList& al = arenaLists(kind);
            Arena* allArenas = al.head();
            al.clear();
            relocatedListOut = al.relocateArenas(allArenas, relocatedListOut, sliceBudget, stats);
        }
    } else {
        size_t arenaCount = 0;
        size_t relocCount = 0;
        AllAllocKindArray<Arena**> toRelocate;

        for (auto kind : allocKindsToRelocate) {
            toRelocate[kind] = arenaLists(kind).pickArenasToRelocate(arenaCount, relocCount);
        }

        if (!ShouldRelocateZone(arenaCount, relocCount, reason)) {
            return false;
        }

        zone_->prepareForCompacting();
        for (auto kind : allocKindsToRelocate) {
            if (toRelocate[kind]) {
                ArenaList& al = arenaLists(kind);
                Arena* arenas = al.removeRemainingArenas(toRelocate[kind]);
                relocatedListOut = al.relocateArenas(arenas, relocatedListOut, sliceBudget, stats);
            }
        }
    }

    return true;
}

bool
GCRuntime::relocateArenas(Zone* zone, JS::gcreason::Reason reason, Arena*& relocatedListOut,
                          SliceBudget& sliceBudget)
{
    gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::COMPACT_MOVE);

    MOZ_ASSERT(!zone->isPreservingCode());
    MOZ_ASSERT(CanRelocateZone(zone));

    js::CancelOffThreadIonCompile(rt, JS::Zone::Compact);

    if (!zone->arenas.relocateArenas(relocatedListOut, reason, sliceBudget, stats())) {
        return false;
    }

#ifdef DEBUG
    // Check that we did as much compaction as we should have. There
    // should always be less than one arena's worth of free cells.
    for (auto kind : CompactingAllocKinds()) {
        ArenaList& al = zone->arenas.arenaLists(kind);
        size_t freeCells = 0;
        for (Arena* arena = al.arenaAfterCursor(); arena; arena = arena->next) {
            freeCells += arena->countFreeCells();
        }
        MOZ_ASSERT(freeCells < Arena::thingsPerArena(kind));
    }
#endif

    return true;
}

template <typename T>
inline void
MovingTracer::updateEdge(T** thingp)
{
    auto thing = *thingp;
    if (thing->runtimeFromAnyThread() == runtime() && IsForwarded(thing)) {
        *thingp = Forwarded(thing);
    }
}

void MovingTracer::onObjectEdge(JSObject** objp) { updateEdge(objp); }
void MovingTracer::onShapeEdge(Shape** shapep) { updateEdge(shapep); }
void MovingTracer::onStringEdge(JSString** stringp) { updateEdge(stringp); }
void MovingTracer::onScriptEdge(JSScript** scriptp) { updateEdge(scriptp); }
void MovingTracer::onLazyScriptEdge(LazyScript** lazyp) { updateEdge(lazyp); }
void MovingTracer::onBaseShapeEdge(BaseShape** basep) { updateEdge(basep); }
void MovingTracer::onScopeEdge(Scope** scopep) { updateEdge(scopep); }
void MovingTracer::onRegExpSharedEdge(RegExpShared** sharedp) { updateEdge(sharedp); }

void
Zone::prepareForCompacting()
{
    FreeOp* fop = runtimeFromMainThread()->defaultFreeOp();
    discardJitCode(fop);
}

void
GCRuntime::sweepTypesAfterCompacting(Zone* zone)
{
    zone->beginSweepTypes();

    AutoClearTypeInferenceStateOnOOM oom(zone);

    for (auto script = zone->cellIter<JSScript>(); !script.done(); script.next()) {
        AutoSweepTypeScript sweep(script);
    }
    for (auto group = zone->cellIter<ObjectGroup>(); !group.done(); group.next()) {
        AutoSweepObjectGroup sweep(group);
    }

    zone->types.endSweep(rt);
}

void
GCRuntime::sweepZoneAfterCompacting(Zone* zone)
{
    MOZ_ASSERT(zone->isCollecting());
    FreeOp* fop = rt->defaultFreeOp();
    sweepTypesAfterCompacting(zone);
    zone->sweepBreakpoints(fop);
    zone->sweepWeakMaps();
    for (auto* cache : zone->weakCaches()) {
        cache->sweep();
    }

    if (jit::JitZone* jitZone = zone->jitZone()) {
        jitZone->sweep();
    }

    for (RealmsInZoneIter r(zone); !r.done(); r.next()) {
        r->sweepObjectGroups();
        r->sweepRegExps();
        r->sweepSavedStacks();
        r->sweepVarNames();
        r->sweepGlobalObject();
        r->sweepSelfHostingScriptSource();
        r->sweepDebugEnvironments();
        r->sweepJitRealm();
        r->sweepObjectRealm();
        r->sweepTemplateObjects();
    }
}

template <typename T>
static inline void
UpdateCellPointers(MovingTracer* trc, T* cell)
{
    cell->fixupAfterMovingGC();
    cell->traceChildren(trc);
}

template <typename T>
static void
UpdateArenaPointersTyped(MovingTracer* trc, Arena* arena)
{
    for (ArenaCellIterUnderGC i(arena); !i.done(); i.next()) {
        UpdateCellPointers(trc, reinterpret_cast<T*>(i.getCell()));
    }
}

/*
 * Update the internal pointers for all cells in an arena.
 */
static void
UpdateArenaPointers(MovingTracer* trc, Arena* arena)
{
    AllocKind kind = arena->getAllocKind();

    switch (kind) {
#define EXPAND_CASE(allocKind, traceKind, type, sizedType, bgFinal, nursery, compact) \
      case AllocKind::allocKind: \
        UpdateArenaPointersTyped<type>(trc, arena); \
        return;
FOR_EACH_ALLOCKIND(EXPAND_CASE)
#undef EXPAND_CASE

      default:
        MOZ_CRASH("Invalid alloc kind for UpdateArenaPointers");
    }
}

namespace js {
namespace gc {

struct ArenaListSegment
{
    Arena* begin;
    Arena* end;
};

struct ArenasToUpdate
{
    ArenasToUpdate(Zone* zone, AllocKinds kinds);
    bool done() { return kind == AllocKind::LIMIT; }
    ArenaListSegment getArenasToUpdate(AutoLockHelperThreadState& lock, unsigned maxLength);

  private:
    AllocKinds kinds;  // Selects which thing kinds to update
    Zone* zone;        // Zone to process
    AllocKind kind;    // Current alloc kind to process
    Arena* arena;      // Next arena to process

    AllocKind nextAllocKind(AllocKind i) { return AllocKind(uint8_t(i) + 1); }
    bool shouldProcessKind(AllocKind kind);
    Arena* next(AutoLockHelperThreadState& lock);
};

ArenasToUpdate::ArenasToUpdate(Zone* zone, AllocKinds kinds)
  : kinds(kinds), zone(zone), kind(AllocKind::FIRST), arena(nullptr)
{
    MOZ_ASSERT(zone->isGCCompacting());
}

Arena*
ArenasToUpdate::next(AutoLockHelperThreadState& lock)
{
    // Find the next arena to update.
    //
    // This iterates through the GC thing kinds filtered by shouldProcessKind(),
    // and then through the arenas of that kind.  All state is held in the
    // object and we just return when we find an arena.

    for (; kind < AllocKind::LIMIT; kind = nextAllocKind(kind)) {
        if (kinds.contains(kind)) {
            if (!arena) {
                arena = zone->arenas.getFirstArena(kind);
            } else {
                arena = arena->next;
            }
            if (arena) {
                return arena;
            }
        }
    }

    MOZ_ASSERT(!arena);
    MOZ_ASSERT(done());
    return nullptr;
}

ArenaListSegment
ArenasToUpdate::getArenasToUpdate(AutoLockHelperThreadState& lock, unsigned maxLength)
{
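    // Hand out a segment of at most maxLength arenas, remembering the last one
    // so that the next call resumes iteration after it.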
    Arena* begin = next(lock);
    if (!begin) {
        return { nullptr, nullptr };
    }

    Arena* last = begin;
    unsigned count = 1;
    while (last->next && count < maxLength) {
        last = last->next;
        count++;
    }

    arena = last;
    return { begin, last->next };
}

struct UpdatePointersTask : public GCParallelTaskHelper<UpdatePointersTask>
{
    // Maximum number of arenas to update in one block.
#ifdef DEBUG
    static const unsigned MaxArenasToProcess = 16;
#else
    static const unsigned MaxArenasToProcess = 256;
#endif

    UpdatePointersTask(JSRuntime* rt, ArenasToUpdate* source, AutoLockHelperThreadState& lock)
      : GCParallelTaskHelper(rt), source_(source)
    {
        arenas_.begin = nullptr;
        arenas_.end = nullptr;
    }

    void run();

  private:
    ArenasToUpdate* source_;
    ArenaListSegment arenas_;

    bool getArenasToUpdate();
    void updateArenas();
};

bool
UpdatePointersTask::getArenasToUpdate()
{
    AutoLockHelperThreadState lock;
    arenas_ = source_->getArenasToUpdate(lock, MaxArenasToProcess);
    return arenas_.begin != nullptr;
}

void
UpdatePointersTask::updateArenas()
{
    MovingTracer trc(runtime());
    for (Arena* arena = arenas_.begin; arena != arenas_.end; arena = arena->next) {
        UpdateArenaPointers(&trc, arena);
    }
}

void
UpdatePointersTask::run()
{
    // These checks assert when run in parallel.
    AutoDisableProxyCheck noProxyCheck;

    while (getArenasToUpdate()) {
        updateArenas();
    }
}

} // namespace gc
} // namespace js

static const size_t MinCellUpdateBackgroundTasks = 2;
static const size_t MaxCellUpdateBackgroundTasks = 8;

static size_t
CellUpdateBackgroundTaskCount()
{
    if (!CanUseExtraThreads()) {
        return 0;
    }

    size_t targetTaskCount = HelperThreadState().cpuCount / 2;
    return Min(Max(targetTaskCount, MinCellUpdateBackgroundTasks), MaxCellUpdateBackgroundTasks);
}

static bool
CanUpdateKindInBackground(AllocKind kind) {
    // We try to update as many GC things in parallel as we can, but there are
    // kinds for which this might not be safe:
    //  - we assume JSObjects that are foreground finalized are not safe to
    //    update in parallel
    //  - updating a shape touches child shapes in fixupShapeTreeAfterMovingGC()
    if (!js::gc::IsBackgroundFinalized(kind) || IsShapeAllocKind(kind)) {
        return false;
    }

    return true;
}

static AllocKinds
ForegroundUpdateKinds(AllocKinds kinds)
{
    AllocKinds result;
    for (AllocKind kind : kinds) {
        if (!CanUpdateKindInBackground(kind)) {
            result += kind;
        }
    }
    return result;
}

void
GCRuntime::updateTypeDescrObjects(MovingTracer* trc, Zone* zone)
{
    // We need to update each type descriptor object and any objects stored in
    // its slots, since some of these contain array objects which also need to
    // be updated.

    zone->typeDescrObjects().sweep();

    for (auto r = zone->typeDescrObjects().all(); !r.empty(); r.popFront()) {
        NativeObject* obj = &r.front()->as<NativeObject>();
        UpdateCellPointers(trc, obj);
        for (size_t i = 0; i < obj->slotSpan(); i++) {
            Value value = obj->getSlot(i);
            if (value.isObject()) {
                UpdateCellPointers(trc, &value.toObject());
            }
        }
    }
}

void
GCRuntime::updateCellPointers(Zone* zone, AllocKinds kinds, size_t bgTaskCount)
{
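    // If no background tasks are available, all kinds are updated on the main
    // thread. Otherwise, kinds that cannot safely be updated in parallel stay
    // on the main thread and the rest are handed to background tasks.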
    AllocKinds fgKinds = bgTaskCount == 0 ? kinds : ForegroundUpdateKinds(kinds);
    AllocKinds bgKinds = kinds - fgKinds;

    ArenasToUpdate fgArenas(zone, fgKinds);
    ArenasToUpdate bgArenas(zone, bgKinds);
    Maybe<UpdatePointersTask> fgTask;
    Maybe<UpdatePointersTask> bgTasks[MaxCellUpdateBackgroundTasks];

    size_t tasksStarted = 0;

    {
        AutoLockHelperThreadState lock;

        fgTask.emplace(rt, &fgArenas, lock);

        for (size_t i = 0; i < bgTaskCount && !bgArenas.done(); i++) {
            bgTasks[i].emplace(rt, &bgArenas, lock);
            startTask(*bgTasks[i], gcstats::PhaseKind::COMPACT_UPDATE_CELLS, lock);
            tasksStarted++;
        }
    }

    fgTask->runFromMainThread(rt);

    {
        AutoLockHelperThreadState lock;

        for (size_t i = 0; i < tasksStarted; i++) {
            joinTask(*bgTasks[i], gcstats::PhaseKind::COMPACT_UPDATE_CELLS, lock);
        }
        for (size_t i = tasksStarted; i < MaxCellUpdateBackgroundTasks; i++) {
            MOZ_ASSERT(bgTasks[i].isNothing());
        }
    }
}

// After cells have been relocated, any pointers to a cell's old location must
// be updated to point to the new location.  This happens by iterating through
// all cells in the heap and tracing their children (non-recursively) to update
// them.
//
// This is complicated by the fact that updating a GC thing sometimes depends on
// making use of other GC things.  After a moving GC these things may not be in
// a valid state since they may contain pointers which have not been updated
// yet.
//
// The main dependencies are:
//
//   - Updating a JSObject makes use of its shape
//   - Updating a typed object makes use of its type descriptor object
//
// This means we require at least three phases for update:
//
//  1) shapes
//  2) typed object type descriptor objects
//  3) all other objects
//
// Also, there can be data races calling IsForwarded() on the new location of a
// cell that is being updated in parallel on another thread. This can be avoided
// by updating some kinds of cells in different phases. This is done for JSScripts
// and LazyScripts, and JSScripts and Scopes.
//
// Since we want to minimize the number of phases, arrange kinds into three
// arbitrary phases.

static const AllocKinds UpdatePhaseOne {
    AllocKind::SCRIPT,
    AllocKind::BASE_SHAPE,
    AllocKind::SHAPE,
    AllocKind::ACCESSOR_SHAPE,
    AllocKind::OBJECT_GROUP,
    AllocKind::STRING,
    AllocKind::JITCODE
};

// UpdatePhaseTwo is typed object descriptor objects.

static const AllocKinds UpdatePhaseThree {
    AllocKind::LAZY_SCRIPT,
    AllocKind::SCOPE,
    AllocKind::FUNCTION,
    AllocKind::FUNCTION_EXTENDED,
    AllocKind::OBJECT0,
    AllocKind::OBJECT0_BACKGROUND,
    AllocKind::OBJECT2,
    AllocKind::OBJECT2_BACKGROUND,
    AllocKind::OBJECT4,
    AllocKind::OBJECT4_BACKGROUND,
    AllocKind::OBJECT8,
    AllocKind::OBJECT8_BACKGROUND,
    AllocKind::OBJECT12,
    AllocKind::OBJECT12_BACKGROUND,
    AllocKind::OBJECT16,
    AllocKind::OBJECT16_BACKGROUND
};

void
GCRuntime::updateAllCellPointers(MovingTracer* trc, Zone* zone)
{
    size_t bgTaskCount = CellUpdateBackgroundTaskCount();

    updateCellPointers(zone, UpdatePhaseOne, bgTaskCount);

    // UpdatePhaseTwo: Update TypeDescrs before all other objects as typed
    // objects access these objects when we trace them.
    updateTypeDescrObjects(trc, zone);

    updateCellPointers(zone, UpdatePhaseThree, bgTaskCount);
}

/*
 * Update pointers to relocated cells in a single zone by doing a traversal of
 * that zone's arenas and calling per-zone sweep hooks.
 *
 * The latter is necessary to update weak references which are not marked as
 * part of the traversal.
 */
void
GCRuntime::updateZonePointersToRelocatedCells(Zone* zone)
{
    MOZ_ASSERT(!rt->isBeingDestroyed());
    MOZ_ASSERT(zone->isGCCompacting());

    AutoTouchingGrayThings tgt;

    gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::COMPACT_UPDATE);
    MovingTracer trc(rt);

    zone->fixupAfterMovingGC();

    // Fixup compartment global pointers as these get accessed during marking.
    for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) {
        comp->fixupAfterMovingGC();
    }

    zone->externalStringCache().purge();
    zone->functionToStringCache().purge();

    // Iterate through all cells that can contain relocatable pointers to update
    // them. Since updating each cell is independent we try to parallelize this
    // as much as possible.
    updateAllCellPointers(&trc, zone);

    // Mark roots to update them.
    {
        gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::MARK_ROOTS);

        WeakMapBase::traceZone(zone, &trc);
    }

    // Sweep everything to fix up weak pointers.
    sweepZoneAfterCompacting(zone);

    // Call callbacks to get the rest of the system to fixup other untraced pointers.
    for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) {
        callWeakPointerCompartmentCallbacks(comp);
    }
}

/*
 * Update runtime-wide pointers to relocated cells.
 */
void
GCRuntime::updateRuntimePointersToRelocatedCells(AutoGCSession& session)
{
    MOZ_ASSERT(!rt->isBeingDestroyed());

    gcstats::AutoPhase ap1(stats(), gcstats::PhaseKind::COMPACT_UPDATE);
    MovingTracer trc(rt);

    Compartment::fixupCrossCompartmentWrappersAfterMovingGC(&trc);

    rt->geckoProfiler().fixupStringsMapAfterMovingGC();

    traceRuntimeForMajorGC(&trc, session);

    // Mark roots to update them.
    {
        gcstats::AutoPhase ap2(stats(), gcstats::PhaseKind::MARK_ROOTS);
        Debugger::traceAllForMovingGC(&trc);
        Debugger::traceIncomingCrossCompartmentEdges(&trc);

        // Mark all gray roots, making sure we call the trace callback to get the
        // current set.
        if (JSTraceDataOp op = grayRootTracer.op) {
            (*op)(&trc, grayRootTracer.data);
        }
    }

    // Sweep everything to fix up weak pointers.
    Debugger::sweepAll(rt->defaultFreeOp());
    jit::JitRuntime::SweepJitcodeGlobalTable(rt);
    for (JS::detail::WeakCacheBase* cache : rt->weakCaches()) {
        cache->sweep();
    }

    // Type inference may put more blocks here to free.
    {
        AutoLockHelperThreadState lock;
        blocksToFreeAfterSweeping.ref().freeAll();
    }

    // Call callbacks to get the rest of the system to fixup other untraced pointers.
    callWeakPointerZonesCallbacks();
}

void
GCRuntime::protectAndHoldArenas(Arena* arenaList)
{
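    // Protect the pages of each relocated arena and prepend the whole list to
    // relocatedArenasToRelease. The splice is done just before protecting the
    // last arena, while its next pointer is still writable.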
    for (Arena* arena = arenaList; arena; ) {
        MOZ_ASSERT(arena->allocated());
        Arena* next = arena->next;
        if (!next) {
            // Prepend to hold list before we protect the memory.
            arena->next = relocatedArenasToRelease;
            relocatedArenasToRelease = arenaList;
        }
        ProtectPages(arena, ArenaSize);
        arena = next;
    }
}

void
GCRuntime::unprotectHeldRelocatedArenas()
{
    for (Arena* arena = relocatedArenasToRelease; arena; arena = arena->next) {
        UnprotectPages(arena, ArenaSize);
        MOZ_ASSERT(arena->allocated());
    }
}

void
GCRuntime::releaseRelocatedArenas(Arena* arenaList)
{
    AutoLockGC lock(rt);
    releaseRelocatedArenasWithoutUnlocking(arenaList, lock);
}

void
GCRuntime::releaseRelocatedArenasWithoutUnlocking(Arena* arenaList, const AutoLockGC& lock)
{
    // Release the relocated arenas, now containing only forwarding pointers
    unsigned count = 0;
    while (arenaList) {
        Arena* arena = arenaList;
        arenaList = arenaList->next;

        // Clear the mark bits
        arena->unmarkAll();

        // Mark arena as empty
        arena->setAsFullyUnused();

#if defined(JS_CRASH_DIAGNOSTICS) || defined(JS_GC_ZEAL)
        JS_POISON(reinterpret_cast<void*>(arena->thingsStart()),
                  JS_MOVED_TENURED_PATTERN, arena->getThingsSpan(),
                  MemCheckKind::MakeNoAccess);
#endif

        releaseArena(arena, lock);
        ++count;
    }
}

// In debug mode we don't always release relocated arenas straight away.
// Sometimes we protect them instead and hold onto them until the next GC sweep
// phase to catch any pointers to them that didn't get forwarded.

void
GCRuntime::releaseHeldRelocatedArenas()
{
#ifdef DEBUG
    unprotectHeldRelocatedArenas();
    Arena* arenas = relocatedArenasToRelease;
    relocatedArenasToRelease = nullptr;
    releaseRelocatedArenas(arenas);
#endif
}

void
GCRuntime::releaseHeldRelocatedArenasWithoutUnlocking(const AutoLockGC& lock)
{
#ifdef DEBUG
    unprotectHeldRelocatedArenas();
    releaseRelocatedArenasWithoutUnlocking(relocatedArenasToRelease, lock);
    relocatedArenasToRelease = nullptr;
#endif
}

FreeLists::FreeLists()
{
    for (auto i : AllAllocKinds()) {
        freeLists_[i] = &emptySentinel;
    }
}

ArenaLists::ArenaLists(Zone* zone)
  : zone_(zone),
    freeLists_(zone),
    arenaLists_(zone),
    arenaListsToSweep_(),
    incrementalSweptArenaKind(zone, AllocKind::LIMIT),
    incrementalSweptArenas(zone),
    gcShapeArenasToUpdate(zone, nullptr),
    gcAccessorShapeArenasToUpdate(zone, nullptr),
    gcScriptArenasToUpdate(zone, nullptr),
    gcObjectGroupArenasToUpdate(zone, nullptr),
    savedEmptyArenas(zone, nullptr)
{
    for (auto i : AllAllocKinds()) {
        concurrentUse(i) = ConcurrentUse::None;
        arenaListsToSweep(i) = nullptr;
    }
}

void
ReleaseArenaList(JSRuntime* rt, Arena* arena, const AutoLockGC& lock)
{
    Arena* next;
    for (; arena; arena = next) {
        next = arena->next;
        rt->gc.releaseArena(arena, lock);
    }
}

ArenaLists::~ArenaLists()
{
    AutoLockGC lock(runtime());

    for (auto i : AllAllocKinds()) {
        /*
         * We can only call this during the shutdown after the last GC when
         * the background finalization is disabled.
         */
        MOZ_ASSERT(concurrentUse(i) == ConcurrentUse::None);
        ReleaseArenaList(runtime(), arenaLists(i).head(), lock);
    }
    ReleaseArenaList(runtime(), incrementalSweptArenas.ref().head(), lock);

    ReleaseArenaList(runtime(), savedEmptyArenas, lock);
}

void
ArenaLists::queueForForegroundSweep(FreeOp* fop, const FinalizePhase& phase)
{
    gcstats::AutoPhase ap(fop->runtime()->gc.stats(), phase.statsPhase);
    for (auto kind : phase.kinds) {
        queueForForegroundSweep(kind);
    }
}

void
ArenaLists::queueForForegroundSweep(AllocKind thingKind)
{
    MOZ_ASSERT(!IsBackgroundFinalized(thingKind));
    MOZ_ASSERT(concurrentUse(thingKind) == ConcurrentUse::None);
    MOZ_ASSERT(!arenaListsToSweep(thingKind));

    arenaListsToSweep(thingKind) = arenaLists(thingKind).head();
    arenaLists(thingKind).clear();
}

void
ArenaLists::queueForBackgroundSweep(FreeOp* fop, const FinalizePhase& phase)
{
    gcstats::AutoPhase ap(fop->runtime()->gc.stats(), phase.statsPhase);
    for (auto kind : phase.kinds) {
        queueForBackgroundSweep(kind);
    }
}

inline void
ArenaLists::queueForBackgroundSweep(AllocKind thingKind)
{
    MOZ_ASSERT(IsBackgroundFinalized(thingKind));

    ArenaList* al = &arenaLists(thingKind);
    if (al->isEmpty()) {
        MOZ_ASSERT(concurrentUse(thingKind) == ConcurrentUse::None);
        return;
    }

    MOZ_ASSERT(concurrentUse(thingKind) == ConcurrentUse::None);

    arenaListsToSweep(thingKind) = al->head();
    al->clear();
    concurrentUse(thingKind) = ConcurrentUse::BackgroundFinalize;
}

/*static*/ void
ArenaLists::backgroundFinalize(FreeOp* fop, Arena* listHead, Arena** empty)
{
    MOZ_ASSERT(listHead);
    MOZ_ASSERT(empty);

    AllocKind thingKind = listHead->getAllocKind();
    Zone* zone = listHead->zone;

    size_t thingsPerArena = Arena::thingsPerArena(thingKind);
    SortedArenaList finalizedSorted(thingsPerArena);

    auto unlimited = SliceBudget::unlimited();
    FinalizeArenas(fop, &listHead, finalizedSorted, thingKind, unlimited, KEEP_ARENAS);
    MOZ_ASSERT(!listHead);

    finalizedSorted.extractEmpty(empty);

    // When arenas are queued for background finalization, all arenas are moved
    // to arenaListsToSweep[], leaving the arenaLists[] empty. However, new
    // arenas may be allocated before background finalization finishes; now that
    // finalization is complete, we want to merge these lists back together.
    ArenaLists* lists = &zone->arenas;
    ArenaList* al = &lists->arenaLists(thingKind);

    // Flatten |finalizedSorted| into a regular ArenaList.
    ArenaList finalized = finalizedSorted.toArenaList();

    // We must take the GC lock to be able to safely modify the ArenaList;
    // however, this does not by itself make the changes visible to all threads,
    // as not all threads take the GC lock to read the ArenaLists.
    // That safety is provided by the ReleaseAcquire memory ordering of the
    // background finalize state, which we explicitly set as the final step.
    {
        AutoLockGC lock(lists->runtimeFromAnyThread());
        MOZ_ASSERT(lists->concurrentUse(thingKind) == ConcurrentUse::BackgroundFinalize);

        // Join |al| and |finalized| into a single list.
        *al = finalized.insertListWithCursorAtEnd(*al);

        lists->arenaListsToSweep(thingKind) = nullptr;
    }

    lists->concurrentUse(thingKind) = ConcurrentUse::None;
}

void
ArenaLists::releaseForegroundSweptEmptyArenas()
{
    AutoLockGC lock(runtime());
    ReleaseArenaList(runtime(), savedEmptyArenas, lock);
    savedEmptyArenas = nullptr;
}

void
ArenaLists::queueForegroundThingsForSweep()
{
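    // Record the arena lists (shapes, object groups and scripts) that will be
    // swept incrementally on the main thread.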
    gcShapeArenasToUpdate = arenaListsToSweep(AllocKind::SHAPE);
    gcAccessorShapeArenasToUpdate = arenaListsToSweep(AllocKind::ACCESSOR_SHAPE);
    gcObjectGroupArenasToUpdate = arenaListsToSweep(AllocKind::OBJECT_GROUP);
    gcScriptArenasToUpdate = arenaListsToSweep(AllocKind::SCRIPT);
}

TimeStamp SliceBudget::unlimitedDeadline;

void
SliceBudget::Init()
{
    MOZ_ASSERT(!unlimitedDeadline);
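    // A deadline roughly 100 years in the future stands in for "no time limit".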
    uint64_t oneYearInSeconds = 365 * 24 * 60 * 60;
    unlimitedDeadline = ReallyNow() + TimeDuration::FromSeconds(100 * oneYearInSeconds);
}

SliceBudget::SliceBudget()
  : timeBudget(UnlimitedTimeBudget), workBudget(UnlimitedWorkBudget)
{
    makeUnlimited();
}

SliceBudget::SliceBudget(TimeBudget time)
  : timeBudget(time), workBudget(UnlimitedWorkBudget)
{
    if (time.budget < 0) {
        makeUnlimited();
    } else {
        // Note: TimeBudget(0) is equivalent to WorkBudget(CounterReset).
        deadline = ReallyNow() + TimeDuration::FromMilliseconds(time.budget);
        counter = CounterReset;
    }
}

SliceBudget::SliceBudget(WorkBudget work)
  : timeBudget(UnlimitedTimeBudget), workBudget(work)
{
    if (work.budget < 0) {
        makeUnlimited();
    } else {
        deadline = TimeStamp();
        counter = work.budget;
    }
}

int
SliceBudget::describe(char* buffer, size_t maxlen) const
{
    if (isUnlimited()) {
        return snprintf(buffer, maxlen, "unlimited");
    } else if (isWorkBudget()) {
        return snprintf(buffer, maxlen, "work(%" PRId64 ")", workBudget.budget);
    } else {
        return snprintf(buffer, maxlen, "%" PRId64 "ms", timeBudget.budget);
    }
}

bool
SliceBudget::checkOverBudget()
{
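    // A null deadline means this is a work budget: the counter is the only
    // limit, so exhausting it puts us over budget. For time budgets, reset the
    // counter and keep going until the deadline passes.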
    if (deadline.IsNull()) {
        return true;
    }

    bool over = ReallyNow() >= deadline;
    if (!over) {
        counter = CounterReset;
    }
    return over;
}

void
GCRuntime::requestMajorGC(JS::gcreason::Reason reason)
{
    MOZ_ASSERT(!CurrentThreadIsPerformingGC());

    if (majorGCRequested()) {
        return;
    }

    majorGCTriggerReason = reason;
    rt->mainContextFromOwnThread()->requestInterrupt(InterruptReason::GC);
}

void
Nursery::requestMinorGC(JS::gcreason::Reason reason) const
{
    MOZ_ASSERT(CurrentThreadCanAccessRuntime(runtime()));
    MOZ_ASSERT(!CurrentThreadIsPerformingGC());

    if (minorGCRequested()) {
        return;
    }

    minorGCTriggerReason_ = reason;
    runtime()->mainContextFromOwnThread()->requestInterrupt(InterruptReason::GC);
}

// Return false if a pending GC may not occur because we are recording or
// replaying. GCs must occur at the same points when replaying as they did
// while recording, so any trigger reasons whose behavior is non-deterministic
// between recording and replaying are excluded here.
//
// Non-deterministic behaviors here are very narrow: the amount of malloc'ed
// memory or memory used by GC things may vary between recording or replaying,
// but other behaviors that would normally be non-deterministic (timers and so
// forth) are captured in the recording and replayed exactly.
static bool
RecordReplayCheckCanGC(JS::gcreason::Reason reason)
{
    if (!mozilla::recordreplay::IsRecordingOrReplaying()) {
        return true;
    }

    switch (reason) {
      case JS::gcreason::EAGER_ALLOC_TRIGGER:
      case JS::gcreason::LAST_DITCH:
      case JS::gcreason::TOO_MUCH_MALLOC:
      case JS::gcreason::ALLOC_TRIGGER:
      case JS::gcreason::DELAYED_ATOMS_GC:
      case JS::gcreason::TOO_MUCH_WASM_MEMORY:
        return false;

      default:
        break;
    }

    // If the above filter misses a non-deterministically triggered GC, this
    // assertion will fail.
    mozilla::recordreplay::RecordReplayAssert("RecordReplayCheckCanGC %d", (int) reason);
    return true;
}

bool
GCRuntime::triggerGC(JS::gcreason::Reason reason)
{
    /*
     * Don't trigger GCs if this is being called off the main thread from
     * onTooMuchMalloc().
     */
    if (!CurrentThreadCanAccessRuntime(rt)) {
        return false;
    }

    /* GC is already running. */
    if (JS::RuntimeHeapIsCollecting()) {
        return false;
    }

    // GCs can only be triggered in certain ways when recording/replaying.
    if (!RecordReplayCheckCanGC(reason)) {
        return false;
    }

    JS::PrepareForFullGC(rt->mainContextFromOwnThread());
    requestMajorGC(reason);
    return true;
}

void
GCRuntime::maybeAllocTriggerZoneGC(Zone* zone, const AutoLockGC& lock)
{
    if (!CurrentThreadCanAccessRuntime(rt)) {
        // Zones in use by a helper thread can't be collected.
        MOZ_ASSERT(zone->usedByHelperThread() || zone->isAtomsZone());
        return;
    }

    MOZ_ASSERT(!JS::RuntimeHeapIsCollecting());

    size_t usedBytes = zone->usage.gcBytes();
    size_t thresholdBytes = zone->threshold.gcTriggerBytes();

    if (usedBytes >= thresholdBytes) {
        // The threshold has been surpassed, immediately trigger a GC, which
        // will be done non-incrementally.
        triggerZoneGC(zone, JS::gcreason::ALLOC_TRIGGER, usedBytes, thresholdBytes);
        return;
    }

    bool wouldInterruptCollection = isIncrementalGCInProgress() && !zone->isCollecting();
    float zoneGCThresholdFactor =
        wouldInterruptCollection ? tunables.allocThresholdFactorAvoidInterrupt()
                                 : tunables.allocThresholdFactor();

    size_t igcThresholdBytes = thresholdBytes * zoneGCThresholdFactor;

    if (usedBytes >= igcThresholdBytes) {
        // Reduce the delay to the start of the next incremental slice.
        if (zone->gcDelayBytes < ArenaSize) {
            zone->gcDelayBytes = 0;
        } else {
            zone->gcDelayBytes -= ArenaSize;
        }

        if (!zone->gcDelayBytes) {
            // Start or continue an in progress incremental GC. We do this
            // to try to avoid performing non-incremental GCs on zones
            // which allocate a lot of data, even when incremental slices
            // can't be triggered via scheduling in the event loop.
            triggerZoneGC(zone, JS::gcreason::ALLOC_TRIGGER, usedBytes, igcThresholdBytes);

            // Delay the next slice until a certain amount of allocation
            // has been performed.
            zone->gcDelayBytes = tunables.zoneAllocDelayBytes();
            return;
        }
    }
}

bool
GCRuntime::triggerZoneGC(Zone* zone, JS::gcreason::Reason reason, size_t used, size_t threshold)
{
    MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));

    /* GC is already running. */
    if (JS::RuntimeHeapIsBusy()) {
        return false;
    }

    // GCs can only be triggered in certain ways when recording/replaying.
    if (!RecordReplayCheckCanGC(reason)) {
        return false;
    }

#ifdef JS_GC_ZEAL
    if (hasZealMode(ZealMode::Alloc)) {
        MOZ_RELEASE_ASSERT(triggerGC(reason));
        return true;
    }
#endif

    if (zone->isAtomsZone()) {
        /* We can't do a zone GC of just the atoms zone. */
        if (rt->hasHelperThreadZones()) {
            /* We can't collect atoms while off-thread parsing is allocating. */
            fullGCForAtomsRequested_ = true;
            return false;
        }
        stats().recordTrigger(used, threshold);
        MOZ_RELEASE_ASSERT(triggerGC(reason));
        return true;
    }

    stats().recordTrigger(used, threshold);
    PrepareZoneForGC(zone);
    requestMajorGC(reason);
    return true;
}

void
GCRuntime::maybeGC(Zone* zone)
{
    MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));

#ifdef JS_GC_ZEAL
    if (hasZealMode(ZealMode::Alloc) || hasZealMode(ZealMode::RootsChange)) {
        JS::PrepareForFullGC(rt->mainContextFromOwnThread());
        gc(GC_NORMAL, JS::gcreason::DEBUG_GC);
        return;
    }
#endif

    if (gcIfRequested()) {
        return;
    }

    float threshold = zone->threshold.eagerAllocTrigger(schedulingState.inHighFrequencyGCMode());
    float usedBytes = zone->usage.gcBytes();
    if (usedBytes > 1024 * 1024 && usedBytes >= threshold &&
        !isIncrementalGCInProgress() && !isBackgroundSweeping())
    {
        stats().recordTrigger(usedBytes, threshold);
        PrepareZoneForGC(zone);
        startGC(GC_NORMAL, JS::gcreason::EAGER_ALLOC_TRIGGER);
    }
}

void
GCRuntime::triggerFullGCForAtoms(JSContext* cx)
{
    MOZ_ASSERT(fullGCForAtomsRequested_);
    MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
    MOZ_ASSERT(!JS::RuntimeHeapIsCollecting());
    MOZ_ASSERT(cx->canCollectAtoms());
    fullGCForAtomsRequested_ = false;
    MOZ_RELEASE_ASSERT(triggerGC(JS::gcreason::DELAYED_ATOMS_GC));
}

// Do all possible decommit immediately from the current thread without
// releasing the GC lock or allocating any memory.
void
GCRuntime::decommitAllWithoutUnlocking(const AutoLockGC& lock)
{
    MOZ_ASSERT(emptyChunks(lock).count() == 0);
    for (ChunkPool::Iter chunk(availableChunks(lock)); !chunk.done(); chunk.next()) {
        chunk->decommitAllArenasWithoutUnlocking(lock);
    }
    MOZ_ASSERT(availableChunks(lock).verify());
}

void
GCRuntime::startDecommit()
{
    MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
    MOZ_ASSERT(!decommitTask.isRunning());

    // If we are allocating heavily enough to trigger "high frequency" GC, then
    // skip decommit so that we do not compete with the mutator.
    if (schedulingState.inHighFrequencyGCMode()) {
        return;
    }

    BackgroundDecommitTask::ChunkVector toDecommit;
    {
        AutoLockGC lock(rt);

        // Verify that all entries in the empty chunks pool are already decommitted.
        for (ChunkPool::Iter chunk(emptyChunks(lock)); !chunk.done(); chunk.next()) {
            MOZ_ASSERT(!chunk->info.numArenasFreeCommitted);
        }

        // Since we release the GC lock while doing the decommit syscall below,
        // it is dangerous to iterate the available list directly, as the active
        // thread could modify it concurrently. Instead, we build and pass an
        // explicit Vector containing the Chunks we want to visit.
        MOZ_ASSERT(availableChunks(lock).verify());
        for (ChunkPool::Iter iter(availableChunks(lock)); !iter.done(); iter.next()) {
            if (!toDecommit.append(iter.get())) {
                // The OOM handler does a full, immediate decommit.
                return onOutOfMallocMemory(lock);
            }
        }
    }
    decommitTask.setChunksToScan(toDecommit);

    if (sweepOnBackgroundThread && decommitTask.start()) {
        return;
    }

    decommitTask.runFromMainThread(rt);
}

void
js::gc::BackgroundDecommitTask::setChunksToScan(ChunkVector &chunks)
{
    MOZ_ASSERT(CurrentThreadCanAccessRuntime(runtime()));
    MOZ_ASSERT(!isRunning());
    MOZ_ASSERT(toDecommit.ref().empty());
    Swap(toDecommit.ref(), chunks);
}

void
js::gc::BackgroundDecommitTask::run()
{
    AutoLockGC lock(runtime());

    for (Chunk* chunk : toDecommit.ref()) {

        // The arena list is not doubly-linked, so we have to work in the free
        // list order and not in the natural order.
        while (chunk->info.numArenasFreeCommitted) {
            bool ok = chunk->decommitOneFreeArena(runtime(), lock);

            // If we are low enough on memory that we can't update the page
            // tables, or if we need to return for any other reason, break out
            // of the loop.
            if (cancel_ || !ok) {
                break;
            }
        }
    }
    toDecommit.ref().clearAndFree();

    ChunkPool toFree = runtime()->gc.expireEmptyChunkPool(lock);
    if (toFree.count()) {
        AutoUnlockGC unlock(lock);
        FreeChunkPool(toFree);
    }
}

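// Finalize the swept arenas for the given zones, free the given LifoAlloc
// blocks, and release any arenas that become empty as a result.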
void
GCRuntime::sweepBackgroundThings(ZoneList& zones, LifoAlloc& freeBlocks)
{
    freeBlocks.freeAll();

    if (zones.isEmpty()) {
        return;
    }

    FreeOp fop(nullptr);

    // Sweep zones in order. The atoms zone must be finalized last as other
    // zones may have direct pointers into it.
    while (!zones.isEmpty()) {
        Zone* zone = zones.removeFront();
        Arena* emptyArenas = nullptr;

        // We must finalize thing kinds in the order specified by
        // BackgroundFinalizePhases.
        for (auto phase : BackgroundFinalizePhases) {
            for (auto kind : phase.kinds) {
                Arena* arenas = zone->arenas.arenaListsToSweep(kind);
                MOZ_RELEASE_ASSERT(uintptr_t(arenas) != uintptr_t(-1));
                if (arenas) {
                    ArenaLists::backgroundFinalize(&fop, arenas, &emptyArenas);
                }
            }
        }

        AutoLockGC lock(rt);

        // Release any arenas that are now empty, dropping and reacquiring the GC
        // lock every so often to avoid blocking the main thread from
        // allocating chunks.
        static const size_t LockReleasePeriod = 32;
        size_t releaseCount = 0;
        Arena* next;
        for (Arena* arena = emptyArenas; arena; arena = next) {
            next = arena->next;

            // We already calculated the zone's GC trigger after foreground
            // sweeping finished. Now we must update this value.
            arena->zone->threshold.updateForRemovedArena(tunables);

            releaseArena(arena, lock);
            releaseCount++;
            if (releaseCount % LockReleasePeriod == 0) {
                lock.unlock();
                lock.lock();
            }
        }
    }
}

void
GCRuntime::assertBackgroundSweepingFinished()
{
#ifdef DEBUG
    {
        AutoLockHelperThreadState lock;
        MOZ_ASSERT(backgroundSweepZones.ref().isEmpty());
        MOZ_ASSERT(blocksToFreeAfterSweeping.ref().computedSizeOfExcludingThis() == 0);
    }

    for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
        for (auto i : AllAllocKinds()) {
            MOZ_ASSERT(!zone->arenas.arenaListsToSweep(i));
            MOZ_ASSERT(zone->arenas.doneBackgroundFinalize(i));
        }
    }
#endif
}

void
GCRuntime::queueZonesForBackgroundSweep(ZoneList& zones)
{
    AutoLockHelperThreadState lock;
    backgroundSweepZones.ref().transferFrom(zones);
    if (sweepOnBackgroundThread) {
        sweepTask.startIfIdle(lock);
    }
}

void
GCRuntime::freeUnusedLifoBlocksAfterSweeping(LifoAlloc* lifo)
{
    MOZ_ASSERT(JS::RuntimeHeapIsBusy());
    AutoLockHelperThreadState lock;
    blocksToFreeAfterSweeping.ref().transferUnusedFrom(lifo);
}

void
GCRuntime::freeAllLifoBlocksAfterSweeping(LifoAlloc* lifo)
{
    MOZ_ASSERT(JS::RuntimeHeapIsBusy());
    AutoLockHelperThreadState lock;
    blocksToFreeAfterSweeping.ref().transferFrom(lifo);
}

void
GCRuntime::freeAllLifoBlocksAfterMinorGC(LifoAlloc* lifo)
{
    blocksToFreeAfterMinorGC.ref().transferFrom(lifo);
}

BackgroundSweepTask::BackgroundSweepTask(JSRuntime* rt)
  : GCParallelTaskHelper(rt),
    done(false)
{}

bool
BackgroundSweepTask::isRunning() const
{
    AutoLockHelperThreadState lock;
    return isRunningWithLockHeld(lock);
}

bool
BackgroundSweepTask::isRunningWithLockHeld(const AutoLockHelperThreadState& lock) const
{
    return Base::isRunningWithLockHeld(lock) && !done;
}

void
BackgroundSweepTask::startIfIdle(AutoLockHelperThreadState& lock)
{
    MOZ_ASSERT(CanUseExtraThreads());

    if (isRunningWithLockHeld(lock)) {
        return;
    }

    // Join the previous invocation of the task. This will return immediately
    // if the thread has never been started.
    joinWithLockHeld(lock);

    done = false;

    if (!startWithLockHeld(lock)) {
        AutoUnlockHelperThreadState unlock(lock);
        runFromMainThread(runtime());
    }
}

void
BackgroundSweepTask::runFromMainThread(JSRuntime* rt)
{
    {
        AutoLockHelperThreadState lock;
        MOZ_ASSERT(!isRunningWithLockHeld(lock));
        joinWithLockHeld(lock);
        done = false;
    }

    Base::runFromMainThread(rt);
}

void
BackgroundSweepTask::run()
{
    AutoTraceLog logSweeping(TraceLoggerForCurrentThread(), TraceLogger_GCSweeping);

    AutoLockHelperThreadState lock;
    AutoSetThreadIsSweeping threadIsSweeping;

    MOZ_ASSERT(!done);

    runtime()->gc.sweepFromBackgroundThread(lock);

    // Signal to the main thread that we're finished, because we release the
    // lock again before GCParallelTask's state is changed to finished.
    done = true;
}

void
GCRuntime::sweepFromBackgroundThread(AutoLockHelperThreadState& lock)
{
    do {
        ZoneList zones;
        zones.transferFrom(backgroundSweepZones.ref());
        LifoAlloc freeLifoAlloc(JSContext::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE);
        freeLifoAlloc.transferFrom(&blocksToFreeAfterSweeping.ref());

        AutoUnlockHelperThreadState unlock(lock);
        sweepBackgroundThings(zones, freeLifoAlloc);

        // The main thread may call queueZonesForBackgroundSweep() while this is
        // running so we must check there is no more work after releasing the
        // lock.
    } while (!backgroundSweepZones.ref().isEmpty());
}

void
GCRuntime::waitBackgroundSweepEnd()
{
    sweepTask.join();

    // TODO: Improve assertion to work in incremental GC?
    if (!isIncrementalGCInProgress()) {
        assertBackgroundSweepingFinished();
    }
}

struct IsAboutToBeFinalizedFunctor {
    template <typename T> bool operator()(Cell** t) {
        mozilla::DebugOnly<const Cell*> prior = *t;
        bool result = IsAboutToBeFinalizedUnbarriered(reinterpret_cast<T**>(t));
        // Sweep should not have to deal with moved pointers, since moving GC
        // handles updating the UID table manually.
        MOZ_ASSERT(*t == prior);
        return result;
    }
};

/* static */ bool
UniqueIdGCPolicy::needsSweep(Cell** cell, uint64_t*)
{
    return DispatchTraceKindTyped(IsAboutToBeFinalizedFunctor(), (*cell)->getTraceKind(), cell);
}

void
JS::Zone::sweepUniqueIds()
{
    uniqueIds().sweep();
}

void
Realm::destroy(FreeOp* fop)
{
    JSRuntime* rt = fop->runtime();
    if (auto callback = rt->destroyRealmCallback) {
        callback(fop, this);
    }
    if (principals()) {
        JS_DropPrincipals(rt->mainContextFromOwnThread(), principals());
    }
    fop->delete_(this);
}

void
Compartment::destroy(FreeOp* fop)
{
    JSRuntime* rt = fop->runtime();
    if (auto callback = rt->destroyCompartmentCallback) {
        callback(fop, this);
    }
    fop->delete_(this);
    rt->gc.stats().sweptCompartment();
}

void
Zone::destroy(FreeOp* fop)
{
    MOZ_ASSERT(compartments().empty());
    fop->delete_(this);
    fop->runtime()->gc.stats().sweptZone();
}

/*
 * It's simpler if we preserve the invariant that every zone (except the atoms
 * zone) has at least one compartment, and every compartment has at least one
 * realm. If we know we're deleting the entire zone, then sweepCompartments is
 * allowed to delete all compartments. In this case, |keepAtleastOne| is false.
 * If any cells remain alive in the zone, set |keepAtleastOne| true to prohibit
 * sweepCompartments from deleting every compartment. Instead, it preserves an
 * arbitrary compartment in the zone.
 */
void
Zone::sweepCompartments(FreeOp* fop, bool keepAtleastOne, bool destroyingRuntime)
{
    MOZ_ASSERT(!compartments().empty());
    MOZ_ASSERT_IF(destroyingRuntime, !keepAtleastOne);

    Compartment** read = compartments().begin();
    Compartment** end = compartments().end();
    Compartment** write = read;
    while (read < end) {
        Compartment* comp = *read++;

        /*
         * Don't delete the last compartment and realm if keepAtleastOne is
         * still true, meaning all the other compartments were deleted.
         */
        bool keepAtleastOneRealm = read == end && keepAtleastOne;
        comp->sweepRealms(fop, keepAtleastOneRealm, destroyingRuntime);

        if (!comp->realms().empty()) {
            *write++ = comp;
            keepAtleastOne = false;
        } else {
            comp->destroy(fop);
        }
    }
    compartments().shrinkTo(write - compartments().begin());
    MOZ_ASSERT_IF(keepAtleastOne, !compartments().empty());
    MOZ_ASSERT_IF(destroyingRuntime, compartments().empty());
}

void
Compartment::sweepRealms(FreeOp* fop, bool keepAtleastOne, bool destroyingRuntime)
{
    MOZ_ASSERT(!realms().empty());
    MOZ_ASSERT_IF(destroyingRuntime, !keepAtleastOne);

    Realm** read = realms().begin();
    Realm** end = realms().end();
    Realm** write = read;
    while (read < end) {
        Realm* realm = *read++;

        /*
         * Don't delete the last realm if keepAtleastOne is still true, meaning
         * all the other realms were deleted.
         */
        bool dontDelete = read == end && keepAtleastOne;
        if ((realm->marked() || dontDelete) && !destroyingRuntime) {
            *write++ = realm;
            keepAtleastOne = false;
        } else {
            realm->destroy(fop);
        }
    }
    realms().shrinkTo(write - realms().begin());
    MOZ_ASSERT_IF(keepAtleastOne, !realms().empty());
    MOZ_ASSERT_IF(destroyingRuntime, realms().empty());
}

void
GCRuntime::deleteEmptyZone(Zone* zone)
{
    MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt));
    MOZ_ASSERT(zone->compartments().empty());
    for (auto& i : zones()) {
        if (i == zone) {
            zones().erase(&i);
            zone->destroy(rt->defaultFreeOp());
            return;
        }
    }
    MOZ_CRASH("Zone not found");
}

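// Sweep the runtime's zone list, destroying any zones whose collection has
// finished and which are now dead (or all zones if the runtime itself is being
// destroyed).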
void
GCRuntime::sweepZones(FreeOp* fop, bool destroyingRuntime)
{
    MOZ_ASSERT_IF(destroyingRuntime, numActiveZoneIters == 0);
    MOZ_ASSERT_IF(destroyingRuntime, arenasEmptyAtShutdown);

    if (numActiveZoneIters) {
        return;
    }

    assertBackgroundSweepingFinished();

    Zone** read = zones().begin();
    Zone** end = zones().end();
    Zone** write = read;

    while (read < end) {
        Zone* zone = *read++;

        if (zone->wasGCStarted()) {
            MOZ_ASSERT(!zone->isQueuedForBackgroundSweep());
            const bool zoneIsDead = zone->arenas.arenaListsAreEmpty() &&
                                    !zone->hasMarkedRealms();
            if (zoneIsDead || destroyingRuntime) {
                // We have just finished sweeping, so we should have freed any
                // empty arenas back to their Chunk for future allocation.
                zone->arenas.checkEmptyFreeLists();

                // We are about to delete the Zone; this will leave the Zone*
                // in the arena header dangling if there are any arenas
                // remaining at this point.
#ifdef DEBUG
                if (!zone->arenas.checkEmptyArenaLists()) {
                    arenasEmptyAtShutdown = false;
                }
#endif

                zone->sweepCompartments(fop, false, destroyingRuntime);
                MOZ_ASSERT(zone->compartments().empty());
                MOZ_ASSERT_IF(arenasEmptyAtShutdown, zone->typeDescrObjects().empty());
                zone->destroy(fop);
                continue;
            }
            zone->sweepCompartments(fop, true, destroyingRuntime);
        }
        *write++ = zone;
    }
    zones().shrinkTo(write - zones().begin());
}

#ifdef DEBUG
static const char*
AllocKindToAscii(AllocKind kind)
{
    switch (kind) {
#define MAKE_CASE(allocKind, traceKind, type, sizedType, bgFinal, nursery, compact) \
      case AllocKind:: allocKind: return #allocKind;
FOR_EACH_ALLOCKIND(MAKE_CASE)
#undef MAKE_CASE

      default:
        MOZ_CRASH("Unknown AllocKind in AllocKindToAscii");
    }
}
#endif // DEBUG

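// Check that the arena list for the given alloc kind is empty. In debug
// builds, report any cells that are still alive, up to a limit that can be set
// with the JS_GC_MAX_LIVE_CELLS environment variable (default 20).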
bool
ArenaLists::checkEmptyArenaList(AllocKind kind)
{
    bool isEmpty = true;
#ifdef DEBUG
    size_t numLive = 0;
    if (!arenaLists(kind).isEmpty()) {
        isEmpty = false;
        size_t maxCells = 20;
        char* env = getenv("JS_GC_MAX_LIVE_CELLS");
        if (env && *env) {
            maxCells = atol(env);
        }
        for (Arena* current = arenaLists(kind).head(); current; current = current->next) {
            for (ArenaCellIterUnderGC i(current); !i.done(); i.next()) {
                TenuredCell* t = i.getCell();
                MOZ_ASSERT(t->isMarkedAny(), "unmarked cells should have been finalized");
                if (++numLive <= maxCells) {
                    fprintf(stderr, "ERROR: GC found live Cell %p of kind %s at shutdown\n",
                            t, AllocKindToAscii(kind));
                }
            }
        }
        if (numLive > 0) {
            fprintf(stderr, "ERROR: GC found %zu live Cells at shutdown\n", numLive);
        } else {
            fprintf(stderr, "ERROR: GC found empty Arenas at shutdown\n");
        }
    }
#endif // DEBUG
    return isEmpty;
}

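// RAII class that starts a GC parallel task in its constructor and joins it in
// its destructor.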
class MOZ_RAII js::gc::AutoRunParallelTask : public GCParallelTask
{
    gcstats::PhaseKind phase_;
    AutoLockHelperThreadState& lock_;

  public:
    AutoRunParallelTask(JSRuntime* rt, TaskFunc func, gcstats::PhaseKind phase,
                        AutoLockHelperThreadState& lock)
      : GCParallelTask(rt, func),
        phase_(phase),
        lock_(lock)
    {
        runtime()->gc.startTask(*this, phase_, lock_);
    }

    ~AutoRunParallelTask() {
        runtime()->gc.joinTask(*this, phase_, lock_);
    }
};

void
GCRuntime::purgeRuntimeForMinorGC()
{
    // If external strings become nursery allocable, remember to call
    // zone->externalStringCache().purge() (and delete this assert.)
    MOZ_ASSERT(!IsNurseryAllocable(AllocKind::EXTERNAL_STRING));

    for (ZonesIter zone(rt, SkipAtoms); !zone.done(); zone.next()) {
        zone->functionToStringCache().purge();
    }

    rt->caches().purgeForMinorGC(rt);
}

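// Discard caches and other transient data structures so that they do not keep
// otherwise unreachable things alive over the collection.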
void
GCRuntime::purgeRuntime()
{
    gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::PURGE);

    for (GCRealmsIter realm(rt); !realm.done(); realm.next()) {
        realm->purge();
    }

    for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
        zone->purgeAtomCacheOrDefer();
        zone->externalStringCache().purge();
        zone->functionToStringCache().purge();
    }

    JSContext* cx = rt->mainContextFromOwnThread();
    freeUnusedLifoBlocksAfterSweeping(&cx->tempLifoAlloc());
    cx->interpreterStack().purge(rt);
    cx->frontendCollectionPool().purge();

    rt->caches().purge();

    if (auto cache = rt->maybeThisRuntimeSharedImmutableStrings()) {
        cache->purge();
    }

    MOZ_ASSERT(unmarkGrayStack.empty());
    unmarkGrayStack.clearAndFree();

    // If we're the main runtime, tell helper threads to free their unused
    // memory when they are next idle.
    if (!rt->parentRuntime) {
        HelperThreadState().triggerFreeUnusedMemory();
    }
}

bool
GCRuntime::shouldPreserveJITCode(Realm* realm, const TimeStamp &currentTime,
                                 JS::gcreason::Reason reason, bool canAllocateMoreCode)
{
    static const auto oneSecond = TimeDuration::FromSeconds(1);

    if (cleanUpEverything) {
        return false;
    }
    if (!canAllocateMoreCode) {
        return false;
    }

    if (alwaysPreserveCode) {
        return true;
    }
    if (realm->preserveJitCode()) {
        return true;
    }

    const auto &lastAnimationTime = realm->lastAnimationTime.ref();
    if (!lastAnimationTime.IsNull() && lastAnimationTime + oneSecond >= currentTime) {
        return true;
    }

    if (reason == JS::gcreason::DEBUG_GC) {
        return true;
    }

    return false;
}

#ifdef DEBUG
class CompartmentCheckTracer : public JS::CallbackTracer
{
    void onChild(const JS::GCCellPtr& thing) override;

  public:
    explicit CompartmentCheckTracer(JSRuntime* rt)
      : JS::CallbackTracer(rt), src(nullptr), zone(nullptr), compartment(nullptr)
    {}

    Cell* src;
    JS::TraceKind srcKind;
    Zone* zone;
    Compartment* compartment;
};

namespace {
struct IsDestComparatorFunctor {
    JS::GCCellPtr dst_;
    explicit IsDestComparatorFunctor(JS::GCCellPtr dst) : dst_(dst) {}

    template <typename T> bool operator()(T* t) { return (*t) == dst_.asCell(); }
};
} // namespace (anonymous)

static bool
InCrossCompartmentMap(JSObject* src, JS::GCCellPtr dst)
{
    Compartment* srccomp = src->compartment();

    if (dst.is<JSObject>()) {
        Value key = ObjectValue(dst.as<JSObject>());
        if (WrapperMap::Ptr p = srccomp->lookupWrapper(key)) {
            if (*p->value().unsafeGet() == ObjectValue(*src)) {
                return true;
            }
        }
    }

    /*
     * If the cross-compartment edge is caused by the debugger, then we don't
     * know the right hashtable key, so we have to iterate.
     */
    for (Compartment::WrapperEnum e(srccomp); !e.empty(); e.popFront()) {
        if (e.front().mutableKey().applyToWrapped(IsDestComparatorFunctor(dst)) &&
            ToMarkable(e.front().value().unbarrieredGet()) == src)
        {
            return true;
        }
    }

    return false;
}

struct MaybeCompartmentFunctor {
    template <typename T> JS::Compartment* operator()(T* t) { return t->maybeCompartment(); }
};

void
CompartmentCheckTracer::onChild(const JS::GCCellPtr& thing)
{
    Compartment* comp = DispatchTyped(MaybeCompartmentFunctor(), thing);
    if (comp && compartment) {
        MOZ_ASSERT(comp == compartment ||
                   (srcKind == JS::TraceKind::Object &&
                    InCrossCompartmentMap(static_cast<JSObject*>(src), thing)));
    } else {
        TenuredCell* tenured = TenuredCell::fromPointer(thing.asCell());
        Zone* thingZone = tenured->zoneFromAnyThread();
        MOZ_ASSERT(thingZone == zone || thingZone->isAtomsZone());
    }
}

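// Check that every edge traced from a tenured cell either stays within the
// cell's compartment (or its zone, for cells without a compartment) or is
// accounted for in the cross-compartment wrapper map.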
void
GCRuntime::checkForCompartmentMismatches()
{
    JSContext* cx = rt->mainContextFromOwnThread();
    if (cx->disableStrictProxyCheckingCount) {
        return;
    }

    CompartmentCheckTracer trc(rt);
    AutoAssertEmptyNursery empty(cx);
    for (ZonesIter zone(rt, SkipAtoms); !zone.done(); zone.next()) {
        trc.zone = zone;
        for (auto thingKind : AllAllocKinds()) {
            for (auto i = zone->cellIter<TenuredCell>(thingKind, empty); !i.done(); i.next()) {
                trc.src = i.getCell();
                trc.srcKind = MapAllocToTraceKind(thingKind);
                trc.compartment = DispatchTraceKindTyped(MaybeCompartmentFunctor(),
                                                         trc.src, trc.srcKind);
                js::TraceChildren(&trc, trc.src, trc.srcKind);
            }
        }
    }
}
#endif

static void
RelazifyFunctions(Zone* zone, AllocKind kind)
{
    MOZ_ASSERT(kind == AllocKind::FUNCTION ||
               kind == AllocKind::FUNCTION_EXTENDED);

    JSRuntime* rt = zone->runtimeFromMainThread();
    AutoAssertEmptyNursery empty(rt->mainContextFromOwnThread());

    for (auto i = zone->cellIter<JSObject>(kind, empty); !i.done(); i.next()) {
        JSFunction* fun = &i->as<JSFunction>();
        if (fun->hasScript()) {
            fun->maybeRelazify(rt);
        }
    }
}

static bool
ShouldCollectZone(Zone* zone, JS::gcreason::Reason reason)
{
    // If we are repeating a GC because we noticed dead compartments haven't
    // been collected, then only collect zones containing those compartments.
    if (reason == JS::gcreason::COMPARTMENT_REVIVED) {
        for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) {
            if (comp->gcState.scheduledForDestruction) {
                return true;
            }
        }

        return false;
    }

    // Otherwise we only collect scheduled zones.
    if (!zone->isGCScheduled()) {
        return false;
    }

    // If canCollectAtoms() is false then either an instance of AutoKeepAtoms is
    // currently on the stack or parsing is currently happening on another
    // thread. In either case we don't have information about which atoms are
    // roots, so we must skip collecting atoms.
    //
    // Note that this only affects the first slice of an incremental GC since
    // root marking is completed before we return to the mutator.
    //
    // Off-thread parsing is inhibited after the start of GC which prevents
    // races between creating atoms during parsing and sweeping atoms on the
    // main thread.
    //
    // Otherwise, we always schedule a GC in the atoms zone so that atoms which
    // the other collected zones are using are marked, and we can update the
    // set of atoms in use by the other collected zones at the end of the GC.
    if (zone->isAtomsZone()) {
        return TlsContext.get()->canCollectAtoms();
    }

    return zone->canCollect();
}

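// Work out which zones will be collected in this GC, update their state
// accordingly and decide which zones should preserve their JIT code. Returns
// true if at least one zone is scheduled for collection.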
bool
GCRuntime::prepareZonesForCollection(JS::gcreason::Reason reason, bool* isFullOut)
{
#ifdef DEBUG
    /* Assert that zone state is as we expect */
    for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
        MOZ_ASSERT(!zone->isCollecting());
        MOZ_ASSERT_IF(!zone->isAtomsZone(), !zone->compartments().empty());
        for (auto i : AllAllocKinds()) {
            MOZ_ASSERT(!zone->arenas.arenaListsToSweep(i));
        }
    }
#endif

    *isFullOut = true;
    bool any = false;

    auto currentTime = ReallyNow();

    for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
        /* Set up which zones will be collected. */
        if (ShouldCollectZone(zone, reason)) {
            MOZ_ASSERT(zone->canCollect());
            any = true;
            zone->changeGCState(Zone::NoGC, Zone::Mark);
        } else {
            *isFullOut = false;
        }

        zone->setPreservingCode(false);
    }

    // Discard JIT code more aggressively if the process is approaching its
    // executable code limit.
    bool canAllocateMoreCode = jit::CanLikelyAllocateMoreExecutableMemory();

    for (CompartmentsIter c(rt); !c.done(); c.next()) {
        c->gcState.scheduledForDestruction = false;
        c->gcState.maybeAlive = false;
        c->gcState.hasEnteredRealm = false;
        for (RealmsInCompartmentIter r(c); !r.done(); r.next()) {
            r->unmark();
            if (r->shouldTraceGlobal() || !r->zone()->isGCScheduled()) {
                c->gcState.maybeAlive = true;
            }
            if (shouldPreserveJITCode(r, currentTime, reason, canAllocateMoreCode)) {
                r->zone()->setPreservingCode(true);
            }
            if (r->hasBeenEnteredIgnoringJit()) {
                c->gcState.hasEnteredRealm = true;
            }
        }
    }

    if (!cleanUpEverything && canAllocateMoreCode) {
        jit::JitActivationIterator activation(rt->mainContextFromOwnThread());
        if (!activation.done()) {
            activation->compartment()->zone()->setPreservingCode(true);
        }
    }

    /*
     * Check that we do collect the atoms zone if we triggered a GC for that
     * purpose.
     */
    MOZ_ASSERT_IF(reason == JS::gcreason::DELAYED_ATOMS_GC, atomsZone->isGCMarking());

    /* Check that at least one zone is scheduled for collection. */
    return any;
}

static void
DiscardJITCodeForGC(JSRuntime* rt)
{
    js::CancelOffThreadIonCompile(rt, JS::Zone::Mark);
    for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
        gcstats::AutoPhase ap(rt->gc.stats(), gcstats::PhaseKind::MARK_DISCARD_CODE);
        zone->discardJitCode(rt->defaultFreeOp(),
                             /* discardBaselineCode = */ true,
                             /* releaseTypes = */ true);
    }
}

static void
RelazifyFunctionsForShrinkingGC(JSRuntime* rt)
{
    gcstats::AutoPhase ap(rt->gc.stats(), gcstats::PhaseKind::RELAZIFY_FUNCTIONS);
    for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
        if (zone->isSelfHostingZone()) {
            continue;
        }
        RelazifyFunctions(zone, AllocKind::FUNCTION);
        RelazifyFunctions(zone, AllocKind::FUNCTION_EXTENDED);
    }
}

static void
PurgeShapeTablesForShrinkingGC(JSRuntime* rt)
{
    gcstats::AutoPhase ap(rt->gc.stats(), gcstats::PhaseKind::PURGE_SHAPE_TABLES);
    for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
        if (!CanRelocateZone(zone) || zone->keepShapeTables()) {
            continue;
        }
        for (auto baseShape = zone->cellIter<BaseShape>(); !baseShape.done(); baseShape.next()) {
            baseShape->maybePurgeTable();
        }
    }
}

static void
UnmarkCollectedZones(GCParallelTask* task)
{
    JSRuntime* rt = task->runtime();
    for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
        /* Unmark everything in the zones being collected. */
        zone->arenas.unmarkAll();
    }

    for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
        /* Unmark all weak maps in the zones being collected. */
        WeakMapBase::unmarkZone(zone);
    }
}

static void
BufferGrayRoots(GCParallelTask* task)
{
    task->runtime()->gc.bufferGrayRoots();
}

bool
GCRuntime::beginMarkPhase(JS::gcreason::Reason reason, AutoGCSession& session)
{
#ifdef DEBUG
    if (fullCompartmentChecks) {
        checkForCompartmentMismatches();
    }
#endif

    if (!prepareZonesForCollection(reason, &isFull.ref())) {
        return false;
    }

    /* Check it's safe to access the atoms zone if we are collecting it. */
    if (atomsZone->isCollecting()) {
        session.maybeCheckAtomsAccess.emplace(rt);
    }

    /*
     * In an incremental GC, clear the arena free lists to ensure that subsequent
     * allocations refill them and end up marking new cells black. See
     * arenaAllocatedDuringGC().
     */
    for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
        zone->arenas.clearFreeLists();
    }

    marker.start();
    GCMarker* gcmarker = &marker;

    {
        gcstats::AutoPhase ap1(stats(), gcstats::PhaseKind::PREPARE);
        AutoLockHelperThreadState helperLock;

        /*
         * Clear all mark state for the zones we are collecting. This is linear
         * in the size of the heap we are collecting and so can be slow. Do this
         * in parallel with the rest of this block.
         */
        AutoRunParallelTask
            unmarkCollectedZones(rt, UnmarkCollectedZones, gcstats::PhaseKind::UNMARK, helperLock);

        /*
         * Buffer gray roots for incremental collections. This is linear in the
         * number of roots which can be in the tens of thousands. Do this in
         * parallel with the rest of this block.
         */
        Maybe<AutoRunParallelTask> bufferGrayRoots;
        if (isIncremental) {
            bufferGrayRoots.emplace(rt, BufferGrayRoots, gcstats::PhaseKind::BUFFER_GRAY_ROOTS, helperLock);
        }
        AutoUnlockHelperThreadState unlock(helperLock);

        // Discard JIT code. For incremental collections, the sweep phase will
        // also discard JIT code.
        DiscardJITCodeForGC(rt);

        /*
         * Relazify functions after discarding JIT code (we can't relazify
         * functions with JIT code) and before the actual mark phase, so that
         * the current GC can collect the JSScripts we're unlinking here.  We do
         * this only when we're performing a shrinking GC, as too much
         * relazification can cause performance issues when we have to reparse
         * the same functions over and over.
         */
        if (invocationKind == GC_SHRINK) {
            RelazifyFunctionsForShrinkingGC(rt);
            PurgeShapeTablesForShrinkingGC(rt);
        }

        /*
         * We must purge the runtime at the beginning of an incremental GC. The
         * danger if we purge later is that the snapshot invariant of
         * incremental GC will be broken, as follows. If some object is
         * reachable only through some cache (say the dtoaCache) then it will
         * not be part of the snapshot.  If we purge after root marking, then
         * the mutator could obtain a pointer to the object and start using
         * it. This object might never be marked, so a GC hazard would exist.
         */
        purgeRuntime();
    }

    /*
     * Mark phase.
     */
    gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::MARK);
    traceRuntimeForMajorGC(gcmarker, session);

    if (isIncremental) {
        markCompartments();
    }

    updateMallocCountersOnGCStart();

    /*
     * Process any queued source compressions during the start of a major
     * GC.
     */
    {
        AutoLockHelperThreadState helperLock;
        HelperThreadState().startHandlingCompressionTasks(helperLock);
    }

    return true;
}

void
GCRuntime::markCompartments()
{
    gcstats::AutoPhase ap1(stats(), gcstats::PhaseKind::MARK_ROOTS);
    gcstats::AutoPhase ap2(stats(), gcstats::PhaseKind::MARK_COMPARTMENTS);

    /*
     * This code ensures that if a compartment is "dead", then it will be
     * collected in this GC. A compartment is considered dead if its maybeAlive
     * flag is false. The maybeAlive flag is set if:
     *
     *   (1) the compartment has been entered (set in beginMarkPhase() above)
     *   (2) the compartment is not being collected (set in beginMarkPhase()
     *       above)
     *   (3) an object in the compartment was marked during root marking, either
     *       as a black root or a gray root (set in RootMarking.cpp), or
     *   (4) the compartment has incoming cross-compartment edges from another
     *       compartment that has maybeAlive set (set by this method).
     *
     * If the maybeAlive flag is false, then we set the scheduledForDestruction
     * flag.
     * At the end of the GC, we look for compartments where
     * scheduledForDestruction is true. These are compartments that were somehow
     * "revived" during the incremental GC. If any are found, we do a special,
     * non-incremental GC of those compartments to try to collect them.
     *
     * Compartments can be revived for a variety of reasons. One reason is bug
     * 811587, where a reflector that was dead can be revived by DOM code that
     * still refers to the underlying DOM node.
     *
     * Read barriers and allocations can also cause revival. This might happen
     * during a function like JS_TransplantObject, which iterates over all
     * compartments, live or dead, and operates on their objects. See bug 803376
     * for details on this problem. To avoid the problem, we try to avoid
     * allocation and read barriers during JS_TransplantObject and the like.
     */

    /* Propagate the maybeAlive flag via cross-compartment edges. */

    Vector<Compartment*, 0, js::SystemAllocPolicy> workList;

    for (CompartmentsIter comp(rt); !comp.done(); comp.next()) {
        if (comp->gcState.maybeAlive) {
            if (!workList.append(comp)) {
                return;
            }
        }
    }

    while (!workList.empty()) {
        Compartment* comp = workList.popCopy();
        for (Compartment::NonStringWrapperEnum e(comp); !e.empty(); e.popFront()) {
            Compartment* dest = e.front().mutableKey().compartment();
            if (dest && !dest->gcState.maybeAlive) {
                dest->gcState.maybeAlive = true;
                if (!workList.append(dest)) {
                    return;
                }
            }
        }
    }

    /* Set scheduleForDestruction based on maybeAlive. */

    for (GCCompartmentsIter comp(rt); !comp.done(); comp.next()) {
        MOZ_ASSERT(!comp->gcState.scheduledForDestruction);
        if (!comp->gcState.maybeAlive) {
            comp->gcState.scheduledForDestruction = true;
        }
    }
}

void
GCRuntime::updateMallocCountersOnGCStart()
{
    // Update the malloc counters for any zones we are collecting.
    for (GCZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
        zone->updateAllGCMallocCountersOnGCStart();
    }

    // Update the runtime malloc counter only if we are doing a full GC.
    if (isFull) {
        mallocCounter.updateOnGCStart();
    }
}

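// Mark weak references by repeatedly marking weak maps, Debugger edges and the
// jitcode global table until no further marking is possible.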
template <class ZoneIterT>
void
GCRuntime::markWeakReferences(gcstats::PhaseKind phase)
{
    MOZ_ASSERT(marker.isDrained());

    gcstats::AutoPhase ap1(stats(), phase);

    marker.enterWeakMarkingMode();

    // TODO bug 1167452: Make weak marking incremental
    drainMarkStack();

    for (;;) {
        bool markedAny = false;
        if (!marker.isWeakMarkingTracer()) {
            for (ZoneIterT zone(rt); !zone.done(); zone.next()) {
                markedAny |= WeakMapBase::markZoneIteratively(zone, &marker);
            }
        }
        markedAny |= Debugger::markIteratively(&marker);
        markedAny |= jit::JitRuntime::MarkJitcodeGlobalTableIteratively(&marker);

        if (!markedAny) {
            break;
        }

        drainMarkStack();
    }
    MOZ_ASSERT(marker.isDrained());

    marker.leaveWeakMarkingMode();
}

void
GCRuntime::markWeakReferencesInCurrentGroup(gcstats::PhaseKind phase)
{
    markWeakReferences<SweepGroupZonesIter>(phase);
}

template <class ZoneIterT>
void
GCRuntime::markGrayReferences(gcstats::PhaseKind phase)
{
    gcstats::AutoPhase ap(stats(), phase);
    if (hasValidGrayRootsBuffer()) {
        for (ZoneIterT zone(rt); !zone.done(); zone.next()) {
            markBufferedGrayRoots(zone);
        }
    } else {
        MOZ_ASSERT(!isIncremental);
        if (JSTraceDataOp op = grayRootTracer.op) {
            (*op)(&marker, grayRootTracer.data);
        }
    }
    drainMarkStack();
}

void
GCRuntime::markGrayReferencesInCurrentGroup(gcstats::PhaseKind phase)
{
    markGrayReferences<SweepGroupZonesIter>(phase);
}

void
GCRuntime::markAllWeakReferences(gcstats::PhaseKind phase)
{
    markWeakReferences<GCZonesIter>(phase);
}

void
GCRuntime::markAllGrayReferences(gcstats::PhaseKind phase)
{
    markGrayReferences<GCZonesIter>(phase);
}

#ifdef JS_GC_ZEAL

struct GCChunkHasher {
    typedef gc::Chunk* Lookup;

    /*
     * Strip zeros for better distribution after multiplying by the golden
     * ratio.
     */
    static HashNumber hash(gc::Chunk* chunk) {
        MOZ_ASSERT(!(uintptr_t(chunk) & gc::ChunkMask));
        return HashNumber(uintptr_t(chunk) >> gc::ChunkShift);
    }

    static bool match(gc::Chunk* k, gc::Chunk* l) {
        MOZ_ASSERT(!(uintptr_t(k) & gc::ChunkMask));
        MOZ_ASSERT(!(uintptr_t(l) & gc::ChunkMask));
        return k == l;
    }
};

class js::gc::MarkingValidator
{
  public:
    explicit MarkingValidator(GCRuntime* gc);
    void nonIncrementalMark(AutoGCSession& session);
    void validate();

  private:
    GCRuntime* gc;
    bool initialized;

    using BitmapMap = HashMap<Chunk*, UniquePtr<ChunkBitmap>, GCChunkHasher, SystemAllocPolicy>;
    BitmapMap map;
};

js::gc::MarkingValidator::MarkingValidator(GCRuntime* gc)
  : gc(gc),
    initialized(false)
{}

void
js::gc::MarkingValidator::nonIncrementalMark(AutoGCSession& session)
{
    /*
     * Perform a non-incremental mark for all collecting zones and record
     * the results for later comparison.
     *
     * Currently this does not validate gray marking.
     */

    JSRuntime* runtime = gc->rt;
    GCMarker* gcmarker = &gc->marker;

    gc->waitBackgroundSweepEnd();

    /* Wait for off-thread parsing which can allocate. */
    HelperThreadState().waitForAllThreads();

    /* Save existing mark bits. */
    {
        AutoLockGC lock(runtime);
        for (auto chunk = gc->allNonEmptyChunks(lock); !chunk.done(); chunk.next()) {
            ChunkBitmap* bitmap = &chunk->bitmap;
            auto entry = MakeUnique<ChunkBitmap>();
            if (!entry) {
                return;
            }

            memcpy((void*)entry->bitmap, (void*)bitmap->bitmap, sizeof(bitmap->bitmap));

            if (!map.putNew(chunk, std::move(entry))) {
                return;
            }
        }
    }

    /*
     * Temporarily clear the weakmaps' mark flags for the compartments we are
     * collecting.
     */

    WeakMapSet markedWeakMaps;

    /*
     * For saving, smush all of the keys into one big table and split them back
     * up into per-zone tables when restoring.
     */
    gc::WeakKeyTable savedWeakKeys(SystemAllocPolicy(), runtime->randomHashCodeScrambler());
    if (!savedWeakKeys.init()) {
        return;
    }

    for (GCZonesIter zone(runtime); !zone.done(); zone.next()) {
        if (!WeakMapBase::saveZoneMarkedWeakMaps(zone, markedWeakMaps)) {
            return;
        }

        AutoEnterOOMUnsafeRegion oomUnsafe;
        for (gc::WeakKeyTable::Range r = zone->gcWeakKeys().all(); !r.empty(); r.popFront()) {
            if (!savedWeakKeys.put(std::move(r.front().key), std::move(r.front().value))) {
                oomUnsafe.crash("saving weak keys table for validator");
            }
        }

        if (!zone->gcWeakKeys().clear()) {
            oomUnsafe.crash("clearing weak keys table for validator");
        }
    }

    /*
     * After this point, the function should run to completion, so we shouldn't
     * do anything fallible.
     */
    initialized = true;

    /* Re-do all the marking, but non-incrementally. */
    js::gc::State state = gc->incrementalState;
    gc->incrementalState = State::MarkRoots;

    {
        gcstats::AutoPhase ap(gc->stats(), gcstats::PhaseKind::PREPARE);

        {
            gcstats::AutoPhase ap(gc->stats(), gcstats::PhaseKind::UNMARK);

            for (GCZonesIter zone(runtime); !zone.done(); zone.next()) {
                WeakMapBase::unmarkZone(zone);
            }

            MOZ_ASSERT(gcmarker->isDrained());
            gcmarker->reset();

            AutoLockGC lock(runtime);
            for (auto chunk = gc->allNonEmptyChunks(lock); !chunk.done(); chunk.next()) {
                chunk->bitmap.clear();
            }
        }
    }

    {
        gcstats::AutoPhase ap(gc->stats(), gcstats::PhaseKind::MARK);

        gc->traceRuntimeForMajorGC(gcmarker, session);

        gc->incrementalState = State::Mark;
        gc->drainMarkStack();
    }

    gc->incrementalState = State::Sweep;
    {
        gcstats::AutoPhase ap1(gc->stats(), gcstats::PhaseKind::SWEEP);
        gcstats::AutoPhase ap2(gc->stats(), gcstats::PhaseKind::SWEEP_MARK);

        gc->markAllWeakReferences(gcstats::PhaseKind::SWEEP_MARK_WEAK);

        /* Update zone state for gray marking. */
        for (GCZonesIter zone(runtime); !zone.done(); zone.next()) {
            zone->changeGCState(Zone::Mark, Zone::MarkGray);
        }
        gc->marker.setMarkColorGray();

        gc->markAllGrayReferences(gcstats::PhaseKind::SWEEP_MARK_GRAY);
        gc->markAllWeakReferences(gcstats::PhaseKind::SWEEP_MARK_GRAY_WEAK);

        /* Restore zone state. */
        for (GCZonesIter zone(runtime); !zone.done(); zone.next()) {
            zone->changeGCState(Zone::MarkGray, Zone::Mark);
        }
        MOZ_ASSERT(gc->marker.isDrained());
        gc->marker.setMarkColorBlack();
    }

    /* Take a copy of the non-incremental mark state and restore the original. */
    {
        AutoLockGC lock(runtime);
        for (auto chunk = gc->allNonEmptyChunks(lock); !chunk.done(); chunk.next()) {
            ChunkBitmap* bitmap = &chunk->bitmap;
            ChunkBitmap* entry = map.lookup(chunk)->value().get();
            Swap(*entry, *bitmap);
        }
    }

    for (GCZonesIter zone(runtime); !zone.done(); zone.next()) {
        WeakMapBase::unmarkZone(zone);
        AutoEnterOOMUnsafeRegion oomUnsafe;
        if (!zone->gcWeakKeys().clear()) {
            oomUnsafe.crash("clearing weak keys table for validator");
        }
    }

    WeakMapBase::restoreMarkedWeakMaps(markedWeakMaps);

    for (gc::WeakKeyTable::Range r = savedWeakKeys.all(); !r.empty(); r.popFront()) {
        AutoEnterOOMUnsafeRegion oomUnsafe;
        Zone* zone = gc::TenuredCell::fromPointer(r.front().key.asCell())->zone();
        if (!zone->gcWeakKeys().put(std::move(r.front().key), std::move(r.front().value))) {
            oomUnsafe.crash("restoring weak keys table for validator");
        }
    }

    gc->incrementalState = state;
}

void
js::gc::MarkingValidator::validate()
{
    /*
     * Validates the incremental marking by comparing the mark bits of the
     * collected zones to those previously recorded for a non-incremental mark.
     */

    if (!initialized) {
        return;
    }

    gc->waitBackgroundSweepEnd();

    AutoLockGC lock(gc->rt);
    for (auto chunk = gc->allNonEmptyChunks(lock); !chunk.done(); chunk.next()) {
        BitmapMap::Ptr ptr = map.lookup(chunk);
        if (!ptr) {
            continue;  /* Allocated after we did the non-incremental mark. */
        }

        ChunkBitmap* bitmap = ptr->value().get();
        ChunkBitmap* incBitmap = &chunk->bitmap;

        for (size_t i = 0; i < ArenasPerChunk; i++) {
            if (chunk->decommittedArenas.get(i)) {
                continue;
            }
            Arena* arena = &chunk->arenas[i];
            if (!arena->allocated()) {
                continue;
            }
            if (!arena->zone->isGCSweeping()) {
                continue;
            }

            AllocKind kind = arena->getAllocKind();
            uintptr_t thing = arena->thingsStart();
            uintptr_t end = arena->thingsEnd();
            while (thing < end) {
                auto cell = reinterpret_cast<TenuredCell*>(thing);

                /*
                 * If a non-incremental GC wouldn't have collected a cell, then
                 * an incremental GC won't collect it.
                 */
                if (bitmap->isMarkedAny(cell)) {
                    MOZ_RELEASE_ASSERT(incBitmap->isMarkedAny(cell));
                }

                /*
                 * If the cycle collector isn't allowed to collect an object
                 * after a non-incremental GC has run, then it isn't allowed to
                 * collect it after an incremental GC.
                 */
                if (!bitmap->isMarkedGray(cell)) {
                    MOZ_RELEASE_ASSERT(!incBitmap->isMarkedGray(cell));
                }

                thing += Arena::thingSize(kind);
            }
        }
    }
}

#endif // JS_GC_ZEAL

void
GCRuntime::computeNonIncrementalMarkingForValidation(AutoGCSession& session)
{
#ifdef JS_GC_ZEAL
    MOZ_ASSERT(!markingValidator);
    if (isIncremental && hasZealMode(ZealMode::IncrementalMarkingValidator)) {
        markingValidator = js_new<MarkingValidator>(this);
    }
    if (markingValidator) {
        markingValidator->nonIncrementalMark(session);
    }
#endif
}

void
GCRuntime::validateIncrementalMarking()
{
#ifdef JS_GC_ZEAL
    if (markingValidator) {
        markingValidator->validate();
    }
#endif
}

void
GCRuntime::finishMarkingValidation()
{
#ifdef JS_GC_ZEAL
    js_delete(markingValidator.ref());
    markingValidator = nullptr;
#endif
}

static void
DropStringWrappers(JSRuntime* rt)
{
    /*
     * String "wrappers" are dropped on GC because their presence would require
     * us to sweep the wrappers in all compartments every time we sweep a
     * compartment group.
     */
    for (CompartmentsIter c(rt); !c.done(); c.next()) {
        for (Compartment::StringWrapperEnum e(c); !e.empty(); e.popFront()) {
            MOZ_ASSERT(e.front().key().is<JSString*>());
            e.removeFront();
        }
    }
}

/*
 * Group zones that must be swept at the same time.
 *
 * If compartment A has an edge to an unmarked object in compartment B, then we
 * must not sweep A in a later slice than we sweep B. That's because a write
 * barrier in A could lead to the unmarked object in B becoming marked.
 * However, if we had already swept that object, we would be in trouble.
 *
 * If we consider these dependencies as a graph, then all the compartments in
 * any strongly-connected component of this graph must be swept in the same
 * slice.
 *
 * Tarjan's algorithm is used to calculate the components.
 */
namespace {
struct AddOutgoingEdgeFunctor {
    bool needsEdge_;
    ZoneComponentFinder& finder_;

    AddOutgoingEdgeFunctor(bool needsEdge, ZoneComponentFinder& finder)
      : needsEdge_(needsEdge), finder_(finder)
    {}

    template <typename T>
    void operator()(T tp) {
        TenuredCell& other = (*tp)->asTenured();

        /*
         * Add an edge to the wrapped object's compartment if the wrapped
         * object is not marked black, to indicate that the wrapper's
         * compartment must not be swept after the wrapped object's compartment.
         */
        if (needsEdge_) {
            JS::Zone* zone = other.zone();
            if (zone->isGCMarking()) {
                finder_.addEdgeTo(zone);
            }
        }
    }
};
} // namespace (anonymous)

void
Compartment::findOutgoingEdges(ZoneComponentFinder& finder)
{
    for (js::WrapperMap::Enum e(crossCompartmentWrappers); !e.empty(); e.popFront()) {
        CrossCompartmentKey& key = e.front().mutableKey();
        MOZ_ASSERT(!key.is<JSString*>());
        bool needsEdge = true;
        if (key.is<JSObject*>()) {
            TenuredCell& other = key.as<JSObject*>()->asTenured();
            needsEdge = !other.isMarkedBlack();
        }
        key.applyToWrapped(AddOutgoingEdgeFunctor(needsEdge, finder));
    }
}

void
Zone::findOutgoingEdges(ZoneComponentFinder& finder)
{
    /*
     * Any compartment may have a pointer to an atom in the atoms
     * compartment, and these aren't in the cross compartment map.
     */
    if (Zone* zone = finder.maybeAtomsZone) {
        MOZ_ASSERT(zone->isCollecting());
        finder.addEdgeTo(zone);
    }

    for (CompartmentsInZoneIter comp(this); !comp.done(); comp.next()) {
        comp->findOutgoingEdges(finder);
    }

    for (ZoneSet::Range r = gcSweepGroupEdges().all(); !r.empty(); r.popFront()) {
        if (r.front()->isGCMarking()) {
            finder.addEdgeTo(r.front());
        }
    }

    Debugger::findZoneEdges(this, finder);
}

bool
GCRuntime::findInterZoneEdges()
{
    /*
     * Weakmaps which have keys with delegates in a different zone introduce the
     * need for zone edges from the delegate's zone to the weakmap zone.
     *
     * Since the edges point into, and not away from, the zone the weakmap is
     * in, we must find these edges in advance and store them in a set on the
     * Zone.
     * If we run out of memory, we fall back to sweeping everything in one
     * group.
     */

    for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
        if (!WeakMapBase::findInterZoneEdges(zone)) {
            return false;
        }
    }

    return true;
}

void
GCRuntime::groupZonesForSweeping(JS::gcreason::Reason reason)
{
#ifdef DEBUG
    for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
        MOZ_ASSERT(zone->gcSweepGroupEdges().empty());
    }
#endif

    JSContext* cx = rt->mainContextFromOwnThread();
    Zone* maybeAtomsZone = atomsZone->wasGCStarted() ? atomsZone.ref() : nullptr;
    ZoneComponentFinder finder(cx->nativeStackLimit[JS::StackForSystemCode], maybeAtomsZone);
    if (!isIncremental || !findInterZoneEdges()) {
        finder.useOneComponent();
    }

#ifdef JS_GC_ZEAL
    // Use one component for two-slice zeal modes.
    if (useZeal && hasIncrementalTwoSliceZealMode()) {
        finder.useOneComponent();
    }
#endif

    for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
        MOZ_ASSERT(zone->isGCMarking());
        finder.addNode(zone);
    }
    sweepGroups = finder.getResultsList();
    currentSweepGroup = sweepGroups;
    sweepGroupIndex = 0;

    for (GCZonesIter zone(rt); !zone.done(); zone.next()) {
        zone->gcSweepGroupEdges().clear();
    }

#ifdef DEBUG
    for (Zone* head = currentSweepGroup; head; head = head->nextGroup()) {
        for (Zone* zone = head; zone; zone = zone->nextNodeInGroup()) {
            MOZ_ASSERT(zone->isGCMarking());
        }
    }

    MOZ_ASSERT_IF(!isIncremental, !currentSweepGroup->nextGroup());
    for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) {
        MOZ_ASSERT(zone->gcSweepGroupEdges().empty());
    }
#endif
}

static void
ResetGrayList(Compartment* comp);

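// Advance to the next sweep group. If sweeping was aborted after the current
// group then reset the state of the remaining zones and stop sweeping.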
void
GCRuntime::getNextSweepGroup()
{
    currentSweepGroup = currentSweepGroup->nextGroup();
    ++sweepGroupIndex;
    if (!currentSweepGroup) {
        abortSweepAfterCurrentGroup = false;
        return;
    }

    for (Zone* zone = currentSweepGroup; zone; zone = zone->nextNodeInGroup()) {
        MOZ_ASSERT(zone->isGCMarking());
        MOZ_ASSERT(!zone->isQueuedForBackgroundSweep());
    }

    if (!isIncremental) {
        ZoneComponentFinder::mergeGroups(currentSweepGroup);
    }

    if (abortSweepAfterCurrentGroup) {
        MOZ_ASSERT(!isIncremental);
        for (SweepGroupZonesIter zone(rt); !zone.done(); zone.next()) {
            MOZ_ASSERT(!zone->gcNextGraphComponent);
            zone->setNeedsIncrementalBarrier(false);
            zone->changeGCState(Zone::Mark, Zone::NoGC);
            zone->gcGrayRoots().clearAndFree();
        }

        for (SweepGroupCompartmentsIter comp(rt); !comp.done(); comp.next()) {
            ResetGrayList(comp);
        }

        abortSweepAfterCurrentGroup = false;
        currentSweepGroup = nullptr;
    }
}

/*
 * Gray marking:
 *
 * At the end of collection, anything reachable from a gray root that has not
 * otherwise been marked black must be marked gray.
 *
 * This means that when marking things gray we must not allow marking to leave
 * the current compartment group, as that could result in things being marked
 * gray when they might subsequently be marked black.  To achieve this, when we
 * find a cross compartment pointer we don't mark the referent but add it to a
 * singly-linked list of incoming gray pointers that is stored with each
 * compartment.
 *
 * The list head is stored in Compartment::gcIncomingGrayPointers and contains
 * cross compartment wrapper objects. The next pointer is stored in the second
 * extra slot of the cross compartment wrapper.
 *
 * The list is created during gray marking when one of the
 * MarkCrossCompartmentXXX functions is called for a pointer that leaves the
 * current compartment group.  This calls DelayCrossCompartmentGrayMarking to
 * push the referring object onto the list.
 *
 * The list is traversed and then unlinked in
 * GCRuntime::markIncomingCrossCompartmentPointers.
 */

static bool
IsGrayListObject(JSObject* obj)
{
    MOZ_ASSERT(obj);
    return obj->is<CrossCompartmentWrapperObject>() && !IsDeadProxyObject(obj);
}

/* static */ unsigned
ProxyObject::grayLinkReservedSlot(JSObject* obj)
{
    MOZ_ASSERT(IsGrayListObject(obj));
    return CrossCompartmentWrapperObject::GrayLinkReservedSlot;
}

#ifdef DEBUG
static void
AssertNotOnGrayList(JSObject* obj)
{
    MOZ_ASSERT_IF(IsGrayListObject(obj),
                  GetProxyReservedSlot(obj, ProxyObject::grayLinkReservedSlot(obj)).isUndefined());
}
#endif

static void
AssertNoWrappersInGrayList(JSRuntime* rt)
{
#ifdef DEBUG
    for (CompartmentsIter c(rt); !c.done(); c.next()) {
        MOZ_ASSERT(!c->gcIncomingGrayPointers);
        for (Compartment::NonStringWrapperEnum e(c); !e.empty(); e.popFront()) {
            AssertNotOnGrayList(&e.front().value().unbarrieredGet().toObject());
        }
    }
#endif
}

static JSObject*
CrossCompartmentPointerReferent(JSObject* obj)
{
    MOZ_ASSERT(IsGrayListObject(obj));
    return &obj->as<ProxyObject>().private_().toObject();
}

static JSObject*
NextIncomingCrossCompartmentPointer(JSObject* prev, bool unlink)
{
    unsigned slot = ProxyObject::grayLinkReservedSlot(prev);
    JSObject* next = GetProxyReservedSlot(prev, slot).toObjectOrNull();
    MOZ_ASSERT_IF(next, IsGrayListObject(next));

    if (unlink) {
        SetProxyReservedSlot(prev, slot, UndefinedValue());
    }

    return next;
}

void
js::gc::DelayCrossCompartmentGrayMarking(JSObject* src)
{
    MOZ_ASSERT(IsGrayListObject(src));
    MOZ_ASSERT(src->isMarkedGray());

    AutoTouchingGrayThings tgt;

    /* Called from MarkCrossCompartmentXXX functions. */
    unsigned slot = ProxyObject::grayLinkReservedSlot(src);
    JSObject* dest = CrossCompartmentPointerReferent(src);
    Compartment* comp = dest->compartment();

    if (GetProxyReservedSlot(src, slot).isUndefined()) {
        SetProxyReservedSlot(src, slot, ObjectOrNullValue(comp->gcIncomingGrayPointers));
        comp->gcIncomingGrayPointers = src;
    } else {
        MOZ_ASSERT(GetProxyReservedSlot(src, slot).isObjectOrNull());
    }

#ifdef DEBUG
    /*
     * Assert that the object is in our list, also walking the list to check its
     * integrity.
     */
    JSObject* obj = comp->gcIncomingGrayPointers;
    bool found = false;
    while (obj) {
        if (obj == src) {
            found = true;
        }
        obj = NextIncomingCrossCompartmentPointer(obj, false);
    }
    MOZ_ASSERT(found);
#endif
}

void
GCRuntime::markIncomingCrossCompartmentPointers(MarkColor color)
{
    MOZ_ASSERT(color == MarkColor::Black || color == MarkColor::Gray);

    static const gcstats::PhaseKind statsPhases[] = {
        gcstats::PhaseKind::SWEEP_MARK_INCOMING_BLACK,
        gcstats::PhaseKind::SWEEP_MARK_INCOMING_GRAY
    };
    gcstats::AutoPhase ap1(stats(), statsPhases[unsigned(color)]);

    bool unlinkList = color == MarkColor::Gray;

    for (SweepGroupCompartmentsIter c(rt); !c.done(); c.next()) {
        MOZ_ASSERT_IF(color == MarkColor::Gray, c->zone()->isGCMarkingGray());
        MOZ_ASSERT_IF(color == MarkColor::Black, c->zone()->isGCMarkingBlack());
        MOZ_ASSERT_IF(c->gcIncomingGrayPointers, IsGrayListObject(c->gcIncomingGrayPointers));

        for (JSObject* src = c->gcIncomingGrayPointers;
             src;
             src = NextIncomingCrossCompartmentPointer(src, unlinkList))
        {
            JSObject* dst = CrossCompartmentPointerReferent(src);
            MOZ_ASSERT(dst->compartment() == c);

            if (color == MarkColor::Gray) {
                if (IsMarkedUnbarriered(rt, &src) && src->asTenured().isMarkedGray()) {
                    TraceManuallyBarrieredEdge(&marker, &dst,
                                               "cross-compartment gray pointer");
                }
            } else {
                if (IsMarkedUnbarriered(rt, &src) && !src->asTenured().isMarkedGray()) {
                    TraceManuallyBarrieredEdge(&marker, &dst,
                                               "cross-compartment black pointer");
                }
            }
        }

        if (unlinkList) {
            c->gcIncomingGrayPointers = nullptr;
        }
    }

    drainMarkStack();
}

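// Remove |wrapper| from the singly-linked list of incoming gray pointers kept
// by its referent's compartment. Returns true if the wrapper was found on the
// list and removed.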
static bool
RemoveFromGrayList(JSObject* wrapper)
{
    AutoTouchingGrayThings tgt;

    if (!IsGrayListObject(wrapper)) {
        return false;
    }

    unsigned slot = ProxyObject::grayLinkReservedSlot(wrapper);
    if (GetProxyReservedSlot(wrapper, slot).isUndefined()) {
        return false;  /* Not on our list. */
    }

    JSObject* tail = GetProxyReservedSlot(wrapper, slot).toObjectOrNull();
    SetProxyReservedSlot(wrapper, slot, UndefinedValue());

    Compartment* comp = CrossCompartmentPointerReferent(wrapper)->compartment();
    JSObject* obj = comp->gcIncomingGrayPointers;
    if (obj == wrapper) {
        comp->gcIncomingGrayPointers = tail;
        return true;
    }

    while (obj) {
        unsigned slot = ProxyObject::grayLinkReservedSlot(obj);
        JSObject* next = GetProxyReservedSlot(obj, slot).toObjectOrNull();
        if (next == wrapper) {
            js::detail::SetProxyReservedSlotUnchecked(obj, slot, ObjectOrNullValue(tail));
            return true;
        }
        obj = next;
    }

    MOZ_CRASH("object not found in gray link list");
}

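// Unlink every wrapper on a compartment's incoming gray pointer list and clear
// the list head.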
static void
ResetGrayList(Compartment* comp)
{
    JSObject* src = comp->gcIncomingGrayPointers;
    while (src) {
        src = NextIncomingCrossCompartmentPointer(src, true);
    }
    comp->gcIncomingGrayPointers = nullptr;
}

void
js::NotifyGCNukeWrapper(JSObject* obj)
{
    /*
     * References to the target of the wrapper are being removed, so we no
     * longer have to remember to mark it.
     */
    RemoveFromGrayList(obj);
}

enum {
    JS_GC_SWAP_OBJECT_A_REMOVED = 1 << 0,
    JS_GC_SWAP_OBJECT_B_REMOVED = 1 << 1
};

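// A caller swapping the contents of two objects is expected to bracket the
// swap with these notifications, along the lines of this illustrative sketch:
//
//   unsigned flags = NotifyGCPreSwap(a, b);
//   // ... swap the contents of a and b ...
//   NotifyGCPostSwap(a, b, flags);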
unsigned
js::NotifyGCPreSwap(JSObject* a, JSObject* b)
{
    /*
     * Two objects in the same compartment are about to have their contents
     * swapped. If either of them is in our gray pointer list, remove it,
     * returning a bitset indicating what was removed.
     */
    return (RemoveFromGrayList(a) ? JS_GC_SWAP_OBJECT_A_REMOVED : 0) |
           (RemoveFromGrayList(b) ? JS_GC_SWAP_OBJECT_B_REMOVED : 0);
}

void
js::NotifyGCPostSwap(JSObject* a, JSObject* b, unsigned removedFlags)
{
    /*
     * Two objects in the same compartment have had their contents swapped. If
     * either of them was in our gray pointer list, re-add the object that now
     * holds the wrapper's contents.
     */
    if (removedFlags & JS_GC_SWAP_OBJECT_A_REMOVED) {
        DelayCrossCompartmentGrayMarking(b);
    }
    if (removedFlags & JS_GC_SWAP_OBJECT_B_REMOVED) {
        DelayCrossCompartmentGrayMarking(a);
    }
}

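// Finish marking the current sweep group: mark incoming cross-compartment
// pointers from previously swept compartments and complete gray marking for
// the group's zones.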
IncrementalProgress
GCRuntime::endMarkingSweepGroup(FreeOp* fop, SliceBudget& budget)
{
    gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::SWEEP_MARK);

    // Mark any incoming black pointers from previously swept compartments
    // whose referents are not marked. This can occur when gray cells become
    // black by the action of UnmarkGray.
    markIncomingCrossCompartmentPointers(MarkColor::Black);
    markWeakReferencesInCurrentGroup(gcstats::PhaseKind::SWEEP_MARK_WEAK);

    // Change the state of the current group to MarkGray to restrict marking to
    // this group. Note that there may be pointers to the atoms zone, and these
    // will be marked through, as they are not marked with
    // TraceCrossCompartmentEdge.
    for (SweepGroupZonesIter zone(rt); !zone.done(); zone.next()) {
        zone->changeGCState(Zone::Mark, Zone::MarkGray);
    }
    marker.setMarkColorGray();

    // Mark incoming gray pointers from previously swept compartments.
    markIncomingCrossCompartmentPointers(MarkColor::Gray);

    // Mark gray roots and mark transitively inside the current compartment
    // group.
    markGrayReferencesInCurrentGroup(gcstats::PhaseKind::SWEEP_MARK_GRAY);
    markWeakReferencesInCurrentGroup(gcstats::PhaseKind::SWEEP_MARK_GRAY_WEAK);

    // Restore marking state.
    for (SweepGroupZonesIter zone(rt); !zone.done(); zone.next()) {
        zone->changeGCState(Zone::MarkGray, Zone::Mark);
    }
    MOZ_ASSERT(marker.isDrained());
    marker.setMarkColorBlack();

    // We must not yield after this point before we start sweeping the group.
    safeToYield = false;

    return Finished;
}

// Causes the given WeakCache to be swept when run.
class ImmediateSweepWeakCacheTask : public GCParallelTaskHelper<ImmediateSweepWeakCacheTask>
{
    JS::detail::WeakCacheBase& cache;

    ImmediateSweepWeakCacheTask(const ImmediateSweepWeakCacheTask&) = delete;

  public:
    ImmediateSweepWeakCacheTask(JSRuntime* rt, JS::detail::WeakCacheBase& wc)
      : GCParallelTaskHelper(rt), cache(wc)
    {}

    ImmediateSweepWeakCacheTask(ImmediateSweepWeakCacheTask&& other)
      : GCParallelTaskHelper(std::move(other)), cache(other.cache)
    {}

    void run() {
        cache.sweep();
    }
};

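// Update the atom marking bitmaps for collected zones from the chunk mark bits
// and mark atoms that are still in use by uncollected zones.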
static void
UpdateAtomsBitmap(JSRuntime* runtime)
{
    DenseBitmap marked;
    if (runtime->gc.atomMarking.computeBitmapFromChunkMarkBits(runtime, marked)) {
        for (GCZonesIter zone(runtime); !zone.done(); zone.next()) {
            runtime->gc.atomMarking.refineZoneBitmapForCollectedZone(zone, marked);
        }
    } else {
        // Ignore OOM in computeBitmapFromChunkMarkBits. The
        // refineZoneBitmapForCollectedZone call can only remove atoms from the
        // zone bitmap, so it is conservative to just not call it.
    }

    runtime->gc.atomMarking.markAtomsUsedByUncollectedZones(runtime);

    // For convenience, sweep these tables non-incrementally as part of bitmap
    // sweeping; they are likely to be much smaller than the main atoms table.
    runtime->symbolRegistry().sweep();
    for (RealmsIter realm(runtime); !realm.done(); realm.next()) {
        realm->sweepVarNames();
    }
}

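// The following functions sweep parts of a sweep group's state and are run as
// parallel tasks from beginSweepingSweepGroup below.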
static void
SweepCCWrappers(GCParallelTask* task)
{
    JSRuntime* runtime = task->runtime();
    for (SweepGroupCompartmentsIter c(runtime); !c.done(); c.next()) {
        c->sweepCrossCompartmentWrappers();
    }
}

static void
SweepObjectGroups(GCParallelTask* task)
{
    JSRuntime* runtime = task->runtime();
    for (SweepGroupRealmsIter r(runtime); !r.done(); r.next()) {
        r->sweepObjectGroups();
    }
}

static void
SweepMisc(GCParallelTask* task)
{
    JSRuntime* runtime = task->runtime();
    for (SweepGroupRealmsIter r(runtime); !r.done(); r.next()) {
        r->sweepGlobalObject();
        r->sweepTemplateObjects();
        r->sweepSavedStacks();
        r->sweepSelfHostingScriptSource();
        r->sweepObjectRealm();
        r->sweepRegExps();
    }
}

static void
SweepCompressionTasks(GCParallelTask* task)
{
    JSRuntime* runtime = task->runtime();

    AutoLockHelperThreadState lock;

    // Attach finished compression tasks.
    auto& finished = HelperThreadState().compressionFinishedList(lock);
    for (size_t i = 0; i < finished.length(); i++) {
        if (finished[i]->runtimeMatches(runtime)) {
            UniquePtr<SourceCompressionTask> compressionTask(std::move(finished[i]));
            HelperThreadState().remove(finished, &i);
            compressionTask->complete();
        }
    }

    // Sweep pending tasks that are holding onto should-be-dead ScriptSources.
    auto& pending = HelperThreadState().compressionPendingList(lock);
    for (size_t i = 0; i < pending.length(); i++) {
        if (pending[i]->shouldCancel()) {
            HelperThreadState().remove(pending, &i);
        }
    }
}

static void
SweepWeakMaps(GCParallelTask* task)
{
    JSRuntime* runtime = task->runtime();
    for (SweepGroupZonesIter zone(runtime); !zone.done(); zone.next()) {
        /* Clear all weakrefs that point to unmarked things. */
        for (auto edge : zone->gcWeakRefs()) {
            /* Edges may be present multiple times, so may already be nulled. */
            if (*edge && IsAboutToBeFinalizedDuringSweep(**edge)) {
                *edge = nullptr;
            }
        }
        zone->gcWeakRefs().clear();

        /* No need to look up any more weakmap keys from this sweep group. */
        AutoEnterOOMUnsafeRegion oomUnsafe;
        if (!zone->gcWeakKeys().clear()) {
            oomUnsafe.crash("clearing weak keys in beginSweepingSweepGroup()");
        }

        zone->sweepWeakMaps();
    }
}

static void
SweepUniqueIds(GCParallelTask* task)
{
    for (SweepGroupZonesIter zone(task->runtime()); !zone.done(); zone.next()) {
        zone->sweepUniqueIds();
    }
}

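// Start a parallel task on a helper thread, falling back to running it
// immediately on the main thread if that fails.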
void
GCRuntime::startTask(GCParallelTask& task, gcstats::PhaseKind phase,
                     AutoLockHelperThreadState& locked)
{
    if (!task.startWithLockHeld(locked)) {
        AutoUnlockHelperThreadState unlock(locked);
        gcstats::AutoPhase ap(stats(), phase);
        task.runFromMainThread(rt);
    }
}

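// Wait for a parallel task to finish and record its duration against the given
// phase.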
void
GCRuntime::joinTask(GCParallelTask& task, gcstats::PhaseKind phase,
                    AutoLockHelperThreadState& locked)
{
    {
        gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::JOIN_PARALLEL_TASKS);
        task.joinWithLockHeld(locked);
    }
    stats().recordParallelPhase(phase, task.duration());
}

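// Sweep debugger-related state that must be handled on the main thread.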
void
GCRuntime::sweepDebuggerOnMainThread(FreeOp* fop)
{
    // Detach unreachable debuggers and global objects from each other.
    // This can modify weakmaps and so must happen before weakmap sweeping.
    Debugger::sweepAll(fop);

    gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::SWEEP_COMPARTMENTS);

    // Sweep debug environment information. This performs lookups in the Zone's
    // unique IDs table and so must not happen in parallel with sweeping that
    // table.
    {
        gcstats::AutoPhase ap2(stats(), gcstats::PhaseKind::SWEEP_MISC);
        for (SweepGroupRealmsIter r(rt); !r.done(); r.next()) {
            r->sweepDebugEnvironments();
        }
    }

    // Sweep breakpoints. This is done here to keep it with the other debugger
    // sweeping, although note that it can cause JIT code to be patched.
    {
        gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::SWEEP_BREAKPOINT);
        for (SweepGroupZonesIter zone(rt); !zone.done(); zone.next()) {
            zone->sweepBreakpoints(fop);
        }
    }
}

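// Sweep JIT-related data that must be handled on the main thread.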
void
GCRuntime::sweepJitDataOnMainThread(FreeOp* fop)
{
    {
        gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::SWEEP_JIT_DATA);

        if (initialState != State::NotActive) {
            // Cancel any active or pending off thread compilations. We also did
            // this before marking (in DiscardJITCodeForGC) so this is a no-op
            // for non-incremental GCs.
            js::CancelOffThreadIonCompile(rt, JS::Zone::Sweep);
        }

        // Bug 1071218: the following method has not yet been refactored to
        // work on a single zone-group at once.

        // Sweep entries containing about-to-be-finalized JitCode and
        // update relocated TypeSet::Types inside the JitcodeGlobalTable.
        jit::JitRuntime::SweepJitcodeGlobalTable(rt);
    }

    if (initialState != State::NotActive) {
        gcstats::AutoPhase apdc(stats(), gcstats::PhaseKind::SWEEP_DISCARD_CODE);
        for (SweepGroupZonesIter zone(rt); !zone.done(); zone.next()) {
            zone->discardJitCode(fop);
        }
    }

    // JitZone/JitRealm must be swept *after* discarding JIT code, because
    // Zone::discardJitCode might access CacheIRStubInfos deleted here.
    {
        gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::SWEEP_JIT_DATA);

        for (SweepGroupRealmsIter r(rt); !r.done(); r.next()) {
            r->sweepJitRealm();
        }

        for (SweepGroupZonesIter zone(rt); !zone.done(); zone.next()) {
            if (jit::JitZone* jitZone = zone->jitZone()) {
                jitZone->sweep();
            }
        }
    }

    {
        gcstats::AutoPhase ap1(stats(), gcstats::PhaseKind::SWEEP_TYPES);
        gcstats::AutoPhase ap2(stats(), gcstats::PhaseKind::SWEEP_TYPES_BEGIN);
        for (SweepGroupZonesIter zone(rt); !zone.done(); zone.next()) {
            zone->beginSweepTypes();
        }
    }
}

using WeakCacheTaskVector = mozilla::Vector<ImmediateSweepWeakCacheTask, 0, SystemAllocPolicy>;

enum WeakCacheLocation
{
    RuntimeWeakCache,
    ZoneWeakCache
};

// Call a functor for all weak caches that need to be swept in the current
// sweep group.
template <typename Functor>
static inline bool
IterateWeakCaches(JSRuntime* rt, Functor f)
{
    for (SweepGroupZonesIter zone(rt); !zone.done(); zone.next()) {
        for (JS::detail::WeakCacheBase* cache : zone->weakCaches()) {
            if (!f(cache, ZoneWeakCache)) {
                return false;
            }
        }
    }

    for (JS::detail::WeakCacheBase* cache : rt->weakCaches()) {
        if (!f(cache, RuntimeWeakCache)) {
            return false;
        }
    }

    return true;
}

static bool
PrepareWeakCacheTasks(JSRuntime* rt, WeakCacheTaskVector* immediateTasks)
{
    // Start incremental sweeping for caches that support it, or add them to a
    // vector of sweep tasks to be run on a helper thread.

    MOZ_ASSERT(immediateTasks->empty());

    bool ok = IterateWeakCaches(rt, [&] (JS::detail::WeakCacheBase* cache,
                                         WeakCacheLocation location)
    {
        if (!cache->needsSweep()) {
            return true;
        }

        // Caches that support incremental sweeping will be swept later.
        if (location == ZoneWeakCache && cache->setNeedsIncrementalBarrier(true)) {
            return true;
        }

        return immediateTasks->emplaceBack(rt, *cache);
    });

    if (!ok) {
        immediateTasks->clearAndFree();
    }

    return ok;
}

static void
SweepWeakCachesOnMainThread(JSRuntime* rt)
{
    // If we ran out of memory, do all the work on the main thread.
    gcstats::AutoPhase ap(rt->gc.stats(), gcstats::PhaseKind::SWEEP_WEAK_CACHES);
    IterateWeakCaches(rt, [&] (JS::detail::WeakCacheBase* cache, WeakCacheLocation location) {
        if (cache->needsIncrementalBarrier()) {
            cache->setNeedsIncrementalBarrier(false);
        }
        cache->sweep();
        return true;
    });
}

IncrementalProgress
GCRuntime::beginSweepingSweepGroup(FreeOp* fop, SliceBudget& budget)
{
    /*
     * Begin sweeping the group of zones in currentSweepGroup, performing
     * actions that must be done before yielding to the caller.
     */

    using namespace gcstats;

    AutoSCC scc(stats(), sweepGroupIndex);

    bool sweepingAtoms = false;
    for (SweepGroupZonesIter zone(rt); !zone.done(); zone.next()) {
        /* Set the GC state to sweeping. */
        zone->changeGCState(Zone::Mark, Zone::Sweep);

        /* Purge the ArenaLists before sweeping. */
        zone->arenas.unmarkPreMarkedFreeCells();
        zone->arenas.clearFreeLists();

        if (zone->isAtomsZone()) {
            sweepingAtoms = true;
        }

#ifdef DEBUG
        zone->gcLastSweepGroupIndex = sweepGroupIndex;
#endif
    }

    validateIncrementalMarking();

    {
        AutoPhase ap(stats(), PhaseKind::FINALIZE_START);
        callFinalizeCallbacks(fop, JSFINALIZE_GROUP_PREPARE);
        {
            AutoPhase ap2(stats(), PhaseKind::WEAK_ZONES_CALLBACK);
            callWeakPointerZonesCallbacks();
        }
        {
            AutoPhase ap2(stats(), PhaseKind::WEAK_COMPARTMENT_CALLBACK);
            for (SweepGroupZonesIter zone(rt); !zone.done(); zone.next()) {
                for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) {
                    callWeakPointerCompartmentCallbacks(comp);
                }
            }
        }
        callFinalizeCallbacks(fop, JSFINALIZE_GROUP_START);
    }

    // Update the atom marking bitmaps. This marks atoms referenced by
    // uncollected zones and so cannot be done in parallel with the other
    // sweeping work below.
    if (sweepingAtoms) {
        AutoPhase ap(stats(), PhaseKind::UPDATE_ATOMS_BITMAP);
        UpdateAtomsBitmap(rt);
    }

    sweepDebuggerOnMainThread(fop);

    {
        AutoLockHelperThreadState lock;

        AutoPhase ap(stats(), PhaseKind::SWEEP_COMPARTMENTS);

        AutoRunParallelTask sweepCCWrappers(rt, SweepCCWrappers, PhaseKind::SWEEP_CC_WRAPPER, lock);
        AutoRunParallelTask sweepObjectGroups(rt, SweepObjectGroups, PhaseKind::SWEEP_TYPE_OBJECT, lock);
        AutoRunParallelTask sweepMisc(rt, SweepMisc, PhaseKind::SWEEP_MISC, lock);
        AutoRunParallelTask sweepCompTasks(rt, SweepCompressionTasks, PhaseKind::SWEEP_COMPRESSION, lock);
        AutoRunParallelTask sweepWeakMaps(rt, SweepWeakMaps, PhaseKind::SWEEP_WEAKMAPS, lock);
        AutoRunParallelTask sweepUniqueIds(rt, SweepUniqueIds, PhaseKind::SWEEP_UNIQUEIDS, lock);

        WeakCacheTaskVector sweepCacheTasks;
        if (!PrepareWeakCacheTasks(rt, &sweepCacheTasks)) {
            SweepWeakCachesOnMainThread(rt);
        }

        for (auto& task : sweepCacheTasks) {
            startTask(task, PhaseKind::SWEEP_WEAK_CACHES, lock);
        }

        {
            AutoUnlockHelperThreadState unlock(lock);
            sweepJitDataOnMainThread(fop);
        }

        for (auto& task : sweepCacheTasks) {
            joinTask(task, PhaseKind::SWEEP_WEAK_CACHES, lock);
        }
    }

    if (sweepingAtoms) {
        startSweepingAtomsTable();
    }

    // Queue all GC things in the sweep group's zones for sweeping, either on
    // the foreground or on the background thread.

    for (SweepGroupZonesIter zone(rt); !zone.done(); zone.next()) {

        zone->arenas.queueForForegroundSweep(fop, ForegroundObjectFinalizePhase);
        zone->arenas.queueForForegroundSweep(fop, ForegroundNonObjectFinalizePhase);
        for (unsigned i = 0; i < ArrayLength(BackgroundFinalizePhases); ++i) {
            zone->arenas.queueForBackgroundSweep(fop, BackgroundFinalizePhases[i]);
        }

        zone->arenas.queueForegroundThingsForSweep();
    }

    sweepCache = nullptr;
    safeToYield = true;

    return Finished;
}

#ifdef JS_GC_ZEAL
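// Decide whether an incremental slice should yield at this point for the given
// zeal mode.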
bool
GCRuntime::shouldYieldForZeal(ZealMode mode)
{
    bool yield = useZeal && isIncremental && hasZealMode(mode);

    // Only yield on the first sweep slice for this mode.
    bool firstSweepSlice = initialState != State::Sweep;
    if (mode == ZealMode::IncrementalMultipleSlices && !firstSweepSlice) {
        yield = false;
    }

    return yield;
}
#endif

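// Finish sweeping the current sweep group on the main thread and queue its
// zones for background sweeping.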
IncrementalProgress
GCRuntime::endSweepingSweepGroup(FreeOp* fop, SliceBudget& budget)
{
    {
        gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::FINALIZE_END);
        FreeOp fop(rt);
        callFinalizeCallbacks(&fop, JSFINALIZE_GROUP_END);
    }

    /* Update the GC state for zones we have swept. */
    for (SweepGroupZonesIter zone(rt); !zone.done(); zone.next()) {
        AutoLockGC lock(rt);
        zone->changeGCState(Zone::Sweep, Zone::Finished);
        zone->threshold.updateAfterGC(zone->usage.gcBytes(), invocationKind, tunables,
                                      schedulingState, lock);
        zone->updateAllGCMallocCountersOnGCEnd(lock);
        zone->arenas.unmarkPreMarkedFreeCells();
    }

    /*
     * Start background thread to sweep zones if required, sweeping the atoms
     * zone last if present.
     */
    bool sweepAtomsZone = false;
    ZoneList zones;
    for (SweepGroupZonesIter zone(rt); !zone.done(); zone.next()) {
        if (zone->isAtomsZone()) {
            sweepAtomsZone = true;
        } else {
            zones.append(zone);
        }
    }
    if (sweepAtomsZone) {
        zones.append(atomsZone);
    }

    queueZonesForBackgroundSweep(zones);

    if (!sweepOnBackgroundThread) {
        sweepTask.runFromMainThread(rt);
    }

    return Finished;
}

void
GCRuntime::beginSweepPhase(JS::gcreason::Reason reason, AutoGCSession& session)
{
    /*
     * Sweep phase.
     *
     * Finalize as we sweep, outside of lock but with RuntimeHeapIsBusy()
     * true so that any attempt to allocate a GC-thing from a finalizer will
     * fail, rather than nest badly and leave the unmarked newborn to be swept.
     */

    MOZ_ASSERT(!abortSweepAfterCurrentGroup);

    AutoSetThreadIsSweeping threadIsSweeping;

    releaseHeldRelocatedArenas();

    computeNonIncrementalMarkingForValidation(session);

    gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::SWEEP);

    sweepOnBackgroundThread =
        reason != JS::gcreason::DESTROY_RUNTIME &&
        !gcTracer.traceEnabled() &&
        CanUseExtraThreads();

    AssertNoWrappersInGrayList(rt);
    DropStringWrappers(rt);

    groupZonesForSweeping(reason);

    sweepActions->assertFinished();

    // We must not yield after this point until we start sweeping the first sweep
    // group.
    safeToYield = false;
}

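// Finalize arenas of the given kind on the foreground thread, stopping when
// the slice budget is exhausted. Returns true once all arenas of this kind
// have been finalized.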
bool
ArenaLists::foregroundFinalize(FreeOp* fop, AllocKind thingKind, SliceBudget& sliceBudget,
                               SortedArenaList& sweepList)
{
    if (!arenaListsToSweep(thingKind) && incrementalSweptArenas.ref().isEmpty()) {
        return true;
    }

    // Empty object arenas are not released until all foreground GC things have
    // been swept.
    KeepArenasEnum keepArenas = IsObjectAllocKind(thingKind) ? KEEP_ARENAS : RELEASE_ARENAS;

    if (!FinalizeArenas(fop, &arenaListsToSweep(thingKind), sweepList,
                        thingKind, sliceBudget, keepArenas))
    {
        incrementalSweptArenaKind = thingKind;
        incrementalSweptArenas = sweepList.toArenaList();
        return false;
    }

    // Clear any previous incremental sweep state we may have saved.
    incrementalSweptArenas.ref().clear();

    if (IsObjectAllocKind(thingKind)) {
        sweepList.extractEmpty(&savedEmptyArenas.ref());
    }

    ArenaList finalized = sweepList.toArenaList();
    arenaLists(thingKind) = finalized.insertListWithCursorAtEnd(arenaLists(thingKind));

    return true;
}

IncrementalProgress
GCRuntime::markUntilBudgetExhaused(SliceBudget& sliceBudget, gcstats::PhaseKind phase)
{
    // Marked GC things may vary between recording and replaying, so marking
    // and sweeping should not perform any recorded events.
    mozilla::recordreplay::AutoDisallowThreadEvents disallow;

    /* Run a marking slice and return whether the stack is now empty. */
    gcstats::AutoPhase ap(stats(), phase);
    return marker.markUntilBudgetExhaused(sliceBudget) ? Finished : NotFinished;
}

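// Process the mark stack until it is empty, with no budget limit.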
void
GCRuntime::drainMarkStack()
{
    auto unlimited = SliceBudget::unlimited();
    MOZ_RELEASE_ASSERT(marker.markUntilBudgetExhaused(unlimited));
}

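// Sweep a single GC thing of the given type. Used by SweepArenaList below.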
static void
SweepThing(Shape* shape)
{
    if (!shape->isMarkedAny()) {
        shape->sweep();
    }
}

static void
SweepThing(JSScript* script)
{
    AutoSweepTypeScript sweep(script);
}

static void
SweepThing(ObjectGroup* group)
{
    AutoSweepObjectGroup sweep(group);
}

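// Sweep the things in each arena on the given list, advancing *arenasToSweep
// as each arena is finished and checking the slice budget after each arena.
// Returns false if the budget was exhausted before the whole list was swept.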
template <typename T>
static bool
SweepArenaList(Arena** arenasToSweep, SliceBudget& sliceBudget)
{
    while (Arena* arena = *arenasToSweep) {
        for (ArenaCellIterUnderGC i(arena); !i.done(); i.next()) {
            SweepThing(i.get<T>());
        }

        *arenasToSweep = (*arenasToSweep)->next;
        AllocKind kind = MapTypeToFinalizeKind<T>::kind;
        sliceBudget.step(Arena::thingsPerArena(kind));
        if (sliceBudget.isOverBudget()) {
            return false;
        }
    }

    return true;
}

IncrementalProgress
GCRuntime::sweepTypeInformation(FreeOp* fop, SliceBudget& budget, Zone* zone)
{
    // Sweep dead type information stored in scripts and object groups, but
    // don't finalize them yet. We have to sweep dead information from both live
    // and dead scripts and object groups, so that no dead references remain in
    // them. Type inference can end up crawling these zones again, such as for
    // TypeCompartment::markSetsUnknown, and if this happens after sweeping for
    // the sweep group finishes, we won't be able to determine which things in
    // the zone are live.

    gcstats::AutoPhase ap1(stats(), gcstats::PhaseKind::SWEEP_COMPARTMENTS);
    gcstats::AutoPhase ap2(stats(), gcstats::PhaseKind::SWEEP_TYPES);

    ArenaLists& al = zone->arenas;

    AutoClearTypeInferenceStateOnOOM oom(zone);

    if (!SweepArenaList<JSScript>(&al.gcScriptArenasToUpdate.ref(), budget)) {
        return NotFinished;
    }

    if (!SweepArenaList<ObjectGroup>(&al.gcObjectGroupArenasToUpdate.ref(), budget)) {
        return NotFinished;
    }

    // Finish sweeping type information in the zone.
    {