Bug 1534630: Slightly optimize scheduling of clang-format jobs; r=andi
author: Benjamin Bouvier <benj@benj.me>
date: Tue, 12 Mar 2019 16:30:12 +0000
changeset: 521671 bad1df2f11e3
parent: 521670 af0f103d341d
child: 521672 00fe102f6c06
push id: 10867
push user: dvarga@mozilla.com
push date: Thu, 14 Mar 2019 15:20:45 +0000
treeherder: mozilla-beta@abad13547875
reviewers: andi
bugs: 1534630
milestone: 67.0a1
Instead of over-estimating the number of items in a batch, do the opposite: slightly under-estimate the number of items per batch, then dispatch the outstanding items by adding one extra item to each of the first few batches.

Differential Revision: https://phabricator.services.mozilla.com/D23141
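The scheme described above can be sketched as a standalone function (names are illustrative, not taken from the patch): take the floor of items/workers as the base batch size, then give one extra item to each of the first `outstanding_items` batches. Since `outstanding_items` is always less than `max_workers`, no batch grows by more than one item.

```python
def make_batches(path_list, max_workers):
    """Split path_list into at most max_workers batches whose sizes
    differ by at most one, mirroring the floor-then-redistribute
    scheme from this patch."""
    batch_size = len(path_list) // max_workers
    # Items left over after filling every batch to the base size;
    # always strictly less than max_workers.
    outstanding_items = len(path_list) - batch_size * max_workers

    batches = []
    i = 0
    while i < len(path_list):
        # The first `outstanding_items` batches take one extra item.
        num_items = batch_size + (1 if outstanding_items > 0 else 0)
        batches.append(path_list[i:i + num_items])
        outstanding_items -= 1
        i += num_items
    return batches

# With 10 items and 4 workers: floor(10/4) = 2 per batch with 2 left over,
# giving sizes [3, 3, 2, 2] rather than the ceil-based [3, 3, 3, 1].
print([len(b) for b in make_batches(list(range(10)), 4)])  # → [3, 3, 2, 2]
```

The ceil-based split leaves the last worker nearly idle (one item) while the others process three each; the floor-then-redistribute split keeps all four workers comparably loaded.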
python/mozbuild/mozbuild/mach_commands.py
--- a/python/mozbuild/mozbuild/mach_commands.py
+++ b/python/mozbuild/mozbuild/mach_commands.py
@@ -2904,21 +2904,33 @@ class StaticAnalysis(MachCommandBase):
             return 0
 
         # Run clang-format in parallel trying to saturate all of the available cores.
         import concurrent.futures
         import multiprocessing
         import math
 
         max_workers = multiprocessing.cpu_count()
-        batchsize = int(math.ceil(float(len(path_list)) / max_workers))
+
+        # To maximize CPU usage when there are few items to handle,
+        # under-estimate the number of items per batch, then dispatch the
+        # outstanding items across the batches. By construction, there are
+        # fewer outstanding items than workers, so each batch grows by at
+        # most one item.
+        batch_size = int(math.floor(float(len(path_list)) / max_workers))
+        outstanding_items = len(path_list) - batch_size * max_workers
 
         batches = []
-        for i in range(0, len(path_list), batchsize):
-            batches.append(args + path_list[i: (i + batchsize)])
+
+        i = 0
+        while i < len(path_list):
+            num_items = batch_size + (1 if outstanding_items > 0 else 0)
+            batches.append(args + path_list[i: (i + num_items)])
+
+            outstanding_items -= 1
+            i += num_items
 
         error_code = None
 
         with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
             futures = []
             for batch in batches:
                 futures.append(executor.submit(run_one_clang_format_batch, batch))