Severe performance regression with spawn/spawnSync PATH resolution (Mac) #62554

@jhmaster2000

Description

Version

24.14.0, 24.14.1, 25.6.0, 25.6.1

Platform

Darwin M3Max.local 24.6.0 Darwin Kernel Version 24.6.0: Wed Nov  5 21:32:38 PST 2025; root:xnu-11417.140.69.705.2~1/RELEASE_ARM64_T6031 arm64

Subsystem

child_process

What steps will reproduce the bug?

This regression occurs on both v24 from 24.14.0 to 24.14.1 and v25 from 25.6.0 to 25.6.1.

v24.14.1 and v25.6.1 are both around 2x slower to execute a child_process.spawnSync call for a simple CLI app in an isolated repro benchmark, and upwards of 10x slower in a real-world application with heavyweight spawnSync calls (long-lived, large stdout/stderr, long CLI argument lists), compared to v24.14.0 and v25.6.0 on macOS. (I also tested on Linux and was unable to reproduce, even after the PATH resolution discovery below. Not tested on Windows.)

Given the following minimal sample project:

  • nodeperf.mjs
#!/usr/bin/env node
import chp from 'node:child_process';
const clang = chp.spawnSync('clang', ['--version'], { encoding: 'utf8' });
console.log(clang.stdout);
  • package.json
{
  "name": "nodeperftest",
  "version": "1.0.0",
  "main": "nodeperf.mjs",
  "bin": {
    "nodeperftest": "nodeperf.mjs"
  },
  "type": "module"
}

For added context, my tool versions are: npm 11.11.0, n 10.2.0, and hyperfine 1.20.0.

I have run the following benchmarks, using n to switch between Node versions and hyperfine for timing:

> hyperfine --warmup 3 'n exec 24.14.0 node nodeperf.mjs' 'n exec 24.14.1 node nodeperf.mjs'
Benchmark 1: n exec 24.14.0 node nodeperf.mjs
  Time (mean ± σ):      48.5 ms ±   0.9 ms    [User: 24.2 ms, System: 8.4 ms]
  Range (min … max):    46.5 ms …  51.6 ms    59 runs
 
Benchmark 2: n exec 24.14.1 node nodeperf.mjs
  Time (mean ± σ):      61.0 ms ±   1.5 ms    [User: 27.1 ms, System: 9.4 ms]
  Range (min … max):    57.7 ms …  67.6 ms    46 runs
 
Summary
  n exec 24.14.0 node nodeperf.mjs ran
    1.26 ± 0.04 times faster than n exec 24.14.1 node nodeperf.mjs

Even here, 24.14.1 is already showing signs of slowdown, though mild enough that I probably wouldn't have noticed it on its own.

> hyperfine --warmup 3 'n exec 25.6.0 node nodeperf.mjs' 'n exec 25.6.1 node nodeperf.mjs'
Benchmark 1: n exec 25.6.0 node nodeperf.mjs
  Time (mean ± σ):      49.1 ms ±   1.1 ms    [User: 26.1 ms, System: 8.5 ms]
  Range (min … max):    47.2 ms …  52.8 ms    54 runs
 
Benchmark 2: n exec 25.6.1 node nodeperf.mjs
  Time (mean ± σ):      62.9 ms ±   1.3 ms    [User: 27.1 ms, System: 9.6 ms]
  Range (min … max):    60.5 ms …  68.2 ms    46 runs
 
Summary
  n exec 25.6.0 node nodeperf.mjs ran
    1.28 ± 0.04 times faster than n exec 25.6.1 node nodeperf.mjs

The v25 versions produce a near-identical result here.

Next we need to run npm link to symlink and install our sample project as a global CLI command, nodeperftest, which we then benchmark in the exact same way:

> hyperfine --warmup 3 'n exec 24.14.0 nodeperftest' 'n exec 24.14.1 nodeperftest'
Benchmark 1: n exec 24.14.0 nodeperftest
  Time (mean ± σ):      50.2 ms ±   1.2 ms    [User: 24.9 ms, System: 8.9 ms]
  Range (min … max):    48.6 ms …  57.2 ms    55 runs
 
Benchmark 2: n exec 24.14.1 nodeperftest
  Time (mean ± σ):      97.8 ms ±   2.7 ms    [User: 29.1 ms, System: 10.3 ms]
  Range (min … max):    94.8 ms … 105.4 ms    27 runs
 
Summary
  n exec 24.14.0 nodeperftest ran
    1.95 ± 0.07 times faster than n exec 24.14.1 nodeperftest
> hyperfine --warmup 3 'n exec 25.6.0 nodeperftest' 'n exec 25.6.1 nodeperftest'
Benchmark 1: n exec 25.6.0 nodeperftest
  Time (mean ± σ):      51.2 ms ±   1.3 ms    [User: 26.9 ms, System: 8.9 ms]
  Range (min … max):    49.2 ms …  57.0 ms    55 runs
 
Benchmark 2: n exec 25.6.1 nodeperftest
  Time (mean ± σ):     100.0 ms ±   1.6 ms    [User: 29.5 ms, System: 10.2 ms]
  Range (min … max):    97.9 ms … 105.8 ms    27 runs
 
Summary
  n exec 25.6.0 nodeperftest ran
    1.95 ± 0.06 times faster than n exec 25.6.1 nodeperftest

...and now the .1 versions are even slower, by nearly 2x! Note that the absolute time has effectively not changed at all for the .0 versions, so they suffer no slowdown regardless of execution method.

I have also profiled the full application in which I first observed the issue (after noticing it suddenly felt ~10x slower); below are the profiling results:

v25.6.0: [profiler screenshot]

v25.6.1: [profiler screenshot]

With only 6 spawnSync's across the project, what was once a total ~78ms operation is now ~700ms, a 9x slowdown!

Note: Profiles were taken from running my application via the npm link CLI symlink, using NODE_OPTIONS="--inspect --inspect-brk".

I tried comparing the changelogs of v24.14.1 and v25.6.1 for an obvious common change to blame, but nothing stood out to me.

How often does it reproduce? Is there a required condition?

Always.

Minor slowdown with direct node file.js execution.

Major slowdown with npm link'd symlink execution.

What is the expected behavior? Why is that the expected behavior?

No performance loss.

What do you see instead?

Severe performance loss.

Additional information

After further testing, I can report that the asynchronous spawn is affected as well:

> hyperfine --warmup 3 'n exec 24.14.0 nodeperftest' 'n exec 24.14.1 nodeperftest' 'n exec 24.14.0 node nodeperf.mjs' 'n exec 24.14.1 node nodeperf.mjs'
Benchmark 1: n exec 24.14.0 nodeperftest
  Time (mean ± σ):      49.2 ms ±   1.1 ms    [User: 26.3 ms, System: 8.9 ms]
  Range (min … max):    47.7 ms …  54.1 ms    57 runs
 
Benchmark 2: n exec 24.14.1 nodeperftest
  Time (mean ± σ):      97.8 ms ±   2.9 ms    [User: 30.4 ms, System: 10.1 ms]
  Range (min … max):    95.5 ms … 110.5 ms    26 runs
 
Benchmark 3: n exec 24.14.0 node nodeperf.mjs
  Time (mean ± σ):      48.6 ms ±   2.1 ms    [User: 25.9 ms, System: 8.9 ms]
  Range (min … max):    46.2 ms …  56.9 ms    58 runs
 
Benchmark 4: n exec 24.14.1 node nodeperf.mjs
  Time (mean ± σ):      61.6 ms ±   2.2 ms    [User: 28.4 ms, System: 9.6 ms]
  Range (min … max):    59.2 ms …  73.5 ms    45 runs
 
Summary
  n exec 24.14.0 node nodeperf.mjs ran
    1.01 ± 0.05 times faster than n exec 24.14.0 nodeperftest
    1.27 ± 0.07 times faster than n exec 24.14.1 node nodeperf.mjs
    2.01 ± 0.11 times faster than n exec 24.14.1 nodeperftest
> hyperfine --warmup 3 'n exec 25.6.0 nodeperftest' 'n exec 25.6.1 nodeperftest' 'n exec 25.6.0 node nodeperf.mjs' 'n exec 25.6.1 node nodeperf.mjs'
Benchmark 1: n exec 25.6.0 nodeperftest
  Time (mean ± σ):      51.4 ms ±   1.1 ms    [User: 28.2 ms, System: 9.1 ms]
  Range (min … max):    49.3 ms …  54.5 ms    55 runs
 
Benchmark 2: n exec 25.6.1 nodeperftest
  Time (mean ± σ):     101.1 ms ±   1.8 ms    [User: 31.1 ms, System: 10.4 ms]
  Range (min … max):    98.2 ms … 105.3 ms    27 runs
 
Benchmark 3: n exec 25.6.0 node nodeperf.mjs
  Time (mean ± σ):      54.9 ms ±   4.1 ms    [User: 30.3 ms, System: 10.1 ms]
  Range (min … max):    48.8 ms …  67.9 ms    54 runs
 
Benchmark 4: n exec 25.6.1 node nodeperf.mjs
  Time (mean ± σ):      63.3 ms ±   1.3 ms    [User: 28.6 ms, System: 9.6 ms]
  Range (min … max):    61.7 ms …  67.6 ms    46 runs
 
Summary
  n exec 25.6.0 nodeperftest ran
    1.07 ± 0.08 times faster than n exec 25.6.0 node nodeperf.mjs
    1.23 ± 0.04 times faster than n exec 25.6.1 node nodeperf.mjs
    1.97 ± 0.06 times faster than n exec 25.6.1 nodeperftest

Updated script (same package.json):

#!/usr/bin/env node
import chp from 'node:child_process';
const clang = chp.spawn('clang', ['--version']);
clang.stdout.setEncoding('utf8');
clang.stdout.on('data', (data) => console.log(data));
clang.on('error', (err) => console.error(err));

You can see spawn being affected in my application profiles as well: I realized the large (idle) gap right in the middle is the time spent waiting on the 10-20 parallel spawn() calls the program initiates. Note the (idle) total time increasing from ~73ms to nearly 400ms.

Update 2 (PATH resolution)

Upon further testing, I've realized that the majority (but not all) of the slowdown seems to come from resolving a bare command name through PATH. If I take the updated chp.spawn( script above and just replace 'clang' with '/usr/bin/clang' (the exact path it is at on my Mac), then re-run the benchmark:
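For reference, the absolute-path variant is just the spawn script above with the command string swapped. Here is a self-contained sketch of that change, using the synchronous form for brevity and process.execPath (the absolute path of the running node binary) as a portable stand-in for '/usr/bin/clang':

```javascript
#!/usr/bin/env node
import chp from 'node:child_process';

// Spawning by absolute path skips spawn's PATH lookup entirely.
// process.execPath stands in for '/usr/bin/clang' so the sketch
// runs on any machine, with or without clang installed.
const child = chp.spawnSync(process.execPath, ['--version'], { encoding: 'utf8' });
console.log(child.stdout);
```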

hyperfine --warmup 10 'n exec 25.6.0 nodeperftest' 'n exec 25.6.1 nodeperftest' 'n exec 25.6.0 node nodeperf.mjs' 'n exec 25.6.1 node nodeperf.mjs'
Benchmark 1: n exec 25.6.0 nodeperftest
  Time (mean ± σ):      50.5 ms ±   1.4 ms    [User: 28.4 ms, System: 8.7 ms]
  Range (min … max):    48.8 ms …  55.9 ms    55 runs
 
Benchmark 2: n exec 25.6.1 nodeperftest
  Time (mean ± σ):      56.0 ms ±   0.8 ms    [User: 28.3 ms, System: 8.7 ms]
  Range (min … max):    53.9 ms …  57.5 ms    51 runs
 
Benchmark 3: n exec 25.6.0 node nodeperf.mjs
  Time (mean ± σ):      48.6 ms ±   1.1 ms    [User: 27.4 ms, System: 8.3 ms]
  Range (min … max):    46.5 ms …  52.5 ms    57 runs
 
Benchmark 4: n exec 25.6.1 node nodeperf.mjs
  Time (mean ± σ):      55.0 ms ±   1.1 ms    [User: 27.6 ms, System: 8.5 ms]
  Range (min … max):    52.6 ms …  59.2 ms    52 runs
 
Summary
  n exec 25.6.0 node nodeperf.mjs ran
    1.04 ± 0.04 times faster than n exec 25.6.0 nodeperftest
    1.13 ± 0.03 times faster than n exec 25.6.1 node nodeperf.mjs
    1.15 ± 0.03 times faster than n exec 25.6.1 nodeperftest

Notice the .1 versions are still objectively slower, but to a much lesser degree, indicating that most of the slowdown is in whatever code inside spawn/spawnSync resolves a bare command name against PATH.
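If that is indeed where the time goes, one user-land mitigation (not a fix) would be to resolve the command against PATH once yourself and reuse the absolute path for every subsequent spawn. A minimal sketch under that assumption — resolveInPath is my own hypothetical helper, and 'node' stands in for 'clang' so it runs anywhere:

```javascript
import { spawnSync } from 'node:child_process';
import { accessSync, constants } from 'node:fs';
import { delimiter, join } from 'node:path';

// Walk PATH once and return the first executable match, so repeated
// spawns can use the absolute path and skip per-call resolution.
// (Hypothetical helper, not part of the node:child_process API.)
function resolveInPath(cmd) {
  for (const dir of (process.env.PATH ?? '').split(delimiter)) {
    const candidate = join(dir, cmd);
    try {
      accessSync(candidate, constants.X_OK);
      return candidate;
    } catch {
      // not in this directory; keep looking
    }
  }
  return cmd; // fall back to letting spawn resolve it
}

// 'node' stands in for 'clang' so this sketch runs without clang installed.
const nodeBin = resolveInPath('node');
const result = spawnSync(nodeBin, ['--version'], { encoding: 'utf8' });
console.log(result.stdout);
```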

exec/execSync

For added context, I have also tested these two with the following scripts (same package.json):

  • exec:
#!/usr/bin/env node
import chp from 'node:child_process';
chp.exec('clang --version', (err, stdout) => console.log(stdout));
  • execSync:
#!/usr/bin/env node
import chp from 'node:child_process';
console.log(chp.execSync('clang --version').toString('utf8'));

Benchmark:

> hyperfine --warmup 10 'n exec 25.6.0 nodeperftest' 'n exec 25.6.1 nodeperftest' 'n exec 25.6.0 node nodeperf.mjs' 'n exec 25.6.1 node nodeperf.mjs'
Benchmark 1: n exec 25.6.0 nodeperftest
  Time (mean ± σ):      53.3 ms ±   1.6 ms    [User: 28.0 ms, System: 9.7 ms]
  Range (min … max):    50.5 ms …  58.7 ms    54 runs
 
Benchmark 2: n exec 25.6.1 nodeperftest
  Time (mean ± σ):      60.4 ms ±   3.6 ms    [User: 28.4 ms, System: 9.9 ms]
  Range (min … max):    57.0 ms …  76.9 ms    50 runs
 
Benchmark 3: n exec 25.6.0 node nodeperf.mjs
  Time (mean ± σ):      50.5 ms ±   0.9 ms    [User: 27.1 ms, System: 8.8 ms]
  Range (min … max):    49.0 ms …  53.0 ms    56 runs
 
Benchmark 4: n exec 25.6.1 node nodeperf.mjs
  Time (mean ± σ):      56.9 ms ±   1.7 ms    [User: 27.4 ms, System: 9.1 ms]
  Range (min … max):    54.4 ms …  62.2 ms    49 runs
 
Summary
  n exec 25.6.0 node nodeperf.mjs ran
    1.06 ± 0.04 times faster than n exec 25.6.0 nodeperftest
    1.13 ± 0.04 times faster than n exec 25.6.1 node nodeperf.mjs
    1.20 ± 0.07 times faster than n exec 25.6.1 nodeperftest

The specific benchmark above is for execSync('clang --version'), but I have also benchmarked execSync('/usr/bin/clang') and both variants with exec; all four scored about the same as each other, so I am only showing one result for reference.

So it would appear the exec* functions are unaffected by the PATH resolution issue (expected, given they simply pass their input directly to the shell), but they still seem to be affected by whatever causes the smaller, yet still observable, slowdown in child process spawning on the .1 releases.

Update 3

I've now also tested the spawn* functions again with 'clang' as the command but with the shell: true option set, and that likewise eliminates the large performance drop, making them match the exec* benchmarks.
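For completeness, the shell: true variant looks like the following sketch (with 'node --version' standing in for 'clang --version' so it runs without clang installed). With shell: true, the command string is handed to the system shell, which performs its own PATH lookup instead of Node's:

```javascript
import chp from 'node:child_process';

// shell: true delegates command resolution to the system shell rather
// than Node's own PATH lookup, which sidesteps the large regression.
// 'node' stands in for 'clang' so this sketch is portable.
const result = chp.spawnSync('node', ['--version'], {
  shell: true,
  encoding: 'utf8',
});
console.log(result.stdout);
```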
