<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
  xmlns:atom="http://www.w3.org/2005/Atom"
  xmlns:content="http://purl.org/rss/1.0/modules/content/"
  xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Agnel Nieves - Tooling</title>
    <link>https://agnelnieves.com/blog/tag/tooling</link>
    <description>Blog posts on Tooling by Agnel Nieves.</description>
    <language>en-US</language>
    <lastBuildDate>Fri, 15 May 2026 01:12:13 GMT</lastBuildDate>
    <atom:link href="https://agnelnieves.com/blog/tag/tooling/feed.xml" rel="self" type="application/rss+xml" />
    
    <item>
      <title><![CDATA[Benchmarking the Rust JavaScript Toolchain in 2026: Real Numbers from a Real Migration]]></title>
      <link>https://agnelnieves.com/blog/benchmarking-the-rust-javascript-toolchain-in-2026</link>
      <guid isPermaLink="true">https://agnelnieves.com/blog/benchmarking-the-rust-javascript-toolchain-in-2026</guid>
      <description><![CDATA[I migrated this Next.js 16 site from the npm/webpack/ESLint/tsc stack to bun/Turbopack/Biome/oxlint/tsgo and measured every step. Lint is 29 to 53x faster, typecheck is 7 to 14x, install is 2.7x. Honest numbers and surprises.]]></description>
      <content:encoded><![CDATA[<h2>TL;DR</h2>
<p>Two weeks ago I <a href="/blog/rust-owns-the-javascript-toolchain-in-2026">argued</a> that by 2026 the JavaScript build pipeline is mostly Rust binaries. Then I migrated this site (Next.js 16, App Router, MDX blog, Radix components, the works) and measured every step before and after. Lint went from 3,631 ms with cold ESLint to 68 ms with oxlint, a <strong>53x speedup</strong>. Cold typecheck went from 3,176 ms to 222 ms (<strong>14x</strong>). Production build dropped from 19,948 ms to 10,349 ms (<strong>1.9x</strong>). Cold dependency install fell from 31.9 s to 12.0 s (<strong>2.7x</strong>). The lockfile shrank 38%. The dev tools themselves left the npm graph.</p>
<p>The one counterintuitive finding: my <code>node_modules</code> directory got <em>bigger</em>, from 1.3 GB to 1.6 GB. Native Rust binaries ship pre-compiled per platform, and they cost real disk. The savings live in install time, lockfile size, and the supply chain attack surface, not bytes on disk.</p>
<table>
<thead>
<tr>
<th>Step</th>
<th>Old</th>
<th>New</th>
<th>Speedup</th>
</tr>
</thead>
<tbody><tr>
<td>Cold dependency install</td>
<td>31.9 s (npm)</td>
<td>12.0 s (bun)</td>
<td>2.7x</td>
</tr>
<tr>
<td>Lint</td>
<td>3,631 ms (eslint)</td>
<td>68 ms (oxlint)</td>
<td><strong>53x</strong></td>
</tr>
<tr>
<td>Lint (formatter incl.)</td>
<td>3,631 ms (eslint)</td>
<td>124 ms (biome check)</td>
<td><strong>29x</strong></td>
</tr>
<tr>
<td>Typecheck</td>
<td>3,176 ms cold (tsc)</td>
<td>222 ms (tsgo)</td>
<td><strong>14x</strong></td>
</tr>
<tr>
<td>Production build, cold</td>
<td>19.9 s (webpack)</td>
<td>10.3 s (Turbopack)</td>
<td>1.9x</td>
</tr>
<tr>
<td>Dev server time-to-ready</td>
<td>684 ms (webpack)</td>
<td>433 ms (Turbopack)</td>
<td>1.6x</td>
</tr>
<tr>
<td>Lockfile size</td>
<td>669 KB (package-lock.json)</td>
<td>416 KB (bun.lock)</td>
<td>-38%</td>
</tr>
<tr>
<td>Top-level packages</td>
<td>1,362</td>
<td>875</td>
<td>-36%</td>
</tr>
<tr>
<td>node_modules size</td>
<td>1.3 GB</td>
<td>1.6 GB</td>
<td><strong>+23%</strong></td>
</tr>
</tbody></table>
<h2>The setup</h2>
<p>This is a personal site. Real, but small. The numbers below are what one design engineer ships in real life, not a synthetic mega-monorepo. The repo has:</p>
<ul>
<li>Next.js 16 (App Router)</li>
<li>13 published blog posts in MDX</li>
<li>11 components touching Radix UI (accordion, dropdown, slot)</li>
<li>Tailwind for styling, with a shadcn-style HSL CSS variable theme</li>
<li>A small Rust CLI in <code>cli/</code> that serves the <a href="/blog/building-a-terminal-portfolio-you-can-ssh-into">terminal portfolio over SSH</a></li>
<li>A handful of API routes (RSS, OG images, JSON feed, the <a href="/blog/x402-paywall-for-ai-agents">x402 paywall demo</a>)</li>
</ul>
<p>What changed:</p>
<table>
<thead>
<tr>
<th>Layer</th>
<th>Before</th>
<th>After</th>
</tr>
</thead>
<tbody><tr>
<td>Package manager</td>
<td>npm</td>
<td>bun</td>
</tr>
<tr>
<td>Bundler</td>
<td>webpack (<code>next dev/build --webpack</code>)</td>
<td>Turbopack (Next 16 default)</td>
</tr>
<tr>
<td>Linter</td>
<td>ESLint 9 + <code>eslint-config-next</code></td>
<td>Biome 2 + oxlint</td>
</tr>
<tr>
<td>Formatter</td>
<td>none</td>
<td>Biome 2</td>
</tr>
<tr>
<td>Type checker</td>
<td>TypeScript 5.9 (<code>tsc</code>)</td>
<td>TypeScript 7 beta (<code>tsgo</code>, Go-based)</td>
</tr>
<tr>
<td>CSS engine</td>
<td>Tailwind 3.4 + PostCSS + <code>tailwindcss-animate</code></td>
<td>Tailwind 4.3 + Lightning CSS + <code>tw-animate-css</code></td>
</tr>
<tr>
<td>Lockfile</td>
<td><code>package-lock.json</code></td>
<td><code>bun.lock</code> (text format)</td>
</tr>
<tr>
<td>Pre-commit</td>
<td>none</td>
<td><code>.githooks/pre-commit</code> (oxlint + biome + tsgo)</td>
</tr>
<tr>
<td>CI audit</td>
<td>none</td>
<td>OSV-Scanner + <code>bun audit</code> + <code>cargo audit</code> + <code>cargo-deny</code></td>
</tr>
</tbody></table>
<p>Eight commits, four PRs in spirit, one branch landed cleanly into <code>main</code>. The full migration log is published at <a href="/TOOLCHAIN.md">/TOOLCHAIN.md</a> for anyone who wants the move-by-move.</p>
<h2>How I measured</h2>
<p>Same machine, same wall outlet, same coffee. Apple Silicon laptop, plugged in, no other heavy processes running. Each measurement averaged over three runs. The bench script that produced these numbers lives at <code>scripts/bench.sh</code> and is short enough to read in one sitting.</p>
<p>For the old toolchain numbers, I checked out the last pre-migration commit (<code>4f00332</code>), wiped <code>node_modules</code>, ran <code>npm install</code> from scratch, then ran each tool with <code>time</code>. For the new numbers, I checked out the current <code>main</code>, wiped <code>node_modules</code>, ran <code>bun install</code>, then ran the new tools the same way. No tricks, no warm caches between toolchains.</p>
<p>A few honest caveats up front:</p>
<ol>
<li><strong>Wall-clock numbers vary.</strong> Runs jitter by roughly 10% with background load, so I took three samples and report the mean for lint and typecheck. Builds and installs are single-shot; nobody runs an install five times for a benchmark.</li>
<li><strong>Time-to-ready is the easiest metric to game.</strong> Webpack reports &quot;ready&quot; before it has compiled most routes (lazy compilation). Turbopack precompiles more aggressively. The wall numbers below are end-to-end from <code>npm run dev</code> invocation to the first &quot;Ready in&quot; line, which is closer to what you actually feel.</li>
<li><strong>macOS file caching helps every run after the first.</strong> I removed <code>.next/</code> between build samples but did not flush OS-level page cache. The numbers are realistic for repeated dev cycles, not for a fresh machine.</li>
</ol>
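<p>Every timing in this post reduces to the same tiny wall-clock pattern. A minimal sketch (BSD <code>date</code> on macOS has no <code>%N</code>, so perl&#39;s <code>Time::HiRes</code> supplies millisecond resolution; the <code>sleep</code> is a stand-in for whatever command you are measuring):</p>
<pre><code class="language-bash">#!/usr/bin/env bash
# Millisecond wall-clock timing that works on both macOS and Linux.
now_ms() { perl -MTime::HiRes=time -e &#39;printf &quot;%d&quot;, time()*1000&#39;; }

start=$(now_ms)
sleep 0.2                     # stand-in for the command under test
end=$(now_ms)
echo &quot;elapsed: $(( end - start )) ms&quot;
</code></pre>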
<h2>Results</h2>
<h3>Cold dependency install</h3>
<pre><code>npm install   31,923 ms   2,205 packages installed
bun install   11,972 ms     875 top-level / 1,762 total packages
</code></pre>
<p><code>bun install</code> is 2.7x faster on this project. Both ran cold (no <code>node_modules</code>, no global package cache primed). The bun lockfile is text format (<code>bun.lock</code>, 416 KB) which diffs cleanly in pull requests. The npm one was 669 KB of binary-ish JSON noise.</p>
<p>The package count drop matters more than the time. The new lockfile has 36% fewer top-level entries, mostly because removing ESLint sheds about 60 transitive dependencies and Tailwind v4 absorbs the PostCSS chain into Lightning CSS. Smaller surface area means fewer maintainers I need to trust before a <code>bun install</code> runs code on my machine.</p>
<p>Bun&#39;s other supply-chain win is its default postinstall policy. Bun blocks postinstall scripts by default and requires packages to be explicitly listed in <code>trustedDependencies</code> in <code>package.json</code> to run them. On this repo, two packages tried to run postinstalls during install: <code>@vercel/speed-insights</code> (an analytics-id sanity check) and <code>@coinbase/x402</code> (a TOS notice). Both are informational. Bun blocked both. Net postinstalls executed during my install: zero.</p>
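<p>The allowlist is a plain field in <code>package.json</code>. A sketch of what opting one of those packages back in would look like, if I ever wanted its postinstall to run (I have not):</p>
<pre><code class="language-json">{
  &quot;trustedDependencies&quot;: [
    &quot;@vercel/speed-insights&quot;
  ]
}
</code></pre>
<p>With the field absent, bun&#39;s default stands and nothing runs.</p>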
<h3>Lint</h3>
<pre><code>eslint    cold avg  3,631 ms  (3 samples: 3874, 3518, 3500 ms)
biome     cold avg    124 ms  (3 samples:  125,  122,  125 ms)   29x
oxlint    warm avg     68 ms  (3 samples:   67,   69,   67 ms)   53x
</code></pre>
<p>This is the headline. A linter going from 3.6 seconds to 68 milliseconds changes how you use it. ESLint at 3.6 s is a thing you run before pushing, with anxiety, sometimes deferring to &quot;I&#39;ll fix that later.&quot; Oxlint at 68 ms is a thing you run on every keystroke without thinking.</p>
<p>The Biome number includes formatter check and import-sort. Oxlint is purely the lint pass. Both are valid comparisons depending on what you want from the tool. In CI I run both: oxlint as a fast gate, Biome as the deeper check that also handles formatting.</p>
<p>What the migration lost: zero rules I actually use. <code>eslint-config-next</code> ships about 60 packages and 17 Next-specific rules. Biome 2 already covers the ones this codebase hits: <code>noImgElement</code>, <code>noHeadElement</code>, <code>noDocumentImportInPage</code>, <code>noHeadImportInDocument</code>, <code>noNextAsyncClientComponent</code>, <code>useGoogleFontDisplay</code>, <code>useGoogleFontPreconnect</code>, <code>useExhaustiveDependencies</code>, <code>useHookAtTopLevel</code>. The rules unique to ESLint were almost all about the legacy Pages Router (<code>no-html-link-for-pages</code>, <code>no-page-custom-font</code>, <code>no-script-component-in-head</code>, <code>no-document-styled-jsx</code>). I use the App Router exclusively, so they never fired anyway.</p>
<p>I dropped ESLint entirely in two lines: <code>bun remove eslint eslint-config-next</code> and one CI step removed. 60 transitive packages went with it.</p>
<h3>Typecheck</h3>
<pre><code>tsc    cold      3,176 ms
tsc    warm avg  1,601 ms  (1629, 1573 ms)
tsgo   warm avg    228 ms  (222, 221, 242 ms)
</code></pre>
<p><code>tsgo</code> is the Go-based port of TypeScript that Microsoft shipped as the <a href="https://devblogs.microsoft.com/typescript/announcing-typescript-7-0-beta">7.0 beta</a> on April 21. The package is <code>@typescript/native-preview@beta</code>. Microsoft&#39;s own announcement says you can probably start using it day-to-day, and Bloomberg, Canva, and Figma have shipped it on multi-million-line codebases. Personal portfolio is well below that bar.</p>
<p>The cold-vs-warm gap on <code>tsc</code> is real. The first run pays Node&#39;s JIT warm-up and builds the incremental cache; subsequent runs reuse <code>.tsbuildinfo</code>. <code>tsgo</code> is fast cold and warm because it never needs a JIT in the first place. On this codebase the warm comparison is 1.6 s vs 0.23 s, a 7x speedup; cold <code>tsc</code> against warm <code>tsgo</code> is 14x.</p>
<p>The migration was a one-line script change. <code>tsc --noEmit</code> became <code>tsgo --noEmit</code> and that was the entire diff. I kept <code>typescript@5.9.3</code> installed alongside because Next.js&#39;s <code>tsconfig.json</code> plugin wants the regular <code>typescript</code> peer for the editor language server, and IntelliSense in VS Code falls back to it for anyone not running the Native Preview extension.</p>
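<p>The relevant <code>package.json</code> slice, as a sketch (the <code>beta</code> dist-tag and the 5.9.3 pin are illustrative; pin whatever your editor tooling expects):</p>
<pre><code class="language-json">{
  &quot;scripts&quot;: {
    &quot;typecheck&quot;: &quot;tsgo --noEmit&quot;
  },
  &quot;devDependencies&quot;: {
    &quot;@typescript/native-preview&quot;: &quot;beta&quot;,
    &quot;typescript&quot;: &quot;5.9.3&quot;
  }
}
</code></pre>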
<h3>Production build</h3>
<pre><code>webpack    cold  19,948 ms
turbopack  cold  10,349 ms   1.9x
</code></pre>
<p>Both built the same 45 routes (13 blog posts, 1 deck, work pages, API routes, OG image generators, sitemap, robots, RSS, JSON feed). Turbopack is the default in Next 16. The <code>--webpack</code> flag still exists if you need it, mostly for compatibility with custom webpack loaders that have not been ported.</p>
<p>This is the most modest improvement on the page (1.9x), and it lines up with what Vercel has <a href="https://nextjs.org/blog/next-16">publicly reported</a>: roughly 2 to 5x faster production builds in the wild. Build perf is harder to win than lint perf because a bundler is doing real, irreducible work (parsing, scope analysis, tree-shaking, code-splitting, minification, source maps, asset optimization) that does not shrink as much when you rewrite it in Rust. The bigger wins come from Rust&#39;s better parallelism on multicore machines.</p>
<p>For incremental builds, the gap widens significantly. I did not script that comparison cleanly enough to report numbers, but anecdotally an MDX edit goes from &quot;make a coffee&quot; to &quot;blink and miss it.&quot; The HMR loop is what you actually feel.</p>
<h3>Dev server time-to-ready</h3>
<pre><code>webpack    684 ms (Next reports: 260 ms)
turbopack  433 ms (Next reports: 294 ms)
</code></pre>
<p>This one is closer than I expected and worth being honest about. Webpack often reports &quot;Ready&quot; before it has compiled the routes you have not visited yet (lazy compilation). Turbopack precompiles more eagerly. The wall numbers above are from <code>npm run dev</code> to the first &quot;Ready in&quot; line, which is the user-visible signal.</p>
<p>The real Turbopack win is what happens after that first &quot;Ready.&quot; On a route navigation or a file edit, Turbopack invalidates and recompiles in tens of milliseconds where webpack often took hundreds. The blog post I <a href="/blog/rust-owns-the-javascript-toolchain-in-2026">linked at the top</a> cites Vercel&#39;s number of roughly 87% faster dev startup on real Next 16 apps once you account for typical edit-and-reload cycles, not just first boot.</p>
<h3>Lockfile, dependency tree, and disk</h3>
<table>
<thead>
<tr>
<th></th>
<th>Old</th>
<th>New</th>
<th>Delta</th>
</tr>
</thead>
<tbody><tr>
<td>Lockfile</td>
<td><code>package-lock.json</code> 669 KB</td>
<td><code>bun.lock</code> 416 KB</td>
<td>-38%</td>
</tr>
<tr>
<td>Top-level packages</td>
<td>1,362</td>
<td>875</td>
<td>-36%</td>
</tr>
<tr>
<td>Total packages (incl. nested)</td>
<td>2,205</td>
<td>1,762</td>
<td>-20%</td>
</tr>
<tr>
<td>node_modules size</td>
<td>1.3 GB</td>
<td>1.6 GB</td>
<td>+23%</td>
</tr>
</tbody></table>
<p>The dependency tree shrank significantly in count, but the size on disk grew. This is the most counterintuitive finding of the migration and worth dwelling on, because it cuts against the easy &quot;Rust toolchain is leaner&quot; narrative.</p>
<p>What&#39;s happening: native Rust and Go binaries ship pre-compiled per platform. <code>@biomejs/biome</code> ships about 50 MB per platform. <code>@typescript/native-preview</code> ships the tsgo Go binary at ~150 MB. <code>@tailwindcss/oxide</code> (built on Lightning CSS) is another ~40 MB. Bun also keeps platform binaries for <code>next/swc</code> that npm skips: npm installs only the one matching your platform, while bun&#39;s default grabs all of them. Add it up and the new toolchain costs about 300 MB more on disk than the old one, despite needing 36% fewer packages.</p>
<p>Where the cost actually matters: install time (where bun amortizes parallel binary downloads) and supply-chain surface area (where 875 npm packages with maintainers I do not know is half as many as 1,362). The disk bytes are the cheapest part of the equation. SSDs are huge and getting cheaper. Maintainer trust does not get cheaper.</p>
<h2>What this means in practice</h2>
<p>Three things shifted in how I work after the migration.</p>
<p><strong>Pre-commit checks are now free.</strong> I added a <code>.githooks/pre-commit</code> shell script that runs oxlint, Biome, and tsgo on every commit. Total runtime on this codebase: about 750 ms. That is not a hook you grumble through, it is a hook you barely notice. The full sequence used to take 7 to 10 seconds with the old toolchain, which is exactly slow enough to make people commit with <code>--no-verify</code> after the first day.</p>
<p><strong>CI minutes get cheaper.</strong> GitHub Actions bills by the minute. The js job in <code>.github/workflows/ci.yml</code> used to spend roughly 12 to 18 seconds in the lint/typecheck/build sequence; now it spends about 4 to 6 seconds. On a public repo with frequent contributors that scales. On this private repo I switched the workflow to manual trigger only and let the pre-commit hook handle most local verification, which saves the entire workflow run for the cases where it is actually needed.</p>
<p><strong>Supply-chain hardening became practical.</strong> With the dev toolchain leaving the npm graph, the surface area shrank meaningfully. Combined with bun&#39;s default postinstall blocking, OSV-Scanner in CI, and <code>bun audit</code> plus <code>cargo audit</code> running on every push when CI is on, I can see every advisory affecting the project in one screen. The current count: 6 transitive advisories, all upstream-blocked (the web3 stack via <code>@coinbase/x402</code>, <code>@walletconnect</code>, and the <code>russh</code> dep tree on the Rust side). They surface as warnings; none are actionable from my side. That is the right state to be in.</p>
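<p>The CI side is a short job. A sketch as a manually triggered GitHub Actions workflow (action versions are simplified, the OSV-Scanner step is omitted here, and <code>cargo audit</code> assumes the binary is already on the runner; check each tool&#39;s docs):</p>
<pre><code class="language-yaml">name: audit
on: workflow_dispatch        # manual trigger, as described above
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: oven-sh/setup-bun@v2
      - run: bun install --frozen-lockfile
      - run: bun audit
      - run: cargo audit
</code></pre>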
<h2>What did not improve</h2>
<p>For honesty&#39;s sake, here are the parts of the migration that did not pay off, or where the wins are smaller than the marketing suggests.</p>
<p><strong>Disk usage went up, not down.</strong> Discussed above. If you are working on a machine with constrained disk (an old Air, a CI runner with quota), this matters.</p>
<p><strong>Initial dev startup is only 1.6x faster.</strong> The HMR loop and edit-recompile cycle are dramatically better, but if you are someone who runs <code>bun run dev</code> once and leaves it open all day, the boot-time win is small. Turbopack pays its dividends incrementally.</p>
<p><strong>Production build is 1.9x faster, not 5x.</strong> The Vercel marketing materials cite up to 5x for some workloads, but this depends heavily on what your build is doing. A site with lots of MDX (like this one) spends a meaningful chunk of build time in remark and rehype plugins running pure JavaScript, and Rust does not help there. If your bottleneck is webpack&#39;s bundle pass on TypeScript, the speedup is closer to the marketing number. If it is content processing, you save less.</p>
<p><strong>Some advisories will not go away.</strong> Three of the npm advisories (<code>ws</code>, <code>postcss</code>, <code>bn.js</code>) and three of the cargo ones (<code>lru</code>, <code>paste</code>, <code>rsa</code>) are stuck waiting for upstream packages I do not control to bump. The migration improved my visibility into them but did not remove them. Anyone telling you their Rust toolchain migration &quot;fixed all advisories&quot; is lying or has no transitive deps.</p>
<p><strong>The codemods only get you 70% of the way.</strong> <code>bunx @tailwindcss/upgrade</code> crashed on this codebase mid-flight because of an <code>@apply border-border</code> ordering issue in <code>globals.css</code>. The shadcn-style HSL theme indirection had to be done manually with <code>@theme inline</code>. Plugin migrations (tailwindcss-animate to tw-animate-css) are not automated. Plan on doing manual cleanup after every codemod, even when the docs imply it is one command.</p>
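<p>The manual piece of that cleanup ends up looking roughly like this in <code>globals.css</code> (variable names follow the shadcn convention and are illustrative, not this site&#39;s exact theme):</p>
<pre><code class="language-css">/* Map the existing shadcn-style CSS variables into Tailwind v4 theme tokens */
@theme inline {
  --color-background: var(--background);
  --color-foreground: var(--foreground);
  --color-border: var(--border);
}
</code></pre>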
<h2>Try it on your own project</h2>
<p>Drop the script below into <code>scripts/bench.sh</code> (or wherever you like) and <code>chmod +x</code> it. It works on any Node-ish project; just adjust the <code>bun run</code> calls to match your scripts. The plain-<code>time</code> fallback means you do not need <code>hyperfine</code> installed, but <code>brew install hyperfine</code> will give you cleaner numbers.</p>
<p>The recipe to capture both old and new on the same machine:</p>
<pre><code class="language-sh"># new toolchain numbers
bash scripts/bench.sh --full

# old toolchain numbers (after checking out the pre-migration commit)
git stash
git checkout &lt;pre-migration-sha&gt;
rm -rf node_modules bun.lock
npm install
bash scripts/bench.sh --full
git checkout main
git stash pop
rm -rf node_modules
bun install
</code></pre>
<p>The output is a markdown-shaped table you can paste straight into a writeup like this one.</p>
<h3>The script</h3>
<pre><code class="language-bash">#!/usr/bin/env bash
# Toolchain benchmark.
# Measures install, lint, typecheck, build, and dev-server-ready timings on
# the checked-out tree. Re-run on an old commit (pre-migration) and on HEAD
# to compare. Output goes to stdout as a markdown table.
#
# Usage:
#   scripts/bench.sh                # Quick run (3 samples each, no install)
#   scripts/bench.sh --full         # Full run (5 samples, includes cold install)

set -e
cd &quot;$(git rev-parse --show-toplevel)&quot;

FULL=0
[[ &quot;$1&quot; == &quot;--full&quot; ]] &amp;&amp; FULL=1

SAMPLES=3
[[ $FULL -eq 1 ]] &amp;&amp; SAMPLES=5

HAS_HYPERFINE=0
command -v hyperfine &gt;/dev/null 2&gt;&amp;1 &amp;&amp; HAS_HYPERFINE=1

H=$&#39;\033[33m&#39;
R=$&#39;\033[0m&#39;

git_short=$(git rev-parse --short HEAD)
git_msg=$(git log -1 --pretty=%s | head -c 60)
echo &quot;Benchmarking commit ${git_short}: ${git_msg}&quot;
echo &quot;Samples: ${SAMPLES}, hyperfine: $([ $HAS_HYPERFINE -eq 1 ] &amp;&amp; echo yes || echo &#39;no (using time)&#39;)&quot;
echo &quot;&quot;

run_bench() {
  local name=&quot;$1&quot;
  local cmd=&quot;$2&quot;
  printf &quot;${H}== %s ==${R}\n&quot; &quot;$name&quot;
  if [[ $HAS_HYPERFINE -eq 1 ]]; then
    hyperfine --warmup 1 --runs $SAMPLES --show-output --shell=bash &quot;$cmd&quot; 2&gt;&amp;1 | tail -8
  else
    local total=0
    for i in $(seq 1 $SAMPLES); do
      # macOS&#39;s BSD date lacks %N, so use perl for millisecond timestamps
      local start=$(perl -MTime::HiRes=time -e &#39;printf &quot;%d&quot;, time()*1000&#39;)
      eval &quot;$cmd&quot; &gt;/dev/null 2&gt;&amp;1 || true
      local end=$(perl -MTime::HiRes=time -e &#39;printf &quot;%d&quot;, time()*1000&#39;)
      local elapsed=$(( end - start ))
      total=$(( total + elapsed ))
      printf &quot;  run %d: %d ms\n&quot; $i $elapsed
    done
    printf &quot;  avg:   %d ms\n&quot; $(( total / SAMPLES ))
  fi
  echo &quot;&quot;
}

clean_next() { rm -rf .next; }

echo &quot;== Lockfile + node_modules ==&quot;
# head and cut exit 0 even on empty input, so default the vars explicitly
LOCKFILE=$(ls -1 bun.lock package-lock.json pnpm-lock.yaml yarn.lock 2&gt;/dev/null | head -1)
LOCKFILE=${LOCKFILE:-&quot;(none)&quot;}
LOCK_SIZE=$(du -h &quot;$LOCKFILE&quot; 2&gt;/dev/null | cut -f1)
LOCK_SIZE=${LOCK_SIZE:-&quot;?&quot;}
NM_SIZE=$(du -sh node_modules 2&gt;/dev/null | cut -f1)
NM_SIZE=${NM_SIZE:-&quot;(no node_modules)&quot;}
NM_COUNT=$(find node_modules -mindepth 1 -maxdepth 2 -type d 2&gt;/dev/null | wc -l | tr -d &#39; &#39;)
echo &quot;lockfile:       $LOCKFILE ($LOCK_SIZE)&quot;
echo &quot;node_modules:   $NM_SIZE&quot;
echo &quot;package count:  $NM_COUNT&quot;
echo &quot;&quot;

if [[ $FULL -eq 1 ]]; then
  echo &quot;${H}== Cold install (rm -rf node_modules) ==${R}&quot;
  rm -rf node_modules
  if command -v bun &gt;/dev/null 2&gt;&amp;1; then
    PKG_CMD=&quot;bun install&quot;
  elif command -v pnpm &gt;/dev/null 2&gt;&amp;1; then
    PKG_CMD=&quot;pnpm install&quot;
  else
    PKG_CMD=&quot;npm install&quot;
  fi
  echo &quot;running: $PKG_CMD&quot;
  /usr/bin/time -p $PKG_CMD 2&gt;&amp;1 | tail -5
  echo &quot;&quot;
fi

run_bench &quot;lint (biome check .)&quot; &quot;bun run lint&quot;
run_bench &quot;lint:fast (oxlint)&quot; &quot;bun run lint:fast&quot;
run_bench &quot;typecheck&quot; &quot;bun run typecheck&quot;

echo &quot;${H}== build (cold, removes .next first) ==${R}&quot;
clean_next
/usr/bin/time -p bun run build &gt; /tmp/bench-build.log 2&gt;&amp;1 || cat /tmp/bench-build.log
grep -E &quot;real |compiled|next build&quot; /tmp/bench-build.log | tail -5
echo &quot;&quot;

echo &quot;${H}== dev server time-to-ready ==${R}&quot;
clean_next
# perl for millisecond timestamps; macOS&#39;s BSD date has no %N
START=$(perl -MTime::HiRes=time -e &#39;printf &quot;%d&quot;, time()*1000&#39;)
bun run dev &gt; /tmp/bench-dev.log 2&gt;&amp;1 &amp;
DEV_PID=$!
for i in $(seq 1 60); do
  if grep -q &quot;Ready in&quot; /tmp/bench-dev.log 2&gt;/dev/null; then
    END=$(perl -MTime::HiRes=time -e &#39;printf &quot;%d&quot;, time()*1000&#39;)
    READY_MS=$(( END - START ))
    READY_LINE=$(grep &quot;Ready in&quot; /tmp/bench-dev.log | head -1)
    echo &quot;dev ready: ${READY_MS}ms (next reports: ${READY_LINE})&quot;
    break
  fi
  sleep 0.5
done
kill $DEV_PID 2&gt;/dev/null || true
wait 2&gt;/dev/null || true
echo &quot;&quot;
</code></pre>
<p>It is intentionally one file with no dependencies. If you want to extend it (add HMR latency, RSS, memory footprint), the <code>run_bench</code> helper is the only piece you need to reuse.</p>
<h2>What I would change if I were doing this again</h2>
<ol>
<li><strong>Run the codemods first, separately, on a clean branch.</strong> I bundled the codemod with the manual cleanup and ended up with one big commit instead of a clean codemod checkpoint plus a manual cleanup commit. Keep them separate so you can revert just the manual part if you mess up.</li>
<li><strong>Capture before-numbers up front.</strong> I had to check out the pre-migration commit and reinstall to get the old numbers retroactively. Running the bench on the pre-migration commit before starting saves an hour of branch swapping later.</li>
<li><strong>Move ESLint last, not first.</strong> I kept ESLint as a &quot;backstop&quot; through most of the migration, which meant carrying its 60 dependencies and one extra CI step longer than necessary. After verifying Biome&#39;s Next-rule coverage, dropping ESLint was a one-line PR I should have done day one.</li>
<li><strong>Skip the formatter PR until you are ready for the diff.</strong> Enabling Biome&#39;s formatter created a 1,440-line cosmetic diff. Worth doing, but worth doing as its own commit on a quiet day, not on top of a Tailwind migration.</li>
<li><strong>Set up the pre-commit hook on day one.</strong> The faster the local verification loop is, the more aggressively you iterate. I left this until the end and immediately wished I had done it first.</li>
</ol>
<h2>Cited sources</h2>
<ul>
<li><a href="/blog/rust-owns-the-javascript-toolchain-in-2026">&quot;Rust Owns the JavaScript Toolchain in 2026&quot;</a>, the case for the migration</li>
<li><a href="https://leerob.com/rust">Lee Robinson&#39;s &quot;Rust Is Eating JavaScript&quot;</a>, the original 2026 update</li>
<li><a href="https://nextjs.org/blog/next-16">Next.js 16 release notes</a> for Turbopack defaults</li>
<li><a href="https://tailwindcss.com/blog/tailwindcss-v4">Tailwind CSS v4 announcement</a> for the Oxide engine</li>
<li><a href="https://biomejs.dev/blog/biome-v2/">Biome v2 release post</a> for type-aware lint claims</li>
<li><a href="https://oxc.rs/blog/2025-06-10-oxlint-stable">Oxlint 1.0 stable release</a> for the 50 to 100x ESLint claim</li>
<li><a href="https://devblogs.microsoft.com/typescript/announcing-typescript-7-0-beta">TypeScript 7.0 beta announcement</a> for tsgo</li>
</ul>
<hr>
<p><em>Built by <a href="/about">Agnel Nieves</a>, a design engineer with 15+ years across product, design systems, and crypto. The full migration log lives at <a href="/TOOLCHAIN.md">/TOOLCHAIN.md</a>. More writing on <a href="/blog">the blog</a>.</em></p>
]]></content:encoded>
      <pubDate>Thu, 14 May 2026 00:00:00 GMT</pubDate>
      <author>agnel@agnelnieves.com (Agnel Nieves)</author>
      <dc:creator><![CDATA[Agnel Nieves]]></dc:creator>
      <category>Rust</category>
      <category>JavaScript</category>
      <category>Performance</category>
      <category>Benchmark</category>
      <category>Next.js</category>
      <category>Tooling</category>
    </item>
    <item>
      <title><![CDATA[Rust Owns the JavaScript Toolchain in 2026]]></title>
      <link>https://agnelnieves.com/blog/rust-owns-the-javascript-toolchain-in-2026</link>
      <guid isPermaLink="true">https://agnelnieves.com/blog/rust-owns-the-javascript-toolchain-in-2026</guid>
      <description><![CDATA[Rust replaced almost the entire JavaScript build toolchain by 2026. Turbopack, Rolldown, Biome, oxlint, Bun. Why npm security made it inevitable.]]></description>
      <content:encoded><![CDATA[<h2>TL;DR</h2>
<p>If you are shipping a modern JavaScript app in 2026, almost every step between your editor and your production bundle is a Rust binary. Next.js builds with Turbopack. Vite builds with Rolldown. Your linter and formatter are Biome or oxlint. Your CSS pipeline runs through Lightning CSS. Bun, the runtime that started in Zig, just merged its Rust rewrite into main. The result: bundlers are 5 to 30x faster, linters are 50 to 100x faster, and the dependency tree of &quot;the JavaScript toolchain&quot; has collapsed to a handful of statically linked binaries with zero npm postinstall scripts. That last detail matters more than it sounds, because the npm supply chain spent the last 18 months getting set on fire.</p>
<h2>Why I&#39;m writing this</h2>
<p>I noticed it in pieces. I shipped <a href="/blog/building-a-terminal-portfolio-you-can-ssh-into">a terminal portfolio over SSH</a> in Rust two weeks ago, and the build pipeline felt like a different planet compared to anything I had touched in my Node years. Then <a href="https://nextjs.org/blog/next-16">Next.js 16</a> went stable with Turbopack as the default. Then <a href="https://vite.dev/blog/announcing-vite8">Vite 8</a> shipped with Rolldown. Then <a href="https://github.com/oven-sh/bun/pull/30412">Anthropic merged a Zig-to-Rust port of Bun</a>, mostly written with AI. Then a fresh <a href="https://www.aikido.dev/blog/mini-shai-hulud-is-back-tanstack-compromised">Mini Shai-Hulud wave</a> hit TanStack, Mistral, and 160+ npm packages three days ago, on the heels of a <a href="https://www.wiz.io/blog/shai-hulud-2-0-ongoing-supply-chain-attack">November Shai-Hulud campaign</a> that compromised roughly 25,000 GitHub repos. &quot;Small dependency&quot; is now a polite phrase for &quot;unsandboxed code with full filesystem access.&quot;</p>
<p>Three threads, one direction. Worth writing down.</p>
<p>I am late to this take. Lee Robinson wrote <a href="https://leerob.com/rust">&quot;Rust Is Eating JavaScript&quot;</a> and the 2026 update made the bigger point cleanly. The angles I want to add are mobile and security, because the toolchain story is only the first act.</p>
<h2>The toolchain went Rust</h2>
<p>Pick a step in the build pipeline and chances are it has been quietly rewritten in Rust.</p>
<p><strong>Turbopack.</strong> <a href="https://nextjs.org/blog/next-16">Next.js 16</a> (February 2026) shipped Turbopack as the default bundler for both <code>next dev</code> and <code>next build</code>. The numbers Vercel published before the cutover: more than half of all dev sessions and more than a fifth of production builds were already on Turbopack via 15.3+. After the default flipped, the 16.2 release in April claimed roughly 87% faster dev startup on real apps. The HMR feels different. I had stopped noticing how often I was alt-tabbing during compiles, and after switching I noticed how often I was not.</p>
<p><strong>Rolldown.</strong> <a href="https://vite.dev/blog/announcing-vite8">Vite 8</a> shipped on March 12, 2026 with <a href="https://voidzero.dev/posts/announcing-rolldown-1-0">Rolldown</a> as the bundler, taking over from the Rollup plus esbuild combination. Public production numbers from the Rolldown team: Excalidraw dropped from 22.9s to 1.4s. PLAID, an internal product at ByteDance, dropped from 80s to 5s. Both are roughly 16x. The transitional <code>rolldown-vite</code> package was archived a week after 8.0 because it was redundant. If you <code>npm create vite@latest</code> today, you get Rolldown without asking for it.</p>
<p><strong>Rspack.</strong> ByteDance&#39;s webpack-compatible <a href="https://www.infoq.com/news/2026/01/rspack-final-rust/">Rust bundler hit 1.7 in January</a> and is on the runway to 2.0. Internally they ship TikTok, Douyin, Lark, and Coze on it. Externally, Microsoft, Amazon, and Discord are in the public adopter list. The migration cases I have seen all land in the same neighborhood: 9x faster cold dev, 3x less memory, prod builds under 4 seconds on apps that used to take 30.</p>
<p><strong>Biome and oxlint.</strong> ESLint is no longer the default lint stack. <a href="https://biomejs.dev/blog/biome-v2/">Biome 2</a> added type-aware rules that do not require running the TypeScript compiler. <a href="https://oxc.rs/blog/2025-06-10-oxlint-stable">Oxlint 1.0 went stable in June 2025</a>, ran 50 to 100x faster than ESLint on the same rulesets, and shipped a <a href="https://voidzero.dev/posts/announcing-oxlint-type-aware-linting-alpha">type-aware alpha</a> in March on top of <code>tsgolint</code> (which itself runs on Microsoft&#39;s Go-based <code>tsgo</code>). Shopify reported a 71% lint-time reduction across an ~80,000 file TypeScript codebase. The pattern teams actually use: oxlint as a pre-commit and CI gate for the hot rules, Biome for formatting, ESLint kept around for any plugin you cannot get rid of yet.</p>
<p><strong>Lightning CSS.</strong> <a href="https://tailwindcss.com/blog/tailwindcss-v4">Tailwind v4</a>&#39;s &quot;Oxide&quot; engine is <a href="https://github.com/parcel-bundler/lightningcss">Lightning CSS</a> underneath. Full builds 5x faster, incremental builds up to 100x. The PostCSS chain is gone from the default setup.</p>
<p><strong>Bun.</strong> Bun was a Zig project. In May, Anthropic (which acquired Bun last year and uses it inside Claude Code) <a href="https://github.com/oven-sh/bun/pull/30412">merged a port of the entire codebase from Zig to Rust</a> into main. The port is roughly 966,000 lines, it was largely AI-assisted, and it passes 99.8% of the existing test suite on Linux x64 glibc. Bun 1.3.14 is positioned as the last Zig release. The reasons given: Rust&#39;s memory safety story, and Zig&#39;s no-AI contribution policy clashing with how Anthropic builds tooling.</p>
<p><strong>Deno and Node.</strong> Deno&#39;s core was always Rust, and the <a href="https://deno.com/blog/v2.6">2.6 release in December</a> bumped V8 to 14.2, added <code>dx</code> to replace <code>npx</code>, and started using <code>tsgo</code> for type checking. Node 25.2 (November 2025) shipped stable TypeScript support through <a href="https://github.com/nodejs/amaro">Amaro 1.0</a>, which is a wrapper around <code>@swc/wasm-typescript</code>. Node&#39;s official type-stripping path is now SWC compiled to WebAssembly, shipping inside core. There is more Rust running inside <code>node</code> than most teams realize.</p>
<p>If you list the build steps in your CI pipeline (lint, format, type check, transpile, bundle, minify, optimize CSS) the only step still being done by JavaScript on JavaScript code in 2026 is type checking. And the most consequential type checker (Microsoft&#39;s official one) is <a href="https://devblogs.microsoft.com/typescript/typescript-native-port/">being ported to Go</a>, not Rust, by the team that wrote TypeScript in the first place. That is the one part of the ecosystem where the obvious &quot;rewrite it in Rust&quot; bet did not pay off, and it is worth saying out loud.</p>
<h2>Bigger than dev tools</h2>
<p>This is bigger than JavaScript. Microsoft has <a href="https://azure.microsoft.com/en-us/blog/microsoft-azure-security-evolution-embrace-secure-multitenancy-confidential-compute-and-rust/">publicly committed to embracing Rust across Azure infrastructure</a>, and Azure CTO Mark Russinovich has spent the last two years telling every conference audience that new kernel development at Microsoft should stop using C or C++. New code for Windows and Azure should be written in Rust. That is not a future commitment. Production components already moved: parts of <code>Win32k.sys</code>, Hyper-V, the <a href="https://www.microsoft.com/en-us/research/blog/rewriting-symcrypt-in-rust-to-modernize-microsofts-cryptographic-library/">SymCrypt cryptographic library</a>, and Azure Data Explorer. The <a href="https://devblogs.microsoft.com/azure-sdk/azure-sdk-release-march-2026/">Azure SDK for Rust</a> shipped beta in February 2025 and now releases monthly alongside the other language SDKs. At <a href="https://thenewstack.io/microsoft-goes-all-in-on-rust-for-core-infrastructure-and-much-more/">RustConf 2025</a>, Microsoft&#39;s Galen Hunt described the broader internal goal as refactoring roughly one million lines of code per month for the rest of the decade, with the aim of eliminating C and C++ from Microsoft&#39;s codebase by 2030.</p>
<p>The Linux kernel has merged Rust drivers since 6.1 (late 2022) and the <a href="https://rust-for-linux.com/">Rust-for-Linux subsystem keeps growing each release</a>. AWS rewrote <a href="https://github.com/firecracker-microvm/firecracker">Firecracker</a>, its microVM monitor, in Rust years ago and continues to add Rust services across the stack. Google has Rust in <a href="https://security.googleblog.com/2024/09/eliminating-memory-safety-vulnerabilities-Android.html">Android</a>, in parts of Chromium, and in Fuchsia. When the kernel your build tools run on is moving to Rust, the build tools moving to Rust is the predictable next step.</p>
<h2>What about mobile</h2>
<p>Mobile is a different shape and Rust shows up differently.</p>
<p>The default cross-platform stacks (React Native, Flutter) are not Rust. React Native&#39;s New Architecture (Fabric, TurboModules, JSI) is C++. Hermes is C++. Flutter is Dart plus C++. What changed in the last 18 months is the layer just above that: Rust as a shared core, exposed to those frameworks through binding generators.</p>
<p>The pattern teams actually use: <strong>write the things that should not be rewritten twice in Rust, and put a thin native UI on top of them.</strong></p>
<ul>
<li><strong>Signal</strong> does this with <a href="https://github.com/signalapp/libsignal"><code>libsignal</code></a>. The protocol, AES-GCM and other primitives, zero-knowledge groups, and remote attestation are all Rust crates. Java, Swift, and TypeScript wrappers expose them to Android, iOS, and Desktop.</li>
<li><strong>1Password</strong> rewrote its core (sync, storage, crypto, permissions) in Rust, kept native UI per platform, and built <a href="https://1password.com/blog/typeshare-for-rust">TypeShare</a> so type definitions stay consistent across the FFI boundary. One Rust core, eight clients on top.</li>
<li><strong>Mozilla</strong> ships sync, login storage, browsing history, push, and experimentation as Rust components shared between Firefox iOS and Firefox Android, all generated through <a href="https://github.com/mozilla/uniffi-rs">UniFFI</a>.</li>
<li><strong>Cloudflare</strong> ships <a href="https://github.com/cloudflare/boringtun">BoringTun</a>, a userspace WireGuard implementation in Rust, on millions of consumer iOS and Android devices through Mozilla VPN and others.</li>
<li><strong>Bitwarden</strong> is moving its cryptographic operations into a shared <code>bitwarden_core</code> Rust SDK consumed by every client.</li>
</ul>
<p>The new bit, and the reason this thread is worth pulling on right now: in December 2024 Mozilla and Filament released <a href="https://hacks.mozilla.org/2024/12/introducing-uniffi-for-react-native-rust-powered-turbo-modules/"><strong>Uniffi for React Native</strong></a>. It generates a TurboModule (TypeScript plus the JSI C++ glue) directly from a Rust crate and a UniFFI interface definition. That finally aligns React Native with the pattern Signal and 1Password have been using productively for years. Flutter has had <a href="https://github.com/fzyzcjy/flutter_rust_bridge"><code>flutter_rust_bridge</code></a> doing the same job for a while, and v2 is now an officially Flutter Favorite package with async Rust support, web targets, and zero-copy big arrays.</p>
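<p>To make the shared-core pattern concrete, here is a minimal hand-rolled sketch of the boundary those generators automate. The <code>core_word_count</code> function and its names are mine, purely illustrative, not from libsignal or UniFFI; a real core exports many such functions and lets the generator produce the Swift, Kotlin, and TypeScript wrappers.</p>

```rust
use std::ffi::CStr;
use std::os::raw::c_char;

// Hypothetical "shared core" logic: pure Rust that every platform
// client would call through its own thin wrapper.
fn word_count_impl(text: &str) -> usize {
    text.split_whitespace().count()
}

// The C ABI surface that Swift/Kotlin/JS glue binds against.
// UniFFI and flutter_rust_bridge generate this layer (plus the
// per-language wrappers) from the Rust signatures automatically.
#[no_mangle]
pub extern "C" fn core_word_count(text: *const c_char) -> usize {
    if text.is_null() {
        return 0;
    }
    // SAFETY: caller must pass a valid NUL-terminated string.
    let s = unsafe { CStr::from_ptr(text) };
    s.to_str().map(word_count_impl).unwrap_or(0)
}

fn main() {
    // Exercise the boundary the way a foreign caller would.
    let input = std::ffi::CString::new("one rust core three clients").unwrap();
    println!("{}", core_word_count(input.as_ptr())); // prints 5
}
```

<p>Hand-rolling this for one function is tolerable. Hand-rolling it for a whole SDK across three platforms is exactly the drudgery the binding generators exist to remove.</p>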
<p>The Rust-native UI frameworks (Dioxus, Slint, egui) are real and improving. <a href="https://slint.dev/blog/slint-1.12-released">Slint added an iOS tech preview in 1.12</a> last June, with proper Xcode integration, simulator and device deploy, TestFlight, and App Store publishing. Dioxus 0.7 runs on iOS reasonably well and on Android with some pain (Huawei and Airbus are public production references). None of these are what I would reach for to build a polished consumer app today, but the trade is no longer &quot;no Rust GUI on phones.&quot; It is &quot;yes, but you should know what you are signing up for.&quot;</p>
<p>The honest tradeoffs on mobile have not gone away.</p>
<ul>
<li><strong>Toolchain pain.</strong> NDK plus <code>cargo-ndk</code> for Android. Xcode, codesigning, provisioning, and a sometimes fragile <code>lipo</code>/XCFramework dance for iOS. Every Rust target multiplies your CI matrix.</li>
<li><strong>String impedance.</strong> Rust strings are UTF-8. Swift&#39;s <code>String</code> has been UTF-8 internally since Swift 5, but anything routed through <code>NSString</code> is UTF-16, and the JVM is UTF-16 with JNI speaking &quot;modified UTF-8&quot; at the boundary. Conversions are easy to get wrong, and &quot;wrong&quot; means corruption or crashes.</li>
<li><strong>FFI overhead is real.</strong> The well-worn guidance: batch your calls. One call processing 100 items beats 100 calls processing one.</li>
<li><strong>Binary size.</strong> A naive Rust core can add several megabytes to your IPA or APK before optimization. LTO, <code>panic=abort</code>, and symbol stripping are the price of entry.</li>
<li><strong>Platform API drift.</strong> Widgets, App Intents, Live Activities, Material You, and the latest media APIs almost always live outside the Rust core, in native code. Apple and Google ship features faster than any binding generator can chase.</li>
</ul>
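<p>The batching bullet deserves one concrete shape: design the core API around slices, not single items, so the expensive boundary crossing happens once. A stdlib-only sketch with hypothetical function names (the hash is a stand-in for whatever per-item work the core does):</p>

```rust
// Per-item API: one FFI crossing per item. Fine in Rust-to-Rust,
// expensive when each call pays JNI/JSI/Swift-bridge overhead.
fn hash_one(item: &str) -> u64 {
    use std::hash::{Hash, Hasher};
    let mut h = std::collections::hash_map::DefaultHasher::new();
    item.hash(&mut h);
    h.finish()
}

// Batched API: one crossing, N items. This is the shape you want
// at the binding boundary.
fn hash_batch(items: &[&str]) -> Vec<u64> {
    items.iter().map(|i| hash_one(i)).collect()
}

fn main() {
    let items = ["alpha", "beta", "gamma"];
    // One boundary call instead of three.
    let hashes = hash_batch(&items);
    println!("{} hashes from one call", hashes.len()); // prints "3 hashes from one call"
}
```

<p>On the Rust side the two are nearly identical. The difference only shows up once a JSI or JNI hop sits between caller and callee, which is why the slice-shaped signature should be the default from day one.</p>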
<p>The takeaway for mobile is the same as on the desktop side: Rust as a shared core works. Rust as the whole app on mobile is niche.</p>
<h2>And then there is npm</h2>
<p>This is the part of the story I do not think gets told enough.</p>
<p>The last 18 months were brutal for the npm registry. Three events worth grounding the rest of this on:</p>
<p><strong>September 8, 2025: <a href="https://www.aikido.dev/blog/npm-debug-and-chalk-packages-compromised">qix / chalk + debug</a>.</strong> A maintainer named Josh Junon (handle <code>qix</code>) owns some of the most foundational packages in JavaScript: <code>chalk</code>, <code>debug</code>, <code>ansi-styles</code>, <code>strip-ansi</code>, <code>color-convert</code>, and a dozen more. He received a phishing email from <code>support@npmjs.help</code>, a lookalike domain registered three days earlier. The attacker captured his credentials and TOTP through an adversary-in-the-middle session and published malicious versions of 18 packages with combined weekly downloads of roughly 2 billion. <a href="https://www.exiger.com/perspectives/a-single-compromise-threatened-34-percent-of-npm/">Exiger estimated the transitive blast radius</a> at around 34% of the entire npm registry. The payload was a browser-side crypto-clipper that hooked <code>window.ethereum</code>, <code>window.solana</code>, <code>fetch</code>, and <code>XHR</code>, then swapped wallet addresses in outgoing transactions using minimum Levenshtein distance so the substitution would survive a quick visual check. Aikido flagged it within five minutes. <a href="https://www.sygnia.co/threat-reports-and-advisories/npm-supply-chain-attack-september-2025/">The malicious versions were on the registry</a> and being installed by CI worldwide for the entire window before that. Vercel published a same-day <a href="https://vercel.com/blog/critical-npm-supply-chain-attack-response-september-8-2025">advisory</a>.</p>
<p><strong>September 15, 2025: <a href="https://unit42.paloaltonetworks.com/npm-supply-chain-attack/">Shai-Hulud</a>.</strong> A week later, <code>@ctrl/tinycolor</code> (2M+ weekly downloads) was found to contain a self-replicating worm. The malware was the first true worm in the npm ecosystem: every install used the victim&#39;s npm token to enumerate other packages the maintainer owned, injected a Webpack-bundled payload, bumped the version, and republished. By the time <a href="https://www.cisa.gov/news-events/alerts/2025/09/23/widespread-supply-chain-compromise-impacting-npm-ecosystem">CISA issued its alert on September 23</a>, more than 500 packages were affected, including ones from CrowdStrike, <code>@nativescript-community</code>, and <code>@operato</code>. The payload ran TruffleHog against the host to scrape AWS keys, GCP credentials, Azure tokens, GitHub PATs, and npm tokens, then uploaded them to public GitHub repos named &quot;Shai-Hulud.&quot; <a href="https://www.wiz.io/blog/shai-hulud-2-0-ongoing-supply-chain-attack">A second wave on November 24</a> moved its hook from <code>postinstall</code> to <code>preinstall</code> (so it ran before any user-visible install output) and hit roughly 25,000 GitHub repositories. <a href="https://www.kb.cert.org/vuls/id/534320">CERT/CC VU#534320</a> called it the first credential-stealing, self-propagating worm in npm.</p>
<p><strong>May 11, 2026: <a href="https://www.aikido.dev/blog/mini-shai-hulud-is-back-tanstack-compromised">Mini Shai-Hulud</a>.</strong> Three days ago, as I write this. A threat actor that StepSecurity has been tracking as TeamPCP <a href="https://tanstack.com/blog/npm-supply-chain-compromise-postmortem">hit TanStack hard</a>: 42 packages, 84 malicious versions. The same campaign caught Mistral AI, Guardrails AI, UiPath, OpenSearch, and the Bitwarden CLI. Socket flagged 416 affected packages in total; Aikido catalogued 373 malicious package-version entries across 169 names. The technique was novel: a <code>pull_request_target</code> workflow that ran attacker-controlled fork code, GitHub Actions cache poisoning across the fork/base trust boundary, and OIDC token extraction from the runner&#39;s process memory. Once inside maintainer CI, the worm did what its predecessor did: scrape credentials, find publishable packages, inject, republish.</p>
<p>The detail that mattered most: the malicious TanStack packages were <a href="https://www.stepsecurity.io/blog/mini-shai-hulud-is-back-a-self-spreading-supply-chain-attack-hits-the-npm-ecosystem">the first documented npm malware shipping with valid SLSA provenance attestations</a>. The cryptographic chain worked perfectly. The attacker had simply stolen the GitHub OIDC token used to sign builds. Provenance proves &quot;this package was built by this CI run.&quot; It does not prove the CI run was not compromised.</p>
<p>The earlier baseline (the December 2023 <a href="https://www.ledger.com/blog/security-incident-report">Ledger <code>connect-kit</code> attack</a> that drained DeFi front-ends for about five hours, the <a href="https://www.sonatype.com/blog/lottie-player-compromised-in-supply-chain-attack-all-you-need-to-know">LottieFiles compromise in October 2024</a>, the <a href="https://www.sonatype.com/blog/npm-packages-rspack-vant-compromised-blocked-by-sonatype">Rspack token theft in December 2024</a>) is what made September 2025 land as a confirmation, not a surprise. May 2026 made it a pattern.</p>
<p>The structural reason these keep happening: <strong>npm is built on install-time arbitrary code execution and ambient authority.</strong> The <code>preinstall</code>, <code>install</code>, and <code>postinstall</code> hooks run before any human can review them, with full read and write filesystem access and full network. A typical app&#39;s <code>node_modules</code> is several thousand packages from several hundred maintainers, any one of whom can phish or token-leak their way into your build. A logger and a color formatter have the same operating-system privileges as your application code, because Node has no capability-based security to draw a line between them.</p>
<p>The protective measures that shipped (npm provenance, expanded required 2FA in October and November 2025, scan tools like Socket and OSV-Scanner) help at the margins but do not change the model. A phished maintainer with a valid provenance signature still ships valid signed malware. Mini Shai-Hulud proved that this week, with attestations the GitHub Actions runner generated honestly while compromised. 2FA does not stop an adversary-in-the-middle proxy. Detection time has tightened from days to minutes, which is real progress, but the first thousand installs of a poisoned package still happen.</p>
<h2>Why this makes Rust tooling load-bearing</h2>
<p>Moving the build toolchain to Rust does not fix the structural problems in your runtime dependencies. Your app&#39;s <code>node_modules</code> is still a forest of trust assumptions. What it does change is that the <strong>build</strong> toolchain (the most-installed, most-privileged surface area in the average project) leaves the npm graph entirely.</p>
<p>A linter, a formatter, a bundler, a test runner, and a transpiler are pure dev-time tools with read access to your whole codebase and write access to your machine. They are also among the most-installed packages on npm, which is exactly what made their maintainers attractive phishing targets. Replacing them with one statically linked Rust binary collapses that surface. There is no <code>postinstall</code>. There is no transitive graph. There is a single maintainer organization whose key you can pin.</p>
<p>Concrete numbers from a clean <code>npm install</code> I ran this week:</p>
<table>
<thead>
<tr>
<th>Toolchain choice</th>
<th>Top-level deps</th>
<th>Total packages</th>
<th>Disk</th>
</tr>
</thead>
<tbody><tr>
<td><code>eslint</code> + <code>prettier</code></td>
<td>59</td>
<td>77</td>
<td>20 MB</td>
</tr>
<tr>
<td><code>@biomejs/biome</code> (Rust)</td>
<td>1 (a platform binary wrapper)</td>
<td>1 user-facing</td>
<td>48 MB binary</td>
</tr>
<tr>
<td><code>rollup</code></td>
<td>3</td>
<td>4</td>
<td>4.5 MB</td>
</tr>
<tr>
<td><code>rolldown</code> (Rust)</td>
<td>3 (platform binary wrappers)</td>
<td>4</td>
<td>19 MB</td>
</tr>
</tbody></table>
<p>Biome&#39;s own docs report that a realistic ESLint + Prettier + typical plugins setup lands closer to 127 to 200 packages once plugin ecosystems get involved. The 59 above is the bare minimum.</p>
<p>Cargo has its own supply-chain risks (typo-squats, the long-running debate about <code>serde</code>&#39;s precompiled binaries, the occasional malicious crate). What it does not have is a postinstall hook that fires arbitrary code on every developer machine for every transitive build dependency, on every install, forever. That is the difference that matters.</p>
<h2>Why Rust specifically</h2>
<p>Two reasons. The performance numbers are the easy half: 5 to 30x for bundlers, 50 to 100x for linters, 3 to 4x lower memory across the board. That alone would explain the move.</p>
<p>The harder half is everything else.</p>
<ul>
<li><strong>Memory safety without a GC.</strong> Long-running dev servers and HMR processes do not want a garbage-collection pause landing at the wrong moment. GC-based runtimes (Go, JS) pay that tax in some form; Rust never does.</li>
<li><strong>Single-binary distribution.</strong> CLIs like <code>oxlint</code> and <code>biome</code> ship as one executable per platform. No Node bootstrap, no thousand <code>require()</code> calls, no JS launcher script wrapping a native binary wrapping the actual logic.</li>
<li><strong>Fearless parallelism.</strong> <code>rayon</code> and <code>tokio</code> let linters and bundlers fan out across cores cleanly. The JS event loop is fast but it is one core at a time.</li>
<li><strong>WebAssembly as an escape hatch.</strong> Node 25.2 ships SWC as wasm inside core. Browser playgrounds run Oxc directly. The same Rust crate can serve a CLI, a Node API, and a browser sandbox without rewriting any of the hot path.</li>
</ul>
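<p>The parallelism bullet in one sketch. Real linters use <code>rayon</code>&#39;s work-stealing <code>par_iter</code>; this stdlib-only version with scoped threads shows the same fan-out shape, with a toy per-file check standing in for real lint rules:</p>

```rust
use std::thread;

// Hypothetical per-file "lint": count lines longer than 80 columns.
// Stands in for the real per-file work a linter does.
fn lint_file(source: &str) -> usize {
    source.lines().filter(|l| l.len() > 80).count()
}

fn main() {
    let files = vec![
        "short line\n".to_string() + &"x".repeat(100),
        "y".repeat(81),
        "fine".to_string(),
    ];

    // Fan the files out across threads and collect per-file results.
    // rayon's par_iter() gives the same shape with work stealing and
    // no manual scope management; tokio covers the async I/O side.
    let results: Vec<usize> = thread::scope(|s| {
        let handles: Vec<_> = files
            .iter()
            .map(|f| s.spawn(move || lint_file(f)))
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).collect()
    });

    println!("{:?}", results); // prints [1, 1, 0]
}
```

<p>The borrow checker is what makes this boring: the scoped threads share <code>files</code> by reference, and any data race would be a compile error rather than a flaky CI run. That is the &quot;fearless&quot; part.</p>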
<p>It is also worth saying clearly: Rust is not the only winner. TypeScript&#39;s compiler is <a href="https://devblogs.microsoft.com/typescript/typescript-native-port/">being ported to <strong>Go</strong></a>, not Rust, by Anders Hejlsberg and the team that wrote TypeScript in the first place. <a href="https://visualstudiomagazine.com/articles/2026/04/21/typescript-7-0-beta-arrives-on-go-based-foundation-with-10x-speed-claim.aspx">The 7.0 beta dropped on April 21, 2026</a> with a 10x speed claim. Bun started in <strong>Zig</strong> and only just switched. The honest story is &quot;native code ate JS tooling, mostly Rust, but not entirely.&quot;</p>
<h2>What I would build with this stack today</h2>
<p>If I were starting a new app this week:</p>
<ul>
<li><code>next@16</code> or <code>vite@8</code> for the framework. Both Rust bundlers underneath.</li>
<li><code>@biomejs/biome</code> for formatting plus a lint baseline. <code>oxlint</code> as a CI gate for the type-aware rules.</li>
<li>Tailwind v4 with the Oxide engine.</li>
<li>A Rust core library for anything cryptographic, anything sync-heavy, or anything I would want to share between web and mobile, exposed through UniFFI or <code>flutter_rust_bridge</code> if mobile is on the roadmap.</li>
<li><code>bun</code> or <code>pnpm</code> as the package manager. Lockfile discipline matters more than the manager you pick.</li>
</ul>
<p>The whole pipeline would have one or two Node processes that exist mainly to host the dev server and run user code. Everything else underneath would be Rust binaries doing the heavy work. That is not aspirational anymore. That is just <code>npm create</code>.</p>
<h2>The honest limitations</h2>
<p>Three things I would rather not glaze over.</p>
<p><strong>Plugin ecosystems fragment.</strong> Every Rust tool eventually faces the same fork: stay in Rust (fast, inaccessible to most JS contributors) or expose a JS plugin API (slower, ergonomic). Rolldown went hard on Rollup-plugin compatibility for adoption. Oxlint shipped a JS plugin alpha in March because too many ESLint rules people care about live in plugin land. Marvin Hagemeister&#39;s <a href="https://marvinh.dev/blog/speeding-up-javascript-ecosystem-part-11/">Speeding up the JavaScript ecosystem</a> series is the canonical write-up of this tension and worth reading if you are considering migrating.</p>
<p><strong>The contributor bus factor is real.</strong> Rust is a learning curve for JS-native maintainers, and the contributor bases behind Turbopack, Oxc, and Biome are small relative to how much now depends on them. That concentration is worth pricing in if you are building on top of these tools.</p>
<p><strong>Memory regressions happen.</strong> Rolldown 1.0 RC.18 shipped with about 7x higher RSS than Vite 7 in dev for some apps before the fixes landed. Bundlers in Rust are not automatically more memory-efficient. They are faster, but the wins live in CPU time, not necessarily memory.</p>
<h2>Try it yourself</h2>
<p>If you have not migrated a project off ESLint yet, <code>bunx @biomejs/biome init</code> plus <code>bun add -d oxlint</code> will take you 15 minutes. The Biome migration commands (<code>biome migrate eslint</code>, <code>biome migrate prettier</code>) are doing the right thing in 2026. The Vite 8 upgrade is an <code>npm install vite@8</code> for most apps. Next.js 16 is the same story; the Turbopack flag is just gone now.</p>
<p>The faster builds are the part people notice. The smaller dependency tree is the part your security team should notice. The shared Rust core that could ship to mobile next quarter is the part your CTO should notice.</p>
<p>It all started with one developer being annoyed enough at Babel to write SWC. We got here by accident.</p>
<hr>
<p><em>Built by <a href="/about">Agnel Nieves</a>, a design engineer with 15+ years across product, design systems, and crypto. More writing on <a href="/blog">the blog</a>.</em></p>
]]></content:encoded>
      <pubDate>Thu, 14 May 2026 00:00:00 GMT</pubDate>
      <author>agnel@agnelnieves.com (Agnel Nieves)</author>
      <dc:creator><![CDATA[Agnel Nieves]]></dc:creator>
      <category>Rust</category>
      <category>JavaScript</category>
      <category>Web Development</category>
      <category>Security</category>
      <category>Tooling</category>
      <category>Mobile</category>
    </item>
  </channel>
</rss>