I migrated this Next.js 16 site from the npm/webpack/ESLint/tsc stack to bun/Turbopack/Biome/oxlint/tsgo and measured every step. Lint is 30 to 50x faster, typecheck is 7x, install is 2.7x. Honest numbers and surprises.
Benchmarking the Rust JavaScript Toolchain in 2026: Real Numbers from a Real Migration
TL;DR
Two weeks ago I argued that by 2026 the JavaScript build pipeline is mostly Rust binaries. Then I migrated this site (Next.js 16, App Router, MDX blog, Radix components, the works) and measured every step before and after. Cold lint went from 3,631 ms to 68 ms, a 53x speedup. Cold typecheck went from 3,176 ms to 222 ms (14x). Production build dropped from 19,948 ms to 10,349 ms (1.9x). Cold dependency install fell from 31.9 s to 12.0 s (2.7x). The lockfile shrank 38%. The dev tools themselves left the npm graph.
The one counterintuitive finding: my node_modules directory got bigger, from 1.3 GB to 1.6 GB. Native Rust binaries ship pre-compiled per platform, and they cost real disk. The savings live in install time, lockfile size, and the supply chain attack surface, not bytes on disk.
| Step | Old | New | Speedup |
|---|---|---|---|
| Cold dependency install | 31.9 s (npm) | 12.0 s (bun) | 2.7x |
| Lint | 3,631 ms (eslint) | 68 ms (oxlint) | 53x |
| Lint (formatter incl.) | 3,631 ms (eslint) | 124 ms (biome check) | 29x |
| Typecheck | 3,176 ms cold (tsc) | 222 ms (tsgo) | 14x |
| Production build, cold | 19.9 s (webpack) | 10.3 s (Turbopack) | 1.9x |
| Dev server time-to-ready | 684 ms (webpack) | 433 ms (Turbopack) | 1.6x |
| Lockfile size | 669 KB (package-lock.json) | 416 KB (bun.lock) | -38% |
| Top-level packages | 1,362 | 875 | -36% |
| node_modules size | 1.3 GB | 1.6 GB | +23% |
The setup
This is a personal site. Real, but small. The numbers below are what one design engineer ships in real life, not a synthetic mega-monorepo. The repo has:
- Next.js 16 (App Router)
- 13 published blog posts in MDX
- 11 components touching Radix UI (accordion, dropdown, slot)
- Tailwind for styling, with a shadcn-style HSL CSS variable theme
- A small Rust CLI in cli/ that serves the terminal portfolio over SSH
- A handful of API routes (RSS, OG images, JSON feed, the x402 paywall demo)
What changed:
| Layer | Before | After |
|---|---|---|
| Package manager | npm | bun |
| Bundler | webpack (next dev/build --webpack) | Turbopack (Next 16 default) |
| Linter | ESLint 9 + eslint-config-next | Biome 2 + oxlint |
| Formatter | none | Biome 2 |
| Type checker | TypeScript 5.9 (tsc) | TypeScript 7 beta (tsgo, Go-based) |
| CSS engine | Tailwind 3.4 + PostCSS + tailwindcss-animate | Tailwind 4.3 + Lightning CSS + tw-animate-css |
| Lockfile | package-lock.json | bun.lock (text format) |
| Pre-commit | none | .githooks/pre-commit (oxlint + biome + tsgo) |
| CI audit | none | OSV-Scanner + bun audit + cargo audit + cargo-deny |
Eight commits, four PRs in spirit, one branch landed cleanly into main. The full migration log is published at /TOOLCHAIN.md for anyone who wants the move-by-move.
How I measured
Same machine, same wall outlet, same coffee. Apple Silicon laptop, plugged in, no other heavy processes running. Each measurement averaged over three runs. The bench script that produced these numbers lives at scripts/bench.sh and is short enough to read in one sitting.
For the old toolchain numbers, I checked out the last pre-migration commit (4f00332), wiped node_modules, ran npm install from scratch, then ran each tool with time. For the new numbers, I checked out the current main, wiped node_modules, ran bun install, then ran the new tools the same way. No tricks, no warm caches between toolchains.
A few honest caveats up front:
- Wall-clock numbers vary. Runs jitter ~10% with background load. I averaged three samples and reported the mean for lint and typecheck. Builds and installs are single-shot; nobody runs an install five times for a benchmark.
- Time-to-ready is the easiest metric to game. Webpack reports "ready" before it has compiled most routes (lazy compilation). Turbopack precompiles more aggressively. The wall numbers below are end-to-end from npm run dev invocation to the first "Ready in" line, which is closer to what you actually feel.
- macOS file caching helps every run after the first. I removed .next/ between build samples but did not flush the OS-level page cache. The numbers are realistic for repeated dev cycles, not for a fresh machine.
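For anyone reproducing the setup without hyperfine, the averaging pattern is small enough to inline. This is a minimal sketch, not the repo's actual script; note that BSD date on macOS has no %N, so it uses perl for millisecond timestamps:

```shell
#!/usr/bin/env bash
# Mean wall-clock time of a command over three runs, in milliseconds.
# perl's Time::HiRes ships with macOS and most Linux distros, making this
# portable where `date +%s%N` is not (BSD date prints a literal N).
now_ms() { perl -MTime::HiRes=time -e 'printf "%d", time() * 1000'; }

bench() {
  local total=0 start end
  for _ in 1 2 3; do
    start=$(now_ms)
    "$@" >/dev/null 2>&1 || true   # time the command; ignore its exit status
    end=$(now_ms)
    total=$(( total + end - start ))
  done
  echo $(( total / 3 ))
}

bench sleep 0.1   # prints the mean of three ~100 ms runs
```

The `|| true` matters under `set -e`: a lint run that finds errors should still be timed, not abort the benchmark.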
Results
Cold dependency install
```text
npm install   31,923 ms   2,205 packages installed
bun install   11,972 ms   875 top-level / 1,762 total packages
```
bun install is 2.7x faster on this project. Both ran cold (no node_modules, no global package cache primed). The bun lockfile is a text format (bun.lock, 416 KB) that diffs cleanly in pull requests. The npm one was 669 KB of dense, machine-generated JSON.
The package count drop matters more than the time. The new lockfile has 36% fewer top-level entries, mostly because removing ESLint sheds about 60 transitive dependencies and Tailwind v4 absorbs the PostCSS chain into Lightning CSS. Smaller surface area means fewer maintainers I need to trust before a bun install runs code on my machine.
Bun's other supply-chain win is its default postinstall policy. Bun blocks postinstall scripts by default and requires packages to be explicitly listed in trustedDependencies in package.json to run them. On this repo, two packages tried to run postinstalls during install: @vercel/speed-insights (an analytics-id sanity check) and @coinbase/x402 (a TOS notice). Both are informational. Bun blocked both. Net postinstalls executed during my install: zero.
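If a package genuinely needs its postinstall (native builds, say), bun's opt-in mechanism is an allowlist in package.json. A minimal sketch; the package name here is illustrative, not something this repo trusts:

```json
{
  "trustedDependencies": [
    "some-package-that-needs-postinstall"
  ]
}
```

Everything not on the list has its lifecycle scripts silently skipped at install time.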
Lint
```text
eslint   cold avg   3,631 ms   (3 samples: 3874, 3518, 3500 ms)
biome    cold avg     124 ms   (3 samples: 125, 122, 125 ms)     29x
oxlint   warm avg      68 ms   (3 samples: 67, 69, 67 ms)        53x
```
This is the headline. A linter going from 3.6 seconds to 68 milliseconds changes how you use it. ESLint at 3.6 s is a thing you run before pushing, with anxiety, sometimes deferring to "I'll fix that later." Oxlint at 68 ms is a thing you run on every keystroke without thinking.
The Biome number includes formatter check and import-sort. Oxlint is purely the lint pass. Both are valid comparisons depending on what you want from the tool. In CI I run both: oxlint as a fast gate, Biome as the deeper check that also handles formatting.
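The two-gate split maps onto package.json scripts that the bench script later invokes as bun run lint and bun run lint:fast. The exact contents are an assumption on my part, but consistent with the commands described here:

```json
{
  "scripts": {
    "lint": "biome check .",
    "lint:fast": "oxlint ."
  }
}
```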
What was lost from the migration: zero rules that I actually use. eslint-config-next ships about 60 packages and 17 Next-specific rules. Biome 2 already covers the ones we hit on this codebase: noImgElement, noHeadElement, noDocumentImportInPage, noHeadImportInDocument, noNextAsyncClientComponent, useGoogleFontDisplay, useGoogleFontPreconnect, useExhaustiveDependencies, useHookAtTopLevel. The rules unique to ESLint were almost all about the legacy Pages Router (no-html-link-for-pages, no-page-custom-font, no-script-component-in-head, no-document-styled-jsx). I use App Router exclusively, so they never fired anyway.
I dropped ESLint entirely in two lines: bun remove eslint eslint-config-next and one CI step removed. 60 transitive packages went with it.
Typecheck
```text
tsc    cold        3,176 ms
tsc    warm avg    1,601 ms   (1629, 1573 ms)
tsgo   warm avg      228 ms   (222, 221, 242 ms)
```
tsgo is the Go-based port of TypeScript that Microsoft shipped as the 7.0 beta on April 21. The package is @typescript/native-preview@beta. Microsoft's own announcement says you can probably start using it day-to-day, and Bloomberg, Canva, and Figma have shipped it on multi-million-line codebases. Personal portfolio is well below that bar.
The cold-vs-warm gap on tsc is real. The first run pays a JIT and incremental-cache cost; subsequent runs reuse .tsbuildinfo. tsgo is fast both cold and warm because it does not need a JIT to begin with. On this codebase the warm comparison is 1.6 s vs 0.23 s, a 7x speedup; cold tsc versus tsgo is 14x.
The migration was a one-line script change. tsc --noEmit became tsgo --noEmit and that was the entire diff. I kept typescript@5.9.3 installed alongside because Next.js's tsconfig.json plugin wants the regular typescript peer for the editor language server, and IntelliSense in VS Code falls back to it for anyone not running the Native Preview extension.
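The whole diff, sketched as the relevant package.json fragment (tool names and versions per the post; the exact layout is assumed):

```json
{
  "scripts": {
    "typecheck": "tsgo --noEmit"
  },
  "devDependencies": {
    "@typescript/native-preview": "beta",
    "typescript": "5.9.3"
  }
}
```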
Production build
```text
webpack     cold   19,948 ms
turbopack   cold   10,349 ms   1.9x
```
Both built the same 45 routes (13 blog posts, 1 deck, work pages, API routes, OG image generators, sitemap, robots, RSS, JSON feed). Turbopack is the default in Next 16. The --webpack flag still exists if you need it, mostly for compatibility with custom webpack loaders that have not been ported.
This is the most modest improvement on the page (1.9x), and it lines up with what Vercel has publicly reported: roughly 2 to 5x faster production builds in the wild. Build perf is harder to win than lint perf because a bundler is doing real, irreducible work (parsing, scope analysis, tree-shaking, code-splitting, minification, source maps, asset optimization) that does not shrink as much when you rewrite it in Rust. The bigger wins come from Rust's better parallelism on multicore machines.
For incremental builds, the gap widens significantly. I did not script that comparison cleanly enough to report numbers, but anecdotally an MDX edit goes from "make a coffee" to "blink and miss it." The HMR loop is what you actually feel.
Dev server time-to-ready
```text
webpack     684 ms   (Next reports: 260 ms)
turbopack   433 ms   (Next reports: 294 ms)
```
This one is closer than I expected and worth being honest about. Webpack often reports "Ready" before it has compiled the routes you have not visited yet (lazy compilation). Turbopack precompiles more eagerly. The wall numbers above are from npm run dev to the first "Ready in" line, which is the user-visible signal.
The real Turbopack win is what happens after that first "Ready." On a route navigation or a file edit, Turbopack invalidates and recompiles in tens of milliseconds where webpack often took hundreds. The blog post I linked at the top cites Vercel's number of roughly 87% faster dev startup on real Next 16 apps once you account for typical edit-and-reload cycles, not just first boot.
Lockfile, dependency tree, and disk
| Old | New | Delta | |
|---|---|---|---|
| Lockfile | package-lock.json 669 KB | bun.lock 416 KB | -38% |
| Top-level packages | 1,362 | 875 | -36% |
| Total packages (incl. nested) | 2,205 | 1,762 | -20% |
| node_modules size | 1.3 GB | 1.6 GB | +23% |
The dependency tree shrank significantly in count, but the size on disk grew. This is the most counterintuitive finding of the migration and worth dwelling on, because it cuts against the easy "Rust toolchain is leaner" narrative.
What's happening: native Rust and Go binaries ship pre-compiled per platform. @biomejs/biome ships about 50 MB per platform. @typescript/native-preview ships the tsgo Go binary at ~150 MB. @tailwindcss/oxide (the Rust Oxide engine) is another ~40 MB. Bun also installs platform binaries for next/swc even though it is the same swc; npm picks just one per platform, and bun grabbed all of them by default in my install. Add it up and the new toolchain costs about 300 MB more on disk than the old one, despite needing 36% fewer packages.
Where the cost actually matters: install time (where bun amortizes parallel binary downloads) and supply-chain surface area (875 npm packages with maintainers I do not know is a third fewer than 1,362). The disk bytes are the cheapest part of the equation. SSDs are huge and getting cheaper. Maintainer trust does not get cheaper.
What this means in practice
Three things shifted in how I work after the migration.
Pre-commit checks are now free. I added a .githooks/pre-commit shell script that runs oxlint, Biome, and tsgo on every commit. Total runtime on this codebase: about 750 ms. That is not a hook you grumble through, it is a hook you barely notice. The full sequence used to take 7 to 10 seconds with the old toolchain, which is exactly slow enough to make people commit with --no-verify after the first day.
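The hook itself is nothing exotic. Here is a sketch of what .githooks/pre-commit can look like; the file path and tool list come from the post, but the body is my reconstruction, with command -v guards added so it degrades gracefully on a machine missing a tool:

```shell
#!/usr/bin/env bash
# Pre-commit gate: fast lint, full check, typecheck.
# Fails the commit on the first error; skips tools not installed locally.
set -e
checks=("oxlint ." "biome check ." "tsgo --noEmit")
for check in "${checks[@]}"; do
  tool=${check%% *}
  if command -v "$tool" >/dev/null 2>&1; then
    echo "pre-commit: $check"
    $check
  else
    echo "pre-commit: skipping $check (not installed)"
  fi
done
```

Wire it up once with git config core.hooksPath .githooks and chmod +x .githooks/pre-commit; git then runs it before every commit.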
CI minutes get cheaper. GitHub Actions bills by the minute. The js job in .github/workflows/ci.yml used to spend roughly 12 to 18 seconds in the lint/typecheck/build sequence; now it spends about 4 to 6 seconds. On a public repo with frequent contributors that scales. On this private repo I switched the workflow to manual trigger only and let the pre-commit hook handle most local verification, which saves the entire workflow run for the cases where it is actually needed.
Supply-chain hardening became practical. With the dev toolchain leaving the npm graph, the surface area shrank meaningfully. Combined with bun's default postinstall blocking, OSV-Scanner in CI, and bun audit plus cargo audit running on every push when CI is on, I can see every advisory affecting the project in one screen. The current count: 6 transitive advisories, all upstream-blocked (the web3 stack via @coinbase/x402, @walletconnect, and the russh dep tree on the Rust side). They surface as warnings; none are actionable from my side. That is the right state to be in.
What did not improve
For honesty's sake, here are the parts of the migration that did not pay off, or where the wins are smaller than the marketing suggests.
Disk usage went up, not down. Discussed above. If you are working on a machine with constrained disk (an old Air, a CI runner with quota), this matters.
Initial dev startup is only 1.6x faster. The HMR loop and edit-recompile cycle are dramatically better, but if you are someone who runs bun run dev once and leaves it open all day, the boot-time win is small. Turbopack pays its dividends incrementally.
Production build is 1.9x faster, not 5x. The Vercel marketing materials cite up to 5x for some workloads, but this depends heavily on what your build is doing. A site with lots of MDX (like this one) spends a meaningful chunk of build time in remark and rehype plugins running pure JavaScript, and Rust does not help there. If your bottleneck is webpack's bundle pass on TypeScript, the speedup is closer to the marketing number. If it is content processing, you save less.
Some advisories will not go away. Three of the npm advisories (ws, postcss, bn.js) and three of the cargo ones (lru, paste, rsa) are stuck waiting for upstream packages I do not control to bump. The migration improved my visibility into them but did not remove them. Anyone telling you their Rust toolchain migration "fixed all advisories" is lying or has no transitive deps.
The codemods only get you 70% of the way. bunx @tailwindcss/upgrade crashed on this codebase mid-flight because of an @apply border-border ordering issue in globals.css. The shadcn-style HSL theme indirection had to be done manually with @theme inline. Plugin migrations (tailwindcss-animate to tw-animate-css) are not automated. Plan on doing manual cleanup after every codemod, even when the docs imply it is one command.
Try it on your own project
Drop the script below into scripts/bench.sh (or wherever you like) and chmod +x it. It works on any Node-ish project; just adjust the bun run calls to match your scripts. The plain-time fallback means you do not need hyperfine installed, but brew install hyperfine will give you cleaner numbers.
The recipe to capture both old and new on the same machine:
```bash
# new toolchain numbers
bash scripts/bench.sh --full

# old toolchain numbers (after checking out the pre-migration commit)
git stash
git checkout <pre-migration-sha>
rm -rf node_modules bun.lock
npm install
bash scripts/bench.sh --full
git checkout main
rm -rf node_modules
bun install
```

The output is a markdown-shaped table you can paste straight into a writeup like this one.
The script
```bash
#!/usr/bin/env bash
# Toolchain benchmark.
# Measures install, lint, typecheck, build, and dev-server-ready timings on
# the checked-out tree. Re-run on an old commit (pre-migration) and on HEAD
# to compare. Output goes to stdout as a markdown table.
#
# Usage:
#   scripts/bench.sh          # Quick run (3 samples each, no install)
#   scripts/bench.sh --full   # Full run (5 samples, includes cold install)

set -e
cd "$(git rev-parse --show-toplevel)"

FULL=0
[[ "$1" == "--full" ]] && FULL=1
SAMPLES=3
[[ $FULL -eq 1 ]] && SAMPLES=5

HAS_HYPERFINE=0
command -v hyperfine >/dev/null 2>&1 && HAS_HYPERFINE=1

H=$'\033[33m'
R=$'\033[0m'

# Millisecond clock. BSD date on macOS does not support %N, so use perl
# (Time::HiRes ships with the system perl) for portable sub-second timing.
now_ms() { perl -MTime::HiRes=time -e 'printf "%d", time() * 1000'; }

git_short=$(git rev-parse --short HEAD)
git_msg=$(git log -1 --pretty=%s | head -c 60)
echo "Benchmarking commit ${git_short}: ${git_msg}"
echo "Samples: ${SAMPLES}, hyperfine: $([ $HAS_HYPERFINE -eq 1 ] && echo yes || echo 'no (using time)')"
echo ""

run_bench() {
  local name="$1"
  local cmd="$2"
  printf "${H}== %s ==${R}\n" "$name"
  if [[ $HAS_HYPERFINE -eq 1 ]]; then
    hyperfine --warmup 1 --runs $SAMPLES --show-output --shell=bash "$cmd" 2>&1 | tail -8
  else
    local total=0
    for i in $(seq 1 $SAMPLES); do
      local start=$(now_ms)
      eval "$cmd" >/dev/null 2>&1 || true
      local end=$(now_ms)
      local elapsed=$(( end - start ))
      total=$(( total + elapsed ))
      printf "  run %d: %d ms\n" $i $elapsed
    done
    printf "  avg: %d ms\n" $(( total / SAMPLES ))
  fi
  echo ""
}

clean_next() { rm -rf .next; }

echo "== Lockfile + node_modules =="
LOCKFILE=$(ls -1 bun.lock package-lock.json pnpm-lock.yaml yarn.lock 2>/dev/null | head -1)
[[ -z "$LOCKFILE" ]] && LOCKFILE="(none)"
LOCK_SIZE=$(du -h "$LOCKFILE" 2>/dev/null | cut -f1)
[[ -z "$LOCK_SIZE" ]] && LOCK_SIZE="?"
NM_SIZE=$(du -sh node_modules 2>/dev/null | cut -f1 || echo "(no node_modules)")
NM_COUNT=$(find node_modules -mindepth 1 -maxdepth 2 -type d 2>/dev/null | wc -l | tr -d ' ' || echo "?")
echo "lockfile: $LOCKFILE ($LOCK_SIZE)"
echo "node_modules: $NM_SIZE"
echo "package count: $NM_COUNT"
echo ""

if [[ $FULL -eq 1 ]]; then
  echo "${H}== Cold install (rm -rf node_modules) ==${R}"
  rm -rf node_modules
  if command -v bun >/dev/null 2>&1; then
    PKG_CMD="bun install"
  elif command -v pnpm >/dev/null 2>&1; then
    PKG_CMD="pnpm install"
  else
    PKG_CMD="npm install"
  fi
  echo "running: $PKG_CMD"
  /usr/bin/time -p $PKG_CMD 2>&1 | tail -5
  echo ""
fi

run_bench "lint (biome check .)" "bun run lint"
run_bench "lint:fast (oxlint)" "bun run lint:fast"
run_bench "typecheck" "bun run typecheck"

echo "${H}== build (cold, removes .next first) ==${R}"
clean_next
/usr/bin/time -p bun run build > /tmp/bench-build.log 2>&1 || cat /tmp/bench-build.log
grep -E "real |compiled|next build" /tmp/bench-build.log | tail -5
echo ""

echo "${H}== dev server time-to-ready ==${R}"
clean_next
START=$(now_ms)
bun run dev > /tmp/bench-dev.log 2>&1 &
DEV_PID=$!
for i in $(seq 1 60); do
  if grep -q "Ready in" /tmp/bench-dev.log 2>/dev/null; then
    END=$(now_ms)
    READY_MS=$(( END - START ))
    READY_LINE=$(grep "Ready in" /tmp/bench-dev.log | head -1)
    echo "dev ready: ${READY_MS}ms (next reports: ${READY_LINE})"
    break
  fi
  sleep 0.5
done
kill $DEV_PID 2>/dev/null || true
wait 2>/dev/null || true
echo ""
```

It is intentionally one file with no dependencies. If you want to extend it (add HMR latency, RSS, memory footprint), the run_bench helper is the only piece you need to reuse.
What I would change if I were doing this again
- Run the codemods first, separately, on a clean branch. I bundled the codemod with the manual cleanup and ended up with one big commit instead of a clean codemod checkpoint plus a manual cleanup commit. Keep them separate so you can revert just the manual part if you mess up.
- Capture before-numbers up front. I had to check out the pre-migration commit and reinstall to get the old numbers retroactively. Running the bench on the pre-migration commit before starting saves an hour of branch swapping later.
- Move ESLint last, not first. I kept ESLint as a "backstop" through most of the migration, which meant carrying its 60 dependencies and one extra CI step longer than necessary. After verifying Biome's Next-rule coverage, dropping ESLint was a one-line PR I should have done day one.
- Skip the formatter PR until you are ready for the diff. Enabling Biome's formatter created a 1,440-line cosmetic diff. Worth doing, but worth doing as its own commit on a quiet day, not on top of a Tailwind migration.
- Set up the pre-commit hook on day one. The faster the local verification loop is, the more aggressively you iterate. I left this until the end and immediately wished I had done it first.
Cited sources
- "Rust Owns the JavaScript Toolchain in 2026", the case for the migration
- Lee Robinson's "Rust Is Eating JavaScript", the original 2026 update
- Next.js 16 release notes for Turbopack defaults
- Tailwind CSS v4 announcement for the Oxide engine
- Biome v2 release post for type-aware lint claims
- Oxlint 1.0 stable release for the 50 to 100x ESLint claim
- TypeScript 7.0 beta announcement for tsgo
Built by Agnel Nieves, a design engineer with 15+ years across product, design systems, and crypto. The full migration log lives at /TOOLCHAIN.md. More writing on the blog.


