Dev Log: Hunting OOM kills, fixing CLS, and making Eleventy builds 10x faster
Three days of performance work on the Indiekit stack — memory optimization, Core Web Vitals, and build-time improvements. This one gets technical.
The Problem: OOM Kills in a 3 GB Container
The site runs on Cloudron in a container with 3 GB RAM. Eleventy builds the site (~2,350 pages), generates OpenGraph images via a WASM-based renderer (Satori + Resvg), processes link unfurls for 545 interaction URLs, and runs a file watcher for incremental rebuilds. All in one Node.js process.
After the bookmark import brought the site to 2,350+ pages, builds started getting OOM-killed. The watcher process alone consumed ~1.8 GB RSS at idle, leaving barely enough headroom for OG image generation.
Memory Optimization: V8 Heap Snapshots and Batch Spawning
Step 1: Instrument everything
Added `--expose-gc` and `--heapsnapshot-signal=SIGUSR2` to the Node.js watcher. This enabled two things:
- Post-build garbage collection — `global.gc()` in Eleventy’s `after` event, returning freed V8 heap pages to the OS immediately instead of waiting for V8’s lazy GC
- On-demand heap snapshots — `kill -USR2 <pid>` dumps a V8 heap snapshot to `/tmp` for analysis in Chrome DevTools
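The post-build GC call can be sketched as a small guard function — a minimal illustration, not the actual watcher code; it only does anything when Node was started with `--expose-gc`:

```javascript
// Post-build GC sketch: global.gc is only defined when Node runs with --expose-gc.
// Returns true if a collection was forced, false if the flag wasn't set.
function forceGc(log = console.log) {
  if (typeof global.gc !== "function") return false; // --expose-gc not set
  const before = process.memoryUsage().heapUsed;
  global.gc();
  const after = process.memoryUsage().heapUsed;
  log(`[gc] freed ${((before - after) / 1048576).toFixed(1)}MB of V8 heap`);
  return true;
}
```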
Added a background memory monitor logging RSS + swap every 10 minutes so we could track memory trends over time.
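A background monitor like that can be as simple as an unref’d interval — a sketch under assumptions (names are illustrative; reading swap from `/proc/self/status` on Linux is omitted here, so this logs only what `process.memoryUsage()` exposes):

```javascript
// Log process memory at a fixed interval without keeping the process alive.
const TEN_MINUTES = 10 * 60 * 1000;

function startMemoryMonitor(intervalMs = TEN_MINUTES, log = console.log) {
  const timer = setInterval(() => {
    const { rss, heapUsed, external } = process.memoryUsage();
    const mb = (n) => `${(n / 1048576).toFixed(0)}MB`;
    // swap would need /proc/self/status on Linux; not shown in this sketch
    log(`[mem] rss=${mb(rss)} heap=${mb(heapUsed)} external=${mb(external)}`);
  }, intervalMs);
  timer.unref(); // monitoring alone shouldn't block process exit
  return timer;
}
```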
Step 2: Heap snapshot analysis
The snapshots revealed the actual memory consumers:
| Consumer | Size | Notes |
|---|---|---|
| Rendered HTML pages (watch mode) | 682 MB | V8 retains all page content for incremental rebuilds |
| eleventy-img buffers | 170 MB | Cached image metadata |
| og-cli WASM native memory | ~2 GB peak | Satori + Resvg allocations outside V8 heap |
The killer was `og-cli`: WASM native memory from Satori and Resvg grows outside V8’s managed heap, meaning `global.gc()` can’t reclaim it. A full OG regeneration (2,350 images) would steadily consume memory until the container was killed.
Step 3: Batch spawning
Solution: spawn `og-cli` as a child process in batches of 100 images. Each invocation exits after its batch, fully releasing all WASM native memory. Exit code 2 signals “more work remains” and the spawner re-loops.
Before: 1 process × 2,350 images → 2+ GB peak → OOM kill
After: 24 batches × 100 images → ~500 MB peak per batch → completes reliably
Same pattern applied to the unfurl pre-fetch — replaced an unbounded `Promise.all` on 545 URLs with batches of 50, with GC calls between batches.
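The bounded-concurrency pattern is generic enough to sketch directly — illustrative names, not the post’s code:

```javascript
// Map an async worker over items in fixed-size batches instead of one
// unbounded Promise.all, optionally forcing GC between batches.
async function mapInBatches(items, worker, batchSize = 50) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    results.push(...(await Promise.all(batch.map(worker))));
    if (typeof global.gc === "function") global.gc(); // only with --expose-gc
  }
  return results;
}
```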
Step 4: Heap tuning
Through trial and error with actual build measurements:
- Initial build needs ~2 GB heap (all 2,350 pages rendered at once)
- Watch mode settles at ~1.8 GB (retains rendered content for incremental rebuilds)
- Watcher heap cap set to 2,560 MB to allow headroom for watch-mode overhead
Core Web Vitals: CLS from 1.0 to Near-Zero
The skeleton loader disaster
Added a skeleton loader to prevent Flash of Unstyled Content (FOUC). PageSpeed Insights reported CLS of 0.916 mobile / 1.004 desktop — the skeleton-to-content swap was itself the biggest layout shift on the page. Removed it. Critical CSS already provides correct first-paint layout.
Desktop CLS (0.57)
Three root causes found via PageSpeed layout shift diagnostics:
1. Grid mismatch (CLS 0.495) — Critical CSS used `2fr 1fr` but Tailwind compiled to `repeat(3, minmax(0, 1fr))` with `grid-column: span 2`. The browser rendered the critical CSS grid, then re-laid everything out when the full stylesheet loaded. Fixed by matching critical CSS to Tailwind’s exact output.
2. Font swap FOUT (CLS 0.074) — `@font-face` declarations were only in the deferred stylesheet. Moved them to critical CSS with `font-display: optional` and added `<link rel="preload">` for weights 400/600/700.
3. Avatar resize — HTML `width`/`height` was 96×96 but CSS set `sm:w-32`/`h-32` (128px) on desktop. Updated the attributes to match.
Build-Time Optimization: Memoization
Profiling revealed several filters and transforms running thousands of redundant operations:
| Optimization | Before | After |
|---|---|---|
| `hash` filter (cache-busting) | 55,332 file reads/build | 16 (one per unique file) |
| `aiPosts` / `aiStats` filters | 694 calls × 2,350 posts = 1.6M iterations | 1 call (cached) |
| OG directory listing | 3,426 `readdirSync` calls | 1 (cached) |
| PostHTML transform | Ran on every page (~3 ms each) | Skipped for pages without `<img>` tags |
All caches are cleared on `eleventy.before` so incremental rebuilds stay correct.
New Features
Table of Contents widget
Articles and long notes now get a floating TOC widget built with Alpine.js. It scans `.e-content` headings at page load, builds the table of contents, and highlights the current section with an IntersectionObserver scroll spy. The widget only appears on pages with 3+ headings.
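The DOM-free core of that scan can be sketched as follows — hypothetical names, assuming headings are slugified into anchor ids when they lack one:

```javascript
// Turn heading text into an anchor id, e.g. "Step One" -> "step-one".
function slugify(text) {
  return text
    .toLowerCase()
    .trim()
    .replace(/[^\w\s-]/g, "")
    .replace(/\s+/g, "-");
}

// Build the TOC model; return null below the 3-heading threshold so the
// widget simply doesn't render on short pages.
function buildToc(headings, minCount = 3) {
  if (headings.length < minCount) return null;
  return headings.map((h) => ({
    id: h.id || slugify(h.text),
    text: h.text,
    level: h.level, // e.g. 2 for an <h2>
  }));
}
```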
Microsub: feed type indicator and cross-channel duplicate detection
The Microsub reader now shows whether each feed item comes from RSS, ActivityPub, or Mastodon, and deduplicates items that appear in multiple channels.
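Cross-channel dedupe reduces to first-seen-wins keyed on a stable identifier — a sketch under assumptions (keying on `uid` falling back to `url` is a guess at what the endpoint uses; names are illustrative):

```javascript
// Flatten channels into one list, keeping only the first occurrence of each
// item across channels, tagged with the channel it first appeared in.
function dedupeAcrossChannels(channels) {
  const seen = new Set();
  const out = [];
  for (const channel of channels) {
    for (const item of channel.items) {
      const key = item.uid || item.url; // stable identity across feeds
      if (seen.has(key)) continue;
      seen.add(key);
      out.push({ ...item, channel: channel.name });
    }
  }
  return out;
}
```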
Infrastructure
- nginx cache scoping — cache headers now target only Eleventy static paths, not Indiekit’s dynamic routes
- eleventy-fetch cache preservation — stopped wiping the fetch cache on deploy (was forcing full rebuilds)
- IndieAuth patch tracking — the `indieauth.js` regex patch is now tracked in the repo instead of applied at build time
- Bluesky syndicator — fixed the image upload limit to match Bluesky’s actual 1,000,000-byte cap
Total: 38 commits across the theme and deployment repos, plus fixes in the Bluesky syndicator and Microsub endpoint. The site now builds reliably within its 3 GB container, scores well on Core Web Vitals, and the Eleventy build pipeline is significantly faster.
AI: Text Co-drafted · Claude
Co-drafted with Claude Code — commit data gathered automatically, narrative and technical analysis written by AI from commit messages and code changes
