Optimizing Eleventy Build Performance: From 38s to Under 10s

My Eleventy site builds 3,400+ pages with data from over a dozen APIs (GitHub, Funkwhale, Last.fm, YouTube, Mastodon, Bluesky, and several Indiekit endpoints). The initial build was taking 38 seconds, and incremental rebuilds — triggered by a single markdown file change — were nearly as slow. Here’s how I brought incremental rebuilds down to under 10 seconds and cut memory usage by 12.5%.

The Problem

Running DEBUG=Eleventy:Benchmark* revealed where time was being spent on incremental rebuilds:

Component           Time        Share
Data File           12,169ms    32%
Render              7,526ms     19%
dateDisplay filter  1,631ms     4%
aiPosts/aiStats     2,227ms     6%
hash filter         ~850ms      2%

The biggest offender: every _data/*.js file re-executed on every rebuild, even when only a markdown file changed. That meant 15 network requests to various APIs firing on every save.
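A quick way to see this for yourself (an illustrative debugging trick; the file name and fields are hypothetical): drop a log line into any data file and watch it fire on every save.

```javascript
// Hypothetical body of a _data/example.js file. The log line prints on
// every rebuild, showing the file re-executes even when the only change
// was an unrelated markdown file.
function exampleData() {
  console.log(`example data rebuilt at ${new Date().toISOString()}`);
  return { fetchedAt: Date.now() };
}

// In the real data file this would be: export default exampleData;
```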

Fix 1: Memoize Expensive Filters

Nunjucks filters like dateDisplay, date, isoDate, and hash were being called thousands of times per build with the same inputs. A simple Map cache with clearing on eleventy.before eliminated redundant work:

const dateDisplayCache = new Map();
eleventyConfig.on("eleventy.before", () => dateDisplayCache.clear());

eleventyConfig.addFilter("dateDisplay", (value, format) => {
  const key = `${value}|${format}`;
  if (dateDisplayCache.has(key)) return dateDisplayCache.get(key);
  const result = formatDate(value, format);
  dateDisplayCache.set(key, result);
  return result;
});

Result: dateDisplay dropped from 1,631ms → 239ms (85% reduction), and hash from ~850ms → 8ms (99%).
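The same pattern generalizes to all of these filters. A sketch of a generic wrapper — `memoizeFilter` is an illustrative name, not part of Eleventy's API:

```javascript
// Wrap any pure filter function in a Map-backed cache.
// Exposes the cache so it can be cleared on eleventy.before.
function memoizeFilter(fn) {
  const cache = new Map();
  const memoized = (...args) => {
    const key = args.join("|");
    if (cache.has(key)) return cache.get(key);
    const result = fn(...args);
    cache.set(key, result);
    return result;
  };
  memoized.cache = cache;
  return memoized;
}
```

Registration then becomes `eleventyConfig.addFilter("dateDisplay", memoizeFilter(formatDate))`, with each wrapper's `.cache.clear()` called in the `eleventy.before` handler. Note this only works for pure filters whose arguments stringify to distinct keys.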

Fix 2: Watch-Mode Cache Extension for Data Files

This was the big win. I created a shared cachedFetch helper that wraps @11ty/eleventy-fetch with two protections:

  1. Extended cache in watch mode — During development (ELEVENTY_RUN_MODE !== "build"), cache duration extends to 4 hours instead of the default 5-15 minutes
  2. AbortController timeout — 10-second hard timeout on all network requests to prevent slow APIs from hanging the build

import EleventyFetch from "@11ty/eleventy-fetch";

const FETCH_TIMEOUT_MS = 10_000;
const isWatchMode = process.env.ELEVENTY_RUN_MODE !== "build";
const WATCH_MODE_DURATION = "4h";

export async function cachedFetch(url, options = {}) {
  const duration = isWatchMode
    ? WATCH_MODE_DURATION
    : (options.duration || "15m");
  const controller = new AbortController();
  const timeoutId = setTimeout(() => controller.abort(), FETCH_TIMEOUT_MS);
  try {
    return await EleventyFetch(url, {
      ...options,
      duration,
      fetchOptions: { ...options.fetchOptions, signal: controller.signal },
    });
  } finally {
    clearTimeout(timeoutId);
  }
}

All 13 network-dependent data files were updated to use this helper. The key insight: @11ty/eleventy-fetch already has a file-based cache, but with short TTLs (5-15 minutes), it was expiring between rebuilds. Extending to 4 hours in watch mode means the cache is almost always warm during development.
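One detail worth handling: when the AbortController fires, the fetch rejects, so a data file probably wants a fallback rather than failing the whole build. A hedged sketch of that shape — the names are illustrative, and the fetcher is injected here so the example is self-contained (in a real data file it would be `cachedFetch`):

```javascript
// Illustrative data-file shape: degrade to empty data instead of
// failing the build when an upstream API is slow or down.
// "fetchFn" stands in for the cachedFetch helper above.
async function loadRepos(url, fetchFn) {
  try {
    return await fetchFn(url, { duration: "15m", type: "json" });
  } catch (err) {
    console.warn(`Data fetch failed for ${url}: ${err.message}`);
    return []; // empty fallback keeps templates rendering
  }
}
```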

Result: Data File went from 12,169ms → 28ms on incremental rebuilds — a 99.8% reduction.

Fix 3: Computed Data Memoization

The aiPosts and aiStats computed data files were recalculating across all posts on every rebuild. Adding memoization with cache clearing on eleventy.before brought these from 2,227ms combined down to 8ms.
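The shape is the same as the filter caches; a minimal sketch, assuming the stats are a single aggregation over the post collection (the `ai` field and all names here are illustrative, not the site's actual code):

```javascript
// Module-level cache for a collection-wide aggregation; cleared per build.
let aiStatsCache = null;

function computeAiStats(posts) {
  if (aiStatsCache) return aiStatsCache;
  aiStatsCache = posts.reduce(
    (acc, post) => {
      acc.total += 1;
      if (post.ai) acc.aiAssisted += 1;
      return acc;
    },
    { total: 0, aiAssisted: 0 }
  );
  return aiStatsCache;
}

// In eleventy.config.js:
// eleventyConfig.on("eleventy.before", () => { aiStatsCache = null; });
```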

The Results

Cumulative improvements on incremental rebuilds:

Component    Before      After    Reduction
Data File    12,169ms    28ms     99.8%
dateDisplay  1,631ms     239ms    85.3%
date         621ms       11ms     98.2%
hash         ~850ms      8ms      99.1%
aiPosts      1,044ms     3ms      99.7%
aiStats      1,183ms     5ms      99.6%

Container memory was also reduced from 4GB to 3.5GB. The V8 heap peaks at ~2,560MB during the initial full build (3,400 pages in memory), then settles to ~1,140MB at steady state.

Lessons Learned

  1. Profile before optimizing — DEBUG=Eleventy:Benchmark* is indispensable. Without it, I would have guessed wrong about what was slow.
  2. Data files are the hidden cost — They ALL re-execute on every rebuild, even if your change was a single markdown file. If they make network calls, that’s your bottleneck.
  3. @11ty/eleventy-fetch is your friend, but configure it — The default short TTLs are designed for production builds. In watch mode, you almost certainly want longer caches.
  4. Memoize filters that get called thousands of times — A 3,400-page site calls date formatting filters tens of thousands of times per build. Cache them.
  5. V8 heap ≠ container memory — RSS (physical memory) is higher than V8 heap due to native buffers and compiled code. Size your container for peak RSS of ALL processes combined.
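Node exposes both numbers, so the heap-vs-RSS gap from lesson 5 is easy to check directly:

```javascript
// Compare V8 heap usage against resident set size (RSS). RSS also counts
// native buffers, compiled code, and stacks, so it exceeds heapUsed.
const { rss, heapTotal, heapUsed } = process.memoryUsage();
const toMB = (bytes) => Math.round(bytes / (1024 * 1024));

console.log(`heapUsed=${toMB(heapUsed)}MB heapTotal=${toMB(heapTotal)}MB rss=${toMB(rss)}MB`);
```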

AI: Text Co-drafted · Claude

Co-drafted based on optimization work done together across multiple sessions
