The modern web is drowning in bloat: megabyte JavaScript bundles nobody asked for, weighty CSS frameworks pulled in for a handful of rules, and fonts stuffed with glyphs you'll never render. We think that's nonsense. If a site's core purpose is to deliver content, every extra byte that crosses the wire is wasted bandwidth, wasted time, and wasted energy.
For these reasons, we engineered a container-driven build pipeline that treats optimization as a first-class citizen. This pipeline builds the basis for the website you are looking at right now.
In a single reproducible run, the pipeline:
- renders the site with Eleventy,
- purges and minifies the CSS with PurgeCSS and PostCSS,
- minifies every HTML and JavaScript file,
- subsets the web fonts down to the glyphs actually used,
- renames every asset with a content hash for cache busting,
- and precompresses everything with Gzip (via Zopfli), Brotli, and Zstandard.
The output is served using a custom Nginx server that understands Brotli and Zstandard static files out of the box. This results in deterministic builds, sub-100 ms time-to-first-byte, and a perfect Lighthouse score, all while keeping our hosting bill below 5€. In short, this isn't just another "good enough" build system. We explicitly reject web bloat and ship sites that respect our users' bandwidth, battery, patience and privacy.
The Dockerfile uses three stages: dependencies, builder, and runner. This separation means that touching a Markdown article or a template rarely invalidates the expensive optimization steps: Docker merely replays the cached layers and is done in a few seconds. At deploy time we copy only the /out directory and a minimal Nginx config, so the final image contains zero compilers, zero Node dependencies, and only the bytes your visitors will actually download.
In the following sections, we build and explain the Dockerfile step-by-step. At the end, we will post the complete Dockerfile for your reference.
# Dependency installer
FROM node:22-alpine AS dependencies
WORKDIR /app
COPY package*.json ./
RUN npm ci
The dependencies stage is deliberately spartan: we copy in only package.json and package-lock.json and run npm ci. Using ci instead of install guarantees that the exact versions pinned in your lockfile are the ones that end up in the image. This has three benefits:
- Reproducibility: the install fails instead of silently resolving new versions when package.json and the lockfile disagree.
- Speed: npm ci skips dependency resolution and installs straight from the lockfile.
- Cleanliness: any pre-existing node_modules is removed first, so every build starts from a pristine node_modules tree. This keeps diffs clean and shrinkwrap audits stable.

The result is a minimal, self-contained layer which is built once and reused by every subsequent step. We use Node 22 here, but you can use whichever version you prefer.
We now enter the builder stage of the Dockerfile. This is where the heavy lifting happens. The stage copies the entire source, runs Eleventy, and then executes a gauntlet of optimizers: PurgeCSS, PostCSS, html-minifier-terser, Terser, pyftsubset, and a trio of compressors (Zopfli, Brotli, Zstd). Because none of this tooling ships to production, we are free to use Python, Node, and Alpine packages without bloating the final image.
With our dependencies gathered, we instruct Eleventy to materialise our content in /app/out.
# Builder
FROM node:22-alpine AS builder
WORKDIR /app
# --- Tool-chain -----------------------------------------------------------
RUN apk add zopfli brotli zstd py3-fonttools py3-brotli py3-beautifulsoup4
RUN npm install postcss postcss-cli autoprefixer cssnano postcss-calc purgecss
# --- Build ---------------------------------------------------------------
COPY --from=dependencies /app/node_modules ./node_modules
COPY . .
RUN npm run make # Eleventy outputs to /app/out
Again, we use Node 22 here, but you can use any version you prefer. Then, we install a handful of programs and libraries with apk:
- zopfli, brotli and zstd: the command-line compressors for the static precompression step,
- py3-fonttools: ships pyftsubset, which we use for font subsetting,
- py3-brotli: lets fonttools read and write WOFF2,
- py3-beautifulsoup4: powers our glyph extractor script.
We use npm to install the CSS optimizers we need later: postcss and its CLI, autoprefixer, cssnano, postcss-calc, and purgecss. (The HTML and JavaScript minifiers follow further down.)
After that, we prepare everything to actually build the website. First, we copy the node_modules from the dependencies stage we cached earlier. Second, we bring in templates, Markdown, configs, images and other assets: everything Eleventy needs to build the website. Finally, we run npm run make to execute the Eleventy build script. The result is a fully rendered static site in /app/out, still untouched by any optimization: every HTML file references full CSS bundles, original fonts, and unminified JavaScript.
Now we get to the interesting part. After we generated our static assets, we strip them down, optimize, and compress them as much as possible. We start with PurgeCSS to tree-shake our CSS files.
RUN echo "module.exports = {" \
" content: ['/app/out/**/*.html']," \
" css: ['/app/out/**/*.css']," \
" output: '/app/out/css'," \
" variables: true," \
" keyframes: true," \
" defaultExtractor: content => content.match(/[\w-/:]+(?<!:)/g) || []" \
"};" > /app/config.js && \
npx purgecss --config /app/config.js
Here, we echo a tiny config.js instead of committing a permanent file to the repository. The content glob points PurgeCSS at every generated HTML file, while the css glob tells it which stylesheets to purge. The variables and keyframes options make PurgeCSS extra careful: custom properties and animation keyframes are preserved even if they look unused. Finally, the custom defaultExtractor captures utility-class patterns such as xl:flex or hover:border that standard extractors miss.
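To see what that extractor keeps, here is a quick stdlib sketch. It uses a Python spelling of the same pattern, with the hyphen moved to the end of the character class where Python's re module accepts it literally:

```python
import re

# Python equivalent of the defaultExtractor above: runs of word
# characters, slashes, colons and hyphens that do not end in a colon.
pattern = re.compile(r'[\w/:-]+(?<!:)')

html = '<a class="xl:flex hover:border text-sm" href="/about">About</a>'
print(pattern.findall(html))
# → ['a', 'class', 'xl:flex', 'hover:border', 'text-sm', 'href', '/about', 'About', '/a']
```

The negative lookbehind is what stops a token like `hover:` from surviving with its trailing colon while still letting `hover:border` through whole.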
With only the used selectors left, we hand the stylesheet to PostCSS for two finishing passes: browser-compatibility and micro-minification.
RUN npx postcss /app/out/**/*.css \
--use autoprefixer cssnano postcss-calc \
--replace --verbose --no-map
The three plugins we use are:
- autoprefixer, which adds the vendor prefixes our target browsers still require,
- cssnano, which handles micro-minification (merging shorthands, stripping whitespace and quotes, compacting colors),
- postcss-calc, which precomputes static calc() references whenever it's possible.

With CSS trimmed, we reduce the raw size of markup and scripts. We install two CLI tools, html-minifier-terser for HTML and terser for JavaScript, and sweep every file in /app/out with a pair of find | xargs one-liners. html-minifier-terser collapses whitespace, sorts attributes to increase the number of common substrings in the output, strips comments, compresses inline CSS and JS, and rewrites URLs. terser performs dead-code elimination, multiple compression passes, and aggressive mangling.
RUN npm install --no-save html-minifier-terser terser
ENV PATH="/app/node_modules/.bin:${PATH}"
RUN find /app/out -type f -name "*.html" -print0 | \
xargs -0 -I{} sh -c \
'html-minifier-terser \
--collapse-whitespace \
--sort-attributes \
--remove-comments \
--remove-empty-attributes \
--remove-attribute-quotes \
--remove-redundant-attributes \
--remove-script-type-attributes \
--remove-style-link-type-attributes \
--minify-css true \
--minify-js true \
--sort-class-name \
--decode-entities \
--use-short-doctype \
--minify-urls true \
-o "{}" "{}"'
RUN find /app/out -type f -name "*.js" -print0 | \
xargs -0 -I{} sh -c \
'terser "{}" \
--compress passes=3,unsafe=true,unsafe_arrows=true,unsafe_methods=true,drop_console=true \
--mangle toplevel,properties \
--ecma 2024 \
-o "{}"'
After optimizing CSS, HTML and JS, we cut down on the remaining static assets: the fonts. A typical variable font file can weigh up to 400KB, which is larger than your entire HTML payload. The worst part: only a tiny slice of its 1,000+ glyphs ever renders on screen. Subsetting tackles this by trimming every unused code point and OpenType table until only the required glyphs remain.
"Why bother?", you might ask. The answer is simple: fonts block first paint. Even with font-display: swap, a 300KB WOFF2 still delays the moment your reader sees the final text. Shrink it to 15KB and the flash of unstyled text all but disappears. Your critical path just got shorter without compromising on brand typography.
In our stack, we use the Lexend font family. Specifically, Lexend Regular for regular text and Lexend Semibold for headings. Both arrive from Google Fonts at ~38KB each, but the average blog post uses maybe 5% of their glyph repertoire. Here's how we strip them down:
glyph_extractor.py (linked below) crawls every HTML file in /app/out. It accumulates a per-font set of glyphs actually referenced.
import glob
import os
import re
import unicodedata

from bs4 import BeautifulSoup

def collect_glyphs(html_dir):
regular = list()
semibold = list()
# find all HTML files under html_dir
for path in glob.glob(os.path.join(html_dir, '**', '*.html'), recursive=True):
with open(path, 'r', encoding='utf-8') as f:
soup = BeautifulSoup(f, 'html.parser')
# iterate every text node
for text_node in soup.find_all(string=True):
text = text_node.strip()
if not text:
continue
# gather ALL ancestor classes
classes = set()
for anc in text_node.parents:
if anc.has_attr('class'):
classes.update(anc['class'])
is_smallcaps = 'smallcaps' in classes
is_semibold = 'font-semibold' in classes
heading_parent = text_node.find_parent({'h1','h2','h3','h4','h5','h6'})
if heading_parent and heading_parent.find_parent(class_='prose'):
is_semibold = True
# remove ALL whitespace
cleaned = re.sub(r'\s+', '', text)
if not cleaned:
continue
# classify
if not is_smallcaps and not is_semibold:
# plain text → regular
for c in cleaned:
if unicodedata.category(c) == 'Cf':
continue
if c not in regular:
regular.append(c)
elif is_smallcaps and not is_semibold:
# smallcaps → uppercase → regular
for c in cleaned.upper():
if unicodedata.category(c) == 'Cf':
continue
if c not in regular:
regular.append(c)
elif not is_smallcaps and is_semibold:
# semibold → preserve case → semibold
for c in cleaned:
if unicodedata.category(c) == 'Cf':
continue
if c not in semibold:
semibold.append(c)
else: # both smallcaps & semibold
# uppercase → semibold
for c in cleaned.upper():
if unicodedata.category(c) == 'Cf':
continue
if c not in semibold:
semibold.append(c)
return regular, semibold
def dump_set(chars, out_path):
# write the chars in sorted (Unicode code-point) order,
# concatenated into a single line
with open(out_path, 'w', encoding='utf-8') as f:
for c in sorted(chars):
f.write(c)
if __name__ == '__main__':
html_dir = '/app/out'
regular, semibold = collect_glyphs(html_dir)
dump_set(regular, '/regular.txt')
dump_set(semibold, '/semibold.txt')
print(f"Written {len(regular)} glyphs to /regular.txt")
print(f"Written {len(semibold)} glyphs to /semibold.txt")
The script emits two plain-text files that contain the glyphs actually used on the website: /regular.txt and /semibold.txt. You can think of them as whitelists. We feed those lists into pyftsubset to subset the fonts.
RUN pyftsubset ./layout/font/lexend/Lexend-SemiBold-Font.woff2 \
--output-file=/app/out/font/lexend/Lexend-SemiBold-Font.woff2 \
--flavor=woff2 \
--no-hinting \
--text-file=/semibold.txt
RUN pyftsubset ./layout/font/lexend/Lexend-Regular-Font.woff2 \
--output-file=/app/out/font/lexend/Lexend-Regular-Font.woff2 \
--flavor=woff2 \
--no-hinting \
--text-file=/regular.txt
- --flavor=woff2 keeps the modern container format for maximum compression.
- --no-hinting drops the TrueType hinting instructions; modern rasterisers don't need them.

This strips the font files down from ~38KB to ~8KB and ~5KB. If you want to use the same technique in your setup, you have to adapt the script to the fonts and classes you use in your HTML. Alternatively, you can send us an email and we will work it out with you.
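The whitelist format itself is nothing fancy. This stdlib-only sketch mirrors what glyph_extractor.py writes out: whitespace and invisible format characters (Unicode category Cf) are dropped, and the surviving glyphs are deduplicated and sorted by code point:

```python
import unicodedata

# '\u00ad' is a soft hyphen: category Cf, invisible, so it is dropped
text = "Hello Bü\u00adild"
glyphs = sorted({c for c in text
                 if not c.isspace() and unicodedata.category(c) != 'Cf'})
print(''.join(glyphs))
# → BHdeiloü
```

The sorted, deduplicated string is exactly what pyftsubset expects in a --text-file whitelist.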
Browsers are fantastic at keeping assets around, but they need to know when those assets change. Cache busting solves this problem by appending a content-based hash to every filename. If a file's bytes change its name does too, instantly invalidating the old copy without touching server headers or query-string hacks.
Our cache_buster.py script (shown below) runs after all other transformations, so the hash reflects the final bytes on disk. It:
- computes an eight-character SHA-256 digest for every asset (.avif, .js, .css, .woff2) and renames the file on disk to name.<digest>.ext,
- then rewrites every reference to the old names inside all HTML, JS and CSS files.
import hashlib
import os
import re
import sys
from pathlib import Path

ASSET_EXTS = {'.avif', '.js', '.css', '.woff2'}
SEARCH_EXTS = {'.html', '.js', '.css'}
HASH_LEN = 8
def compute_hash(path: Path, block_size: int = 65536) -> str:
"""Return SHA256 hex digest of path"""
h = hashlib.sha256()
with path.open('rb') as f:
for chunk in iter(lambda: f.read(block_size), b''):
h.update(chunk)
return h.hexdigest()[:HASH_LEN]
def build_asset_map(root: Path):
"""Return dict {old_rel_path: new_rel_path}, renaming files on disk."""
mapping = {}
for path in root.rglob('*'):
if path.is_file() and path.suffix.lower() in ASSET_EXTS:
rel = path.relative_to(root)
digest = compute_hash(path)
new_name = f"{path.stem}.{digest}{path.suffix}"
new_path = path.with_name(new_name)
os.rename(path, new_path)
old_path = str(rel).replace('\\', '/')
new_path = str(new_path.relative_to(root)).replace('\\', '/')
mapping[old_path] = new_path
return mapping
def update_references(root: Path, mapping: dict):
"""Replace occurrences of each *old* path with *new* path within SEARCH_EXTS files."""
if not mapping:
return
# Build one big regex from all keys, longest first to avoid substr collisions
escaped = sorted((re.escape(k) for k in mapping.keys()), key=len, reverse=True)
pattern = re.compile('|'.join(escaped))
for path in root.rglob('*'):
if path.is_file() and path.suffix.lower() in SEARCH_EXTS:
text = path.read_text(encoding='utf-8', errors='ignore')
new_text = pattern.sub(lambda x: mapping[x.group(0)], text)
if new_text != text:
path.write_text(new_text, encoding='utf-8')
def main():
html_root = '/app/out'
root = Path(html_root).resolve()
if not root.is_dir():
sys.exit(f"{root} is not a directory")
mapping = build_asset_map(root)
update_references(root, mapping)
print("Cache busting complete. Renamed files:")
for old, new in mapping.items():
print(f" {old} → {new}")
if __name__ == '__main__':
main()
Because the files are now immutable, our Nginx config can safely send:
Cache-Control: public, max-age=31536000, immutable
This tells browsers and CDNs alike to hold on to those bytes for a year. When you redeploy a new version of an asset, the hash changes, the URL changes, and the client grabs the fresh version automatically. If you want to use this script in your own workflow, you will probably have to tweak it to your needs.
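One subtle detail in update_references is worth a closer look: the alternation is built longest-first, so a short path can never swallow the start of a longer path it is a prefix of. A small sketch with hypothetical file names:

```python
import re

# Hypothetical rename map; note 'js/app.js' is a prefix of 'js/app.js.map'
mapping = {
    'js/app.js': 'js/app.1a2b3c4d.js',
    'js/app.js.map': 'js/app.js.9f8e7d6c.map',
}

# Longest-first, exactly like cache_buster.py: with the short key first
# in the alternation, the engine would match 'js/app.js' inside
# 'js/app.js.map' and corrupt the longer reference.
escaped = sorted((re.escape(k) for k in mapping), key=len, reverse=True)
pattern = re.compile('|'.join(escaped))

html = '<script src="js/app.js"></script><!-- see js/app.js.map -->'
busted = pattern.sub(lambda m: mapping[m.group(0)], html)
print(busted)
```

Both references come out correct only because the longer key is tried first at each position.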
A static site means every byte we serve is immutable between deploys, so there's no reason to waste CPU compressing it on the fly. Instead, we generate the three mainstream encodings during the build.
Gzip and Brotli enjoy widespread support across browsers, CDNs, and crawlers; of these two, we serve whichever variant is smaller. Zstandard support is experimental in Chromium 122+ and sits behind a flag in Firefox. Still, we support it to be future-proof, but we only keep a Zstandard variant around if it outperforms both Gzip and Brotli. Because we compress at build time and not on the fly, we can afford maximum settings (-15 for Zopfli, -Z for Brotli, --ultra -22 for Zstd). These settings are far too slow for real-time compression but perfectly acceptable during a build.
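The keep-only-if-smaller rule is easy to prototype. The sketch below mirrors the Dockerfile's shell logic in Python, with stdlib gzip standing in for Zopfli:

```python
import gzip
import os
import tempfile

def keep_if_smaller(path: str) -> bool:
    """Write path + '.gz' at maximum gzip level and keep it only if it
    is strictly smaller than the original file."""
    gz_path = path + '.gz'
    with open(path, 'rb') as src:
        data = src.read()
    with gzip.open(gz_path, 'wb', compresslevel=9) as dst:
        dst.write(data)
    if os.path.getsize(gz_path) >= os.path.getsize(path):
        os.remove(gz_path)  # the variant is not worth serving
        return False
    return True

tmp = tempfile.mkdtemp()
html = os.path.join(tmp, 'index.html')
with open(html, 'wb') as f:
    f.write(b'<p>hello</p>' * 1000)   # repetitive -> compresses well
icon = os.path.join(tmp, 'icon.bin')
with open(icon, 'wb') as f:
    f.write(bytes(range(16)))         # tiny -> gzip overhead dominates
print(keep_if_smaller(html), keep_if_smaller(icon))
# → True False
```

Repetitive markup keeps its .gz sibling; the tiny file grows past its original size, so the variant is deleted again.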
RUN find /app/out -type f \
-exec zopfli -k -15 {} \; \
-exec sh -c 'gzipfile="{}.gz"; \
if [ -f "$gzipfile" ] && [ $(stat -c%s "$gzipfile") -ge $(stat -c%s "{}") ]; then \
rm "$gzipfile"; echo "$gzipfile discarded (larger than original)"; \
fi' \;
RUN find /app/out -type f \
! -name '*.gz' \
-exec brotli -k -Z {} \; \
-exec sh -c 'brotlifile="{}.br"; \
gzipfile="{}.gz"; \
origfile="{}"; \
# Check if Brotli is larger than the original file \
if [ -f "$brotlifile" ] && [ $(stat -c%s "$brotlifile") -ge $(stat -c%s "$origfile") ]; then \
rm "$brotlifile"; \
echo "$brotlifile discarded (larger than original)"; \
elif [ -f "$brotlifile" ] && [ -f "$gzipfile" ] && [ $(stat -c%s "$brotlifile") -ge $(stat -c%s "$gzipfile") ]; then \
rm "$brotlifile"; \
echo "$brotlifile discarded (larger than gzip)"; \
fi' \;
RUN find /app/out -type f \
! -name '*.gz' \
! -name '*.br' \
-exec zstd -k --ultra -22 {} \; \
-exec sh -c 'zstdfile="{}.zst"; gzipfile="{}.gz"; brotlifile="{}.br"; origfile="{}"; \
if [ -f "$zstdfile" ] && [ $(stat -c%s "$zstdfile") -ge $(stat -c%s "$origfile") ]; then \
rm "$zstdfile"; \
echo "$zstdfile discarded (larger than original)"; \
elif [ -f "$zstdfile" ] && [ -f "$gzipfile" ] && [ $(stat -c%s "$zstdfile") -ge $(stat -c%s "$gzipfile") ]; then \
rm "$zstdfile"; \
echo "$zstdfile discarded (larger than gzip)"; \
elif [ -f "$zstdfile" ] && [ -f "$brotlifile" ] && [ $(stat -c%s "$zstdfile") -ge $(stat -c%s "$brotlifile") ]; then \
rm "$zstdfile"; \
echo "$zstdfile discarded (larger than brotli)"; \
fi' \;
The commands for this look a bit daunting, but they do exactly what we described: they iterate over every file in /app/out, compress it with Gzip, Brotli, and Zstandard, and keep a result only if it is strictly smaller than both the original file and every previously generated variant.
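Later, in the runner stage, Nginx chooses among these variants per request based on the Accept-Encoding header. Here is a toy model of that negotiation; this is a sketch, not how the modules are implemented, and the preference order is our assumption:

```python
PREFERENCE = ['br', 'zstd', 'gzip']              # assumed server-side order
SUFFIX = {'br': '.br', 'zstd': '.zst', 'gzip': '.gz'}

def pick_variant(path, accept_encoding, existing):
    """Return (file to serve, Content-Encoding or None): the first
    encoding the client accepts for which a precompressed sibling
    exists on disk, falling back to the identity file."""
    accepted = {tok.split(';')[0].strip()
                for tok in accept_encoding.split(',') if tok.strip()}
    for enc in PREFERENCE:
        candidate = path + SUFFIX[enc]
        if enc in accepted and candidate in existing:
            return candidate, enc
    return path, None

files = {'index.html', 'index.html.br', 'index.html.gz'}
print(pick_variant('index.html', 'gzip, br', files))  # → ('index.html.br', 'br')
print(pick_variant('index.html', 'gzip', files))      # → ('index.html.gz', 'gzip')
print(pick_variant('index.html', '', files))          # → ('index.html', None)
```

Because the build step deleted every variant that wasn't strictly smaller, whatever this lookup finds is guaranteed to beat the identity file.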
Everything up to this point has been about creating and optimizing the website. The runner stage's only job is to deliver it without adding latency or attack surface. The runner contains nothing but the bytes you ship. It is a stripped-down Alpine base plus Nginx and its Brotli/Zstd modules. No Node, no Python, no GCC, just static files served with modern content-negotiation so browsers always pick the smallest available variant.
FROM alpine
RUN apk add brotli nginx nginx-mod-http-brotli nginx-mod-http-zstd
COPY --from=builder /app/out /usr/share/nginx/html
COPY --from=builder /app/nginx/nginx.conf /etc/nginx/nginx.conf
COPY --from=builder /app/nginx/default.conf /etc/nginx/conf.d/default.conf
EXPOSE 8080
CMD ["nginx", "-g", "daemon off;"]
Importantly, we need to make sure our web server understands what to do with the precompressed assets. In our case, we use Nginx with the nginx-mod-http-brotli and nginx-mod-http-zstd modules, which pick the best variant based on the Accept-Encoding request header. In our nginx.conf, we activate the static compression abilities as follows.
load_module modules/ngx_http_brotli_static_module.so;
load_module modules/ngx_http_zstd_static_module.so;
http {
gzip_static on; # .gz
brotli_static on; # .br (requires module above)
zstd_static on; # .zst (requires module above)
brotli off; # disable dynamic brotli
zstd off; # disable dynamic Zstd
}
In our default.conf for Nginx, we set the caching headers for all assets as described above. These headers tell the client to cache fonts, CSS, JS and images for up to a year, while never caching any HTML files. Thanks to our cache-busting script, we will never serve outdated files.
server {
location / {
add_header Cache-Control "no-cache, must-revalidate";
}
location ~* \.(avif|css|js|jpg|jpeg|png|gif|ico|svg|woff|woff2|ttf|otf)$ {
add_header Cache-Control "public, max-age=31536000, immutable";
access_log off;
}
}
Shipping a lightning-fast site is not about sprinkling a CDN on top of gigabyte bundles; it's about questioning every byte from the moment it's authored to the instant it hits the wire. By chaining Eleventy with a multi-stage Docker build, we turned that philosophy into a reproducible pipeline.
The payoff is real: our whole site is hosted on a 5€ server and achieves a perfect Lighthouse score.

Faster first paint, lower bills, happier readers, and a workflow you can easily drop into any CI system.
Feel free to use the Dockerfile below, tweak the Nginx config, or swap Eleventy for your static site generator of choice. The principles stay the same: measure, remove, compress, cache. Your users will thank you.
Below is the multi-stage Dockerfile that orchestrates the entire build and deploy process for your reference.
# Dependency installer
FROM node:22-alpine AS dependencies
WORKDIR /app
COPY package*.json ./
RUN npm ci
# Builder
FROM node:22-alpine AS builder
WORKDIR /app
##############################################################
##################Install Optimization Tools##################
##############################################################
RUN apk add zopfli brotli zstd py3-fonttools py3-brotli py3-beautifulsoup4
RUN npm install postcss postcss-cli autoprefixer cssnano postcss-calc purgecss
##############################################################
########################Build the Site########################
##############################################################
COPY --from=dependencies /app/node_modules ./node_modules
COPY . .
RUN npm run make
##############################################################
########################CSS Purging###########################
##############################################################
RUN echo "module.exports = {" \
" content: ['/app/out/**/*.html']," \
" css: ['/app/out/**/*.css']," \
" output: '/app/out/css'," \
" variables: true," \
" keyframes: true," \
" defaultExtractor: content => content.match(/[\w-/:]+(?<!:)/g) || []" \
"};" > /app/config.js
RUN npx purgecss --config config.js
##############################################################
###########CSS Vendor Prefixing and Minification##############
##############################################################
RUN npx postcss /app/out/**/*.css --use autoprefixer cssnano postcss-calc --replace --verbose --no-map
##############################################################
#################HTML and JS Minification#####################
##############################################################
RUN npm install --no-save html-minifier-terser terser
ENV PATH="/app/node_modules/.bin:${PATH}"
# minify HTML
RUN find /app/out -type f -name "*.html" -print0 | \
xargs -0 -I{} sh -c \
'html-minifier-terser \
--collapse-whitespace \
--sort-attributes \
--remove-comments \
--remove-empty-attributes \
--remove-attribute-quotes \
--remove-redundant-attributes \
--remove-script-type-attributes \
--remove-style-link-type-attributes \
--minify-css true \
--minify-js true \
--sort-class-name \
--decode-entities \
--use-short-doctype \
--minify-urls true \
-o "{}" "{}"'
# minify JS
RUN find /app/out -type f -name "*.js" -print0 | \
xargs -0 -I{} sh -c \
'terser "{}" \
--compress passes=3,unsafe=true,unsafe_arrows=true,unsafe_methods=true,drop_console=true \
--mangle toplevel,properties \
--ecma 2024 \
-o "{}"'
##############################################################
#####################Font Subsetting##########################
##############################################################
RUN python3 glyph_extractor.py
# subsetting of the fonts
RUN pyftsubset ./layout/font/lexend/Lexend-SemiBold-Font.woff2 \
--output-file=/app/out/font/lexend/Lexend-SemiBold-Font.woff2 \
--flavor=woff2 \
--no-hinting \
--text-file=/semibold.txt
RUN pyftsubset ./layout/font/lexend/Lexend-Regular-Font.woff2 \
--output-file=/app/out/font/lexend/Lexend-Regular-Font.woff2 \
--flavor=woff2 \
--no-hinting \
--text-file=/regular.txt
##############################################################
######################Cache Busting###########################
##############################################################
RUN python3 cache_buster.py
##############################################################
####################Static Compression########################
##############################################################
# Step to generate gzip and Brotli compressed files
RUN find /app/out -type f \
-exec zopfli -k -15 {} \; \
-exec sh -c 'gzipfile="{}.gz"; \
if [ -f "$gzipfile" ] && [ $(stat -c%s "$gzipfile") -ge $(stat -c%s "{}") ]; then \
rm "$gzipfile"; echo "$gzipfile discarded (larger than original)"; \
fi' \;
RUN find /app/out -type f \
! -name '*.gz' \
-exec brotli -k -Z {} \; \
-exec sh -c 'brotlifile="{}.br"; \
gzipfile="{}.gz"; \
origfile="{}"; \
# Check if Brotli is larger than the original file \
if [ -f "$brotlifile" ] && [ $(stat -c%s "$brotlifile") -ge $(stat -c%s "$origfile") ]; then \
rm "$brotlifile"; \
echo "$brotlifile discarded (larger than original)"; \
elif [ -f "$brotlifile" ] && [ -f "$gzipfile" ] && [ $(stat -c%s "$brotlifile") -ge $(stat -c%s "$gzipfile") ]; then \
rm "$brotlifile"; \
echo "$brotlifile discarded (larger than gzip)"; \
fi' \;
RUN find /app/out -type f \
! -name '*.gz' \
! -name '*.br' \
-exec zstd -k --ultra -22 {} \; \
-exec sh -c 'zstdfile="{}.zst"; gzipfile="{}.gz"; brotlifile="{}.br"; origfile="{}"; \
if [ -f "$zstdfile" ] && [ $(stat -c%s "$zstdfile") -ge $(stat -c%s "$origfile") ]; then \
rm "$zstdfile"; \
echo "$zstdfile discarded (larger than original)"; \
elif [ -f "$zstdfile" ] && [ -f "$gzipfile" ] && [ $(stat -c%s "$zstdfile") -ge $(stat -c%s "$gzipfile") ]; then \
rm "$zstdfile"; \
echo "$zstdfile discarded (larger than gzip)"; \
elif [ -f "$zstdfile" ] && [ -f "$brotlifile" ] && [ $(stat -c%s "$zstdfile") -ge $(stat -c%s "$brotlifile") ]; then \
rm "$zstdfile"; \
echo "$zstdfile discarded (larger than brotli)"; \
fi' \;
# Runner
FROM alpine
RUN apk add brotli nginx nginx-mod-http-brotli nginx-mod-http-zstd
# copy files from the builder
COPY --from=builder /app/out /usr/share/nginx/html
COPY --from=builder /app/nginx/nginx.conf /etc/nginx/nginx.conf
COPY --from=builder /app/nginx/default.conf /etc/nginx/conf.d/default.conf
EXPOSE 8080
CMD ["nginx", "-g", "daemon off;"]