The basic idea
For a static asset that does not change between deploys, the server has two options. It can compress the asset on every request that asks for compression, paying the CPU cost each time. Or it can compress the asset once at build time, store the compressed form on disk next to the original, and have the server serve the precompressed file when a matching request arrives. The second option is precompression.
Precompression's appeal is that the work is done once, by a build agent that has time to spare, at the highest compression level the algorithm allows. Brotli at level 11 is orders of magnitude slower than level 4 but produces noticeably smaller output; level 4 is roughly what an on-the-fly server can afford. Precompressing at build time closes that gap.
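The level tradeoff is easy to see locally. Brotli may not be installed everywhere, so this sketch uses gzip, which shows the same effect in milder form (file names are illustrative):

```shell
# Generate a few hundred kilobytes of compressible text, then compare
# gzip's cheapest and most expensive levels.
seq 1 100000 > sample.txt
gzip -1 -c sample.txt > fast.gz   # roughly what on-the-fly compression costs
gzip -9 -c sample.txt > best.gz   # what a build step can afford
wc -c sample.txt fast.gz best.gz  # best.gz comes out smaller than fast.gz
```

On highly repetitive input the gap widens; on nearly incompressible input the two levels converge.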
When precompression pays off
The pattern is worth setting up when all of the following hold:
- The asset is part of a build artefact, served as-is until the next deploy.
- The asset is text-shaped (HTML, CSS, JavaScript, JSON, XML, SVG, web fonts that are not already woff2, source maps).
- The asset is large enough or requested often enough that compression matters — a single 200-byte JSON manifest is not the target.
- The serving stack has a feature like Nginx's gzip_static/brotli_static, Apache's mod_negotiation with MultiViews, or a CDN that respects a pre-existing Content-Encoding on origin responses.
It is not worth setting up for content that is already compressed (woff2 fonts, JPEG/PNG/WebP/AVIF images, MP4 video), small inline configuration, or per-user dynamic responses.
What to compress and at what level
A reasonable default for a build pipeline:
- gzip: level 9 on every text asset over a few hundred bytes. Decoders are universal.
- Brotli: level 11 on every text asset over a few hundred bytes. Required for any modern site that wants the best ratio.
- Zstandard: optional. Worth doing if your edge supports it; level 19 is the practical maximum.
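If you do add the Zstandard pass, a guarded sketch (assumes nothing beyond the optional zstd CLI; the sample file is made up):

```shell
printf 'body { margin: 0 }\n' > main.css
# Skip quietly when the zstd CLI is not installed.
if command -v zstd >/dev/null 2>&1; then
  zstd -19 -q -k -f main.css   # writes main.css.zst, keeps main.css
fi
```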
Files smaller than roughly 150–300 bytes often grow under compression due to algorithm framing overhead; many build scripts skip them entirely. The exact threshold depends on the algorithm and the file's compressibility — a 100-byte SVG of pure repetitive XML still shrinks; a 100-byte already-minified JS bundle does not.
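The overhead is easy to demonstrate: gzip's fixed header and trailer (plus the stored file name) outweigh any savings on a two-byte payload.

```shell
printf 'ok' > tiny.txt      # 2 bytes of payload
gzip -9 -k -f tiny.txt      # writes tiny.txt.gz
wc -c tiny.txt tiny.txt.gz  # the .gz is larger than the original
```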
Build-time precompression script
#!/bin/bash
# Precompress every text asset in dist/ at the highest practical level.
# (read -d '' is a bashism, so this needs bash rather than plain sh.)
set -eu
ROOT="${1:-dist}"
find "$ROOT" -type f \( \
    -name "*.html" -o -name "*.css" -o -name "*.js" -o \
    -name "*.json" -o -name "*.svg" -o -name "*.xml" -o \
    -name "*.txt" -o -name "*.map" \
  \) -size +200c -print0 |
while IFS= read -r -d '' f; do
  gzip -9 -k -f -- "$f"       # produces $f.gz
  brotli -q 11 -k -f -- "$f"  # produces $f.br
done
The -k flag in both tools keeps the original file alongside the compressed copies, which the server needs in order to serve a client that does not accept either encoding (Accept-Encoding: identity).
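A minimal illustration (the file name is made up): a client sending Accept-Encoding: identity is served app.css itself, which is why the original has to stay on disk next to its variants.

```shell
printf 'body { margin: 0 }\n' > app.css
gzip -9 -k -f app.css    # app.css stays in place; app.css.gz appears beside it
ls app.css app.css.gz
```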
Serving the variants: Nginx
Nginx ships with the ngx_http_gzip_static_module, which serves a precompressed .gz file when one exists. For Brotli, the third-party ngx_brotli module provides brotli_static. The two work the same way: when a request arrives for foo.css with Accept-Encoding: br, Nginx checks for foo.css.br on disk and serves that, setting Content-Encoding: br and the appropriate Vary header.
Nginx static-asset block
location ~* \.(css|js|html|svg|xml|json)$ {
    gzip_static on;
    brotli_static on;

    # Long cache for hashed asset filenames.
    expires 1y;
    add_header Cache-Control "public, immutable";

    # Make sure intermediaries respect per-encoding entries.
    add_header Vary Accept-Encoding;
}
Two things commonly trip people up here. First, brotli_static on; only works if the ngx_brotli module is compiled into the Nginx binary — the official packages on most distributions do not include it; the Nginx guide covers the install path. Second, the directives serve precompressed files only; if no precompressed variant exists they fall back to whatever your gzip on; / brotli on; directives say. That fallback is intentional and almost always what you want.
Serving the variants: Apache
Apache does not have an exact equivalent to gzip_static; the closest pattern uses mod_rewrite to map a request to its precompressed sibling when the request advertises the matching encoding. Apache then serves the .gz or .br file with the correct Content-Encoding set by mod_mime's AddEncoding.
Apache .htaccess for precompressed assets
<IfModule mod_rewrite.c>
  RewriteEngine On

  # Brotli: only if the client accepts br and the file exists.
  RewriteCond %{HTTP:Accept-Encoding} \bbr\b
  RewriteCond %{REQUEST_FILENAME}.br -s
  RewriteRule ^(.+)$ $1.br [L]

  # Gzip fallback.
  RewriteCond %{HTTP:Accept-Encoding} \bgzip\b
  RewriteCond %{REQUEST_FILENAME}.gz -s
  RewriteRule ^(.+)$ $1.gz [L]
</IfModule>

<FilesMatch "\.css\.br$">
  AddType text/css .br
  AddEncoding br .br
</FilesMatch>
<FilesMatch "\.js\.br$">
  AddType application/javascript .br
  AddEncoding br .br
</FilesMatch>
# Repeat the same pattern for .gz with AddEncoding gzip.

<IfModule mod_headers.c>
  Header append Vary Accept-Encoding
</IfModule>
The Apache guide covers the full set of FilesMatch blocks for every text type and the interaction with mod_deflate, which should be disabled for any path served via a precompressed variant so the already-compressed bytes are not compressed a second time.
Serving the variants: from a CDN
A CDN-fronted site can use precompression in either of two ways. In a passthrough configuration the origin holds .br and .gz alongside the originals; the CDN fetches whichever variant matches the client's Accept-Encoding and caches it. In an upload-driven configuration (Amazon S3 with CloudFront, Google Cloud Storage with Cloud CDN), the build uploads pre-compressed objects directly and the bucket serves them with the correct Content-Encoding metadata set on the object. The CDN guide covers the per-vendor mechanics.
Pipeline patterns that work
Three patterns dominate in real build systems:
- Post-build script. The bundler (Webpack, Vite, Rollup, esbuild) writes its output to dist/, and a separate shell or Node script walks the directory and runs gzip and brotli. Simple, easy to debug, and easy to tune which file types are included.
- Bundler plugin. Most major bundlers have a plugin (vite-plugin-compression, compression-webpack-plugin, etc.) that emits compressed siblings as part of the build. The advantage is that the plugin sees the bundler's content hashes and can match them; the disadvantage is one more plugin to track for security updates.
- CDN-side at upload. Object-storage CDNs accept compressed objects with explicit Content-Encoding metadata. The build uploads main.abcd.css and main.abcd.css.br as two separate objects and points routing rules at the right one based on Accept-Encoding.
Common mistakes
- Forgetting the original. A pipeline that overwrites foo.css with the gzip output instead of producing foo.css.gz alongside it leaves no fallback for clients that do not accept any encoding. Always keep the source.
- Compressing what is already compressed. Re-running brotli on a .woff2 font produces a slightly larger file and confuses the server. Skip already-compressed types in the build script.
- Mismatched modification times. Some servers compare timestamps between the source and the precompressed variant and ignore the variant if the source is newer. Build scripts should touch the variants to match the source after writing them.
- Cache-Control mismatches. If the precompressed and original variants have different Cache-Control headers, intermediaries can store inconsistent state. Set caching headers on the location, not the file extension.
- Skipping the Vary header. Even with precompression, Vary: Accept-Encoding is required so caches keep per-encoding entries separate. The Vary guide walks through the cache-key mechanics.
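The timestamp fix is a one-liner per variant; a sketch with made-up demo paths (touch -r copies the reference file's timestamp onto the variant):

```shell
mkdir -p dist
printf 'body { margin: 0 }\n' > dist/a.css
gzip -9 -k -f dist/a.css            # dist/a.css.gz written now
touch dist/a.css                    # source ends up newer than the variant
touch -r dist/a.css dist/a.css.gz   # re-sync: variant gets the source's mtime
```

In a real pipeline the last line runs once per variant, for example inside the same loop that produced the .gz and .br files.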
Verifying it works
Use curl to confirm the precompressed variant is being served:
$ curl -H "Accept-Encoding: br" -I https://example.com/main.css
HTTP/2 200
content-type: text/css
content-encoding: br
vary: Accept-Encoding
content-length: 4827
If the response comes back uncompressed, walk through the checks in the troubleshooting guide: confirm the .br file exists on disk, confirm the server has the static-serving module loaded, and confirm no upstream proxy is stripping the Content-Encoding header.
What does not need precompression
Dynamic responses, per-user pages, error responses generated on the fly, and anything served by a serverless function that hands you a string and expects you to compress it. For those, on-the-fly compression at a moderate level is the right answer; the performance optimisation page covers level selection.