Performance Optimization

Last reviewed on 2026-04-28

Maximize website speed with optimal HTTP compression strategies

Compression Performance Metrics

Compression Ratio

  • Gzip: 60-80% size reduction for text
  • Brotli: 15-25% smaller output than gzip
  • Zstandard: ratios similar to Brotli, with faster compression
  • Deflate: effectively the same as gzip (same algorithm, lighter wrapper)

Compression Speed

  • Zstandard: Fastest compression
  • Gzip: Good balance
  • Brotli: Slower compression
  • Higher levels = slower speed

Decompression Speed

  • All algorithms decompress quickly relative to compression
  • Zstandard: slightly faster than the others
  • Client CPU impact: minimal on modern hardware
  • Mobile: still worth testing on real devices

Optimization Strategies

Content-Type Optimization

  • Text files: HTML, CSS, JS, JSON, XML - Always compress
  • Images: Already compressed - Skip compression
  • Fonts: WOFF/WOFF2 pre-compressed - Skip
  • Media: Video/audio pre-compressed - Skip
  • Small files: Under 1KB - Consider skipping
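The rules above can be sketched as a small shell helper. This is an extension-based stand-in for real Content-Type inspection, and the extension lists are illustrative, not exhaustive:

```shell
#!/bin/sh
# Decide whether to compress a file based on its extension.
# Mirrors the guidance above: compress text, skip pre-compressed formats.
should_compress() {
  case "$1" in
    *.html|*.css|*.js|*.json|*.xml|*.svg|*.txt) echo "compress" ;;
    *.png|*.jpg|*.jpeg|*.gif|*.webp|*.avif)     echo "skip" ;;  # images: already compressed
    *.woff|*.woff2)                             echo "skip" ;;  # fonts: pre-compressed
    *.mp4|*.webm|*.mp3|*.ogg)                   echo "skip" ;;  # media: pre-compressed
    *)                                          echo "skip" ;;  # unknown: safer to skip
  esac
}

should_compress app.js      # prints "compress"
should_compress photo.jpg   # prints "skip"
```

In a real server this decision belongs in configuration (e.g. a compressible-types list), but the mapping is the same.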

Compression Level Selection

Recommended Compression Levels

  1. Gzip Level 6: Default, good balance
  2. Brotli Level 4: Dynamic content
  3. Brotli Level 11: Static content (pre-compress)
  4. Zstandard Level 3: Real-time compression
  5. Zstandard Level 19: Static content
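A quick way to see the level/ratio trade-off for yourself is to compress the same input at several gzip levels and compare sizes. This sketch uses a synthetic text file; real assets will show different absolute numbers but the same shape:

```shell
#!/bin/sh
# Compress one input at gzip levels 1, 6, and 9 and compare output sizes.
seq 1 5000 > sample.txt   # synthetic, highly compressible text

for level in 1 6 9; do
  gzip -c -"$level" sample.txt > "sample.$level.gz"
  printf 'level %s: %s bytes\n' "$level" "$(wc -c < "sample.$level.gz")"
done
```

Adding `time` in front of the `gzip` call shows the corresponding speed cost as levels rise.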

Pre-compression Strategy

Build-time Compression

# Pre-compress static assets during build
# (brotli takes its level via -q; 11 matches the static-content recommendation)
find ./dist -type f \( -name "*.js" -o -name "*.css" -o -name "*.html" \) \
  -exec gzip -9 -k {} \; \
  -exec brotli -q 11 -k {} \;

Nginx Static Serving

# gzip_static ships with stock nginx; brotli_static requires the ngx_brotli module
location ~* \.(js|css|html)$ {
    gzip_static on;
    brotli_static on;
    expires 1y;
    add_header Cache-Control "public, immutable";
}

Performance Benchmarking

Compression Testing Script

#!/bin/bash
# Test compression ratios and speeds for a single file
set -e

FILE="test.html"
[ -f "$FILE" ] || { echo "missing $FILE" >&2; exit 1; }

echo "Testing compression for $FILE"
echo "Original size: $(wc -c < "$FILE") bytes"

# Test gzip (level 6: default balance)
time gzip -c -6 "$FILE" > "$FILE.gz"
echo "Gzip size: $(wc -c < "$FILE.gz") bytes"

# Test brotli (level 4: dynamic-content setting)
time brotli -c -q 4 "$FILE" > "$FILE.br"
echo "Brotli size: $(wc -c < "$FILE.br") bytes"

# Test zstd (level 3: real-time setting)
time zstd -c -3 "$FILE" > "$FILE.zst"
echo "Zstd size: $(wc -c < "$FILE.zst") bytes"

CDN and Edge Optimization

CDN Best Practices

  • Edge compression: Enable at CDN level
  • Origin optimization: Pre-compress when possible
  • Vary header: Ensure proper caching with Vary: Accept-Encoding
  • Compression bypass: Configure for already-compressed content
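A minimal nginx sketch of those practices, assuming the stock gzip module (directive names below are standard nginx):

```nginx
gzip on;
gzip_vary on;              # emit Vary: Accept-Encoding for caches
gzip_min_length 1024;      # skip tiny responses
# Only listed types are compressed; images, video, and fonts fall through
# uncompressed, which is the "compression bypass" behaviour described above.
gzip_types text/css application/javascript application/json application/xml image/svg+xml;
```

CDN-level equivalents vary by vendor, but the same three knobs (Vary, minimum size, type allow-list) appear in most edge configuration UIs.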

Mobile Performance

Mobile Optimization Tips

  • Lower compression levels for faster decompression
  • Consider network speed vs CPU trade-offs
  • Test on real devices, not just emulators
  • Monitor battery impact of decompression

Monitoring and Analytics

Performance Monitoring

// Track compression savings per resource (browser-only; relies on the
// Resource Timing API and same-origin or Timing-Allow-Origin resources)
if (typeof performance !== 'undefined' && performance.getEntriesByType) {
    performance.getEntriesByType('resource').forEach(resource => {
        if (resource.encodedBodySize && resource.decodedBodySize) {
            const saved = resource.decodedBodySize - resource.encodedBodySize;
            const ratio = (saved / resource.decodedBodySize * 100).toFixed(2);
            console.log(`${resource.name}: ${ratio}% saved by compression`);
        }
    });
}

A worked example: where the wins come from

Take a typical marketing landing page: 90 KB of HTML, 180 KB of CSS, 220 KB of JavaScript, and 450 KB of images and fonts. The text-shaped portion is 490 KB; the binary portion is 450 KB. Apply gzip at level 6 to the text and the bytes-on-the-wire for that part drop to roughly 130 KB. Apply Brotli at level 4 instead and you are closer to 110 KB. Apply Brotli at level 11 to a precompressed bundle of static assets and you can reach about 95 KB. The 450 KB of media does not move because it is already compressed end-to-end.

The interesting question is not "how much smaller did the text get" — the difference between 130 and 95 is 35 KB. The interesting question is what happens to time-to-interactive on a slow 3G connection where effective throughput sits around 400 Kbps. At that link rate, those 35 KB are about 700 ms of round-trip-amortised transfer. That is the real win, and it is why the recommendation to ship Brotli for static text is not academic: it directly moves the metric users feel.
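The arithmetic behind that 700 ms figure is worth having at hand, since the same calculation applies to any byte saving on any link:

```shell
#!/bin/sh
# Transfer time recovered = bytes saved / link throughput.
# Numbers from the paragraph above: 35 KB saved on a 400 Kbps link.
SAVED_KB=35
LINK_KBPS=400   # effective throughput in kilobits per second

# KB -> kilobits (x8), scale to milliseconds, divide by link rate.
MS=$(( SAVED_KB * 8 * 1000 / LINK_KBPS ))
echo "${MS} ms"   # prints "700 ms"
```

Run the same numbers against your own analytics percentiles to see which user cohorts actually feel a given saving.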

The catch is that the win disappears if you ask Brotli to do level 11 work on every dynamic response. Brotli at the highest level can take more than a second of CPU per megabyte of input on a single core. That is fine for a static asset compressed once at build time. It is catastrophic for a hot path serving JSON to thousands of API requests per second.

Decision criteria for picking a level

Pick by where the response sits in your delivery pipeline:

  • Static, hashed, immutable assets — precompress once at build time, ship at the highest practical level (Brotli 11, Zstandard 19). The compression cost is paid once; the bytes saved are amortised across every request for the lifetime of the cache key.
  • Static but mutable assets (e.g. HTML pages whose URL does not change) — precompress at a moderately high level (Brotli 9, gzip 9), keep the source on disk so the server can re-emit on revalidation, and let an edge cache absorb the request volume.
  • Dynamic responses with shared content (lists, search results, infrequently changing HTML) — on-the-fly Brotli at level 4–5 or gzip at level 6 strikes the typical balance between CPU and bytes.
  • Hot dynamic API responses (per-user JSON, high RPS) — gzip 4 or Zstandard 3 is usually the sweet spot. Above this, you are paying more in latency for compression than you save in transfer time.
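The decision table above can be sketched as a simple lookup. The class names and level choices mirror the bullets and are starting points to tune against your own measurements, not fixed answers:

```shell
#!/bin/sh
# Map a response class (as defined in the bullets above) to a codec/level.
pick_level() {
  case "$1" in
    static-immutable) echo "brotli 11" ;;   # precompress once at build time
    static-mutable)   echo "brotli 9" ;;    # precompress, let the edge absorb volume
    dynamic-shared)   echo "brotli 4" ;;    # on-the-fly, moderate CPU
    dynamic-hot)      echo "zstd 3" ;;      # high RPS, cheapest per byte
    *)                echo "gzip 6" ;;      # safe default
  esac
}

pick_level static-immutable   # prints "brotli 11"
pick_level dynamic-hot        # prints "zstd 3"
```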

Sizing CPU for on-the-fly compression

A useful back-of-envelope calculation: estimate the total bytes of compressible response your origin emits per second under peak load, multiply by the per-byte CPU cost of your chosen algorithm and level, and compare against the spare CPU on your application servers. Per-byte costs vary by hardware, but as a rough guide for current x86 server cores:

  • gzip level 6: roughly 50–100 MB/s per core.
  • Brotli level 4: roughly 50–80 MB/s per core.
  • Brotli level 11: 1–5 MB/s per core (build-time only).
  • Zstandard level 3: 200–400 MB/s per core.

If a fleet emits 30 MB/s of dynamic HTML, gzip 6 needs roughly half a core. Brotli 11 on the same workload would saturate six to thirty cores — which is why "just turn Brotli to 11" is the wrong answer for dynamic responses. Run a load test on a representative endpoint before committing to a level in production.
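The sizing rule reduces to one division: cores needed = compressible throughput ÷ per-core compression rate. A sketch using the rough guide figures above (substitute your own benchmark numbers):

```shell
#!/bin/sh
# Estimate cores needed for on-the-fly compression at peak load.
THROUGHPUT_MBS=30       # peak compressible output, MB/s (example from the text)

cores_needed() {  # $1 = per-core compression rate in MB/s
  # integer ceiling division
  echo $(( (THROUGHPUT_MBS + $1 - 1) / $1 ))
}

echo "gzip 6  (~75 MB/s/core): $(cores_needed 75) core(s)"   # 1
echo "brotli 11 (~3 MB/s/core): $(cores_needed 3) core(s)"   # 10
```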

Cache hit rate is part of the equation

An on-the-fly Brotli level 9 response that misses the edge cache 95% of the time costs you 95% of the CPU it would cost without caching. The same response cached for one minute on the edge can drop the origin's compression load to under 5% even at the same per-request level. Pre-compression and edge caching are usually the right pair: precompress what you can, cache what you must compress on the fly.

This also matters for the Vary: Accept-Encoding header. Without it, an upstream cache may serve a Brotli body to a gzip-only client. With it, the cache splits its working set across encodings — so a site that advertises gzip + Brotli + Zstandard effectively has three cache entries per URL. Plan storage accordingly, and read the dedicated guide for the cache-key normalisation that prevents it from blowing up in practice.

Mobile is where the savings are biggest

Compression matters most where bandwidth is scarce. On a fast home fibre connection, the difference between gzip and Brotli for a 50 KB stylesheet is invisible — both arrive in well under a frame. On a real-world cellular connection with 200 ms RTT and variable throughput, that difference can move metrics like Largest Contentful Paint by hundreds of milliseconds. Decompression CPU on a modern phone is essentially free at any normal compression level; the trade-off is entirely about bytes saved versus origin CPU spent.

For sites with significant mobile traffic, prioritise Brotli for text resources, keep gzip as the fallback, and compress per-request rather than skipping compression for "small" responses — the per-byte overhead of gzip on a 4 KB JSON payload is not worth optimising away on the server when the kilobytes it saves are felt directly by a user on a slow link.
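To check the small-payload claim on your own data, compress a representative payload and measure the saving. This sketch builds a synthetic ~4 KB JSON-ish payload; real savings depend on how repetitive your responses are:

```shell
#!/bin/sh
# Compress a ~4 KB JSON-like payload and report bytes saved.
printf '{"items":[' > payload.json
for i in $(seq 1 100); do
  printf '{"id":%d,"name":"item-%d","active":true},' "$i" "$i" >> payload.json
done
printf '{}]}' >> payload.json

ORIG=$(wc -c < payload.json)
gzip -c -6 payload.json > payload.json.gz
COMP=$(wc -c < payload.json.gz)
echo "original: ${ORIG} B, compressed: ${COMP} B, saved: $((ORIG - COMP)) B"
```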

Advanced Techniques

Shared Dictionary Compression

  • SDCH (deprecated)
  • Shared Brotli dictionaries
  • Custom dictionary creation
  • Delta encoding for updates

Streaming Compression

  • Chunked transfer encoding
  • Server-sent events compression
  • WebSocket compression
  • HTTP/2 header compression

Common Pitfalls

Avoid These Mistakes

  1. Double compression: Don't compress already-compressed files
  2. CPU overload: Monitor server CPU with high compression
  3. Missing Vary header: Can cause caching issues
  4. Ignoring small files: Compression overhead may increase size
  5. Wrong MIME types: Ensure correct content-type headers
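For pitfall 1, a cheap guard is to check for the gzip magic number (bytes 0x1f 0x8b) before compressing, so already-gzipped files are never compressed twice. A sketch:

```shell
#!/bin/sh
# Skip files that are already gzip-compressed, detected by magic bytes.
is_gzipped() {
  [ "$(od -An -tx1 -N2 "$1" | tr -d ' ')" = "1f8b" ]
}

echo "hello world" > plain.txt
gzip -c plain.txt > plain.txt.gz

is_gzipped plain.txt.gz && echo "plain.txt.gz: skip (already gzipped)"
is_gzipped plain.txt    || echo "plain.txt: compress"
```

Other pre-compressed formats (PNG, WOFF2, MP4) have their own magic numbers; in practice a content-type allow-list catches most of these before a byte-level check is needed.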