Code splitting strategies
Slap yourself if
You think code splitting is just 'use dynamic import' or believe smaller initial bundles are always better.
Why this exists
Because many apps regress performance after adding code splitting: they trade parse time for network waterfalls, cache misses, and user-visible jank.
Mental model
Code splitting is a cache and scheduling strategy, not a size-reduction trick. You are deciding when, how, and at what cost code becomes executable.
- The module graph is partitioned into multiple chunks.
- Chunk boundaries introduce async loading and execution delays.
- Runtime loaders coordinate fetching, parsing, and execution.
- Poor boundaries amplify network latency and cache fragmentation.
Common mistakes
- Over-splitting into many tiny chunks.
- Splitting on routes that users immediately need.
- Ignoring shared dependencies causing duplication.
- Measuring bundle size instead of interaction latency.
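The shared-dependency mistake above is usually fixed at the bundler level rather than in application code. A hedged sketch of a webpack 5 configuration fragment (the `vendors` name is illustrative, and defaults vary by version):

```javascript
// webpack.config.js (fragment)
// Without a shared cache group, two lazy chunks that both import the
// same large library can each ship their own copy of it.
module.exports = {
  optimization: {
    splitChunks: {
      chunks: 'all', // also extract code shared between async chunks
      cacheGroups: {
        vendor: {
          test: /[\\/]node_modules[\\/]/,
          name: 'vendors', // one shared, cacheable chunk instead of N duplicates
        },
      },
    },
  },
};
```

A bundle analyzer, not the config, is what tells you whether duplication actually went away.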
Code splitting defers loading and execution by introducing async chunk boundaries, trading upfront parse cost for delayed availability and a different caching profile.
Red flags
- Equates code splitting with lazy loading only.
- Claims more splits always improve performance.
- Ignores runtime and cache implications.
- Cannot explain chunk duplication.