Speed up your web app (PWA) like a pro: a TypeScript developer’s 4-step routine
Is your Progressive Web App sluggish on first load? Do your users complain about long startup times, janky navigation, or stale caches after deploys? If you ship TypeScript React apps at scale, you already know the symptoms — but not always the fastest remedies.
Inspired by a pragmatic 4-step Android tune-up, this guide translates that routine into a web-first workflow you can run on any TypeScript PWA: profile, split, lazy-load, tune service workers & telemetry. Each step is practical, measurable, and includes code you can paste into your repo.
Why this matters in 2026
Browsers and frameworks have improved dramatically through late 2024–2026: native module preloading is ubiquitous, React's concurrent features are mainstream, and tooling (esbuild, SWC, Turbopack) makes fast dev builds trivial. But user expectations rose in parallel — and real-world performance still depends on how you architect bundles, lazy-load behavior, and observe your app in production. This routine gives you a repeatable playbook to make a PWA feel "like new" again.
Overview: the 4-step routine
- Profile where time is spent (startup, hydration, long tasks, network)
- Bundle splitting to reduce initial bytes and JavaScript parse/compile
- Lazy loading UI, routes, heavy logic, and assets
- Service worker tuning & telemetry to manage cache freshness and measure real users
Step 1 — Profile like a pro (and automate it)
Profiling is the diagnostic step. Skip it and you’ll guess at optimizations. Use both lab tools and real-user telemetry.
Local and lab profiling
- Chrome DevTools Performance panel: measure Scripting, Rendering, Painting, and Long Tasks.
- Lighthouse 10+ (2025/2026): run in CI with throttling profiles that match your target devices.
- Profile bundle parse/compile with the Coverage panel and the new JS Profiler to see hot code paths.
Actionables:
- Open DevTools > Performance. Record a cold navigation and analyze the Main thread waterfall. Look for long tasks (> 50ms) and heavy scripting spikes.
- Use Lighthouse CI to track lab metrics across PRs. Configure budgets so PRs fail if initial JS exceeds target bytes. For automation and infra-as-code around these checks, teams often pair Lighthouse runs with IaC templates for automated verification so CI reproducibly runs the same checks.
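A byte budget is easy to enforce with a small script in CI. The sketch below assumes a hypothetical `Chunk` shape — it is not the output format of any particular analyzer, so adapt the parsing to whatever stats file your bundler emits:

```typescript
// Hypothetical budget gate for CI; the Chunk shape is illustrative.
interface Chunk {
  name: string;
  gzipBytes: number;
  initial: boolean; // part of the first-load bundle?
}

export function checkBudget(
  chunks: Chunk[],
  budgetBytes: number
): { ok: boolean; initialBytes: number } {
  // Sum only the chunks fetched on first load; lazy chunks don't count.
  const initialBytes = chunks
    .filter((c) => c.initial)
    .reduce((sum, c) => sum + c.gzipBytes, 0);
  return { ok: initialBytes <= budgetBytes, initialBytes };
}
```

Exit non-zero from the CI step when `ok` is false to fail the PR.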
Real User Monitoring (RUM)
Lab results help, but real-world devices tell the truth. In 2026, integrate a lightweight RUM pipeline (OpenTelemetry RUM + your backend or services like Sentry/Datadog/Vercel RUM) to capture Largest Contentful Paint (LCP), Interaction to Next Paint (INP), TTFB and resource timing. If you run edge functions or serverless workers, the choice of runtime (Cloudflare Workers vs AWS Lambda) and the free-tier tradeoffs matter for EU-sensitive or latency-critical telemetry collection; see comparisons of free-tier edge runtimes for guidance (Free-tier face-off: Cloudflare Workers vs AWS Lambda).
Minimal TypeScript RUM starter using the Performance API and send-beacon:
/* rum.ts */
export function captureVitals() {
  if (!('performance' in window) || !navigator.sendBeacon) return;

  const send = (payload: unknown) => {
    navigator.sendBeacon('/__rum', JSON.stringify(payload));
  };

  const obs = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      send({ type: entry.entryType, name: entry.name, start: entry.startTime });
    }
  });
  // Observe both first paints and LCP; `buffered` replays entries
  // that fired before the observer was registered.
  obs.observe({ type: 'paint', buffered: true });
  obs.observe({ type: 'largest-contentful-paint', buffered: true });
}
Hook captureVitals() in your TypeScript app root. In production, sample users (1–5%) to avoid costs but preserve signal.
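The sampling decision itself is one line of logic; keeping the random roll injectable makes it testable. A minimal sketch (the function name is an assumption, not part of any RUM SDK):

```typescript
// Sketch: decide once at startup whether this session reports RUM data.
// `roll` defaults to Math.random() but can be injected for tests.
export function shouldSample(rate: number, roll: number = Math.random()): boolean {
  return rate > 0 && (rate >= 1 || roll < rate);
}

// Hypothetical usage at app boot:
// if (shouldSample(0.02)) captureVitals(); // report for ~2% of sessions
```

Persist the decision (e.g., in sessionStorage) if you want a whole session to report consistently rather than deciding per page load.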
Step 2 — Bundle splitting: put weight on demand
After profiling you'll know which modules land in the initial bundle. The rule is simple: keep the initial JS small, move heavy or rarely-used code into separately fetched chunks.
Strategies that work
- Route-based splitting — the lowest friction for SPAs.
- Vendor splitting — separate large dependencies like charting libs or maps.
- Component-level splits — lazy-load heavy components used in modals or admin flows.
- Critical CSS / inline small runtime — inline a minimal app-shell and CSS to render above-the-fold faster.
TypeScript + React example (Vite / esbuild friendly)
Use dynamic imports with React.lazy or a route-based loader. Vite and modern bundlers create separate chunks automatically for dynamic imports.
// AppRouter.tsx
import React, { Suspense } from 'react';
import { createBrowserRouter, RouterProvider } from 'react-router-dom';

const Home = React.lazy(() => import('./routes/Home'));
const Dashboard = React.lazy(() => import('./routes/Dashboard'));

const router = createBrowserRouter([
  { path: '/', element: <Home /> },
  { path: '/dashboard', element: <Dashboard /> },
]);

export default function AppRouter() {
  return (
    <Suspense fallback={<div>Loading…</div>}>
      <RouterProvider router={router} />
    </Suspense>
  );
}

Tip: use import(/* webpackChunkName: "dashboard" */ './routes/Dashboard') comments only if you’re on Webpack — modern tools ignore them or have equivalents (Vite uses file-based chunk naming).
Analyze and enforce splits
- Use bundle analyzers (webpack-bundle-analyzer for Webpack, rollup-plugin-visualizer for Rollup and Vite) to spot large chunks.
- Set CI budgets (e.g., initial JS < 150–250 KB gzipped for mobile first visits) and fail PRs that regress.
- Pin large deps to dynamic imports: import('chart.js') inside a Chart component instead of top-level imports.
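When you pin a heavy dependency behind a dynamic import, it helps to share one load promise between a hover/viewport prefetch and the first real use. A small sketch of that pattern (the chart.js path is illustrative):

```typescript
// Sketch: memoize a dynamic-import loader so prefetch and first render
// share a single load promise instead of racing separate calls.
export function once<T>(load: () => Promise<T>): () => Promise<T> {
  let promise: Promise<T> | undefined;
  return () => (promise ??= load());
}

// Hypothetical usage:
// const loadChart = once(() => import('chart.js'));
// <a onMouseEnter={() => loadChart()}>…</a>  — warms the chunk
// await loadChart();                          — resolves instantly if warmed
```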
Step 3 — Lazy loading: smart, predictable delivery
Lazy loading is more than React.lazy. In 2026 you should use a combination of preloading, priority hints, and React's concurrent patterns to make interactions feel instant without transferring unnecessary bytes.
Patterns and code
- Priority resources: use <link rel="modulepreload" /> or rel=preload for critical chunks you know you’ll need (e.g., next route prefetch).
- Prefetch for likely next actions: import(/* webpackPrefetch: true */ './Next') or use intersection observers to prefetch when a link scrolls into view.
- Use React transitions: start transitions with useTransition() for non-urgent UI updates to keep interactions snappy.
Prefetch-on-hover example (TypeScript):
// prefetchLink.ts
export function prefetchModule(getImport: () => Promise<unknown>) {
  if ('requestIdleCallback' in window) {
    (window as any).requestIdleCallback(() => { getImport(); });
  } else {
    // best-effort fallback after 2s
    setTimeout(() => getImport(), 2e3);
  }
}
// Usage in a Link component
// <a href="/next" onMouseEnter={() => prefetchModule(() => import('./Next'))}>Next</a>
Assets and images
Images and fonts often dominate bytes. Use responsive images (srcset), loading="lazy", and modern formats (AVIF/WebP). For critical icons, inline small SVGs.
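As a small illustration of the srcset pattern, here is a sketch that assumes you pre-generate variants at fixed widths with a `-{width}w` naming convention — the convention is hypothetical, so match it to your image pipeline:

```typescript
// Sketch: build a srcset string for hypothetical pre-generated
// variants named like /img/hero-480w.avif, /img/hero-960w.avif, …
export function buildSrcSet(base: string, widths: number[], ext = 'avif'): string {
  return widths.map((w) => `${base}-${w}w.${ext} ${w}w`).join(', ');
}

// Hypothetical usage in JSX:
// <img srcSet={buildSrcSet('/img/hero', [480, 960, 1440])}
//      sizes="(max-width: 600px) 100vw, 600px" loading="lazy" alt="…" />
```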
Step 4 — Service workers, cache hygiene, and telemetry
Service workers are the power tools for PWAs — but misconfigured caches are the most common cause of "stale app" complaints. Combine a sensible caching strategy with robust telemetry and deployment hooks so users see the latest code without losing offline reliability. If you rely on edge runtimes to speed TTFB, compare the tradeoffs of edge offerings (Cloudflare Workers vs Lambda) for your telemetry and update strategies (Cloudflare Workers vs AWS Lambda).
Service worker fundamentals (TypeScript + Workbox)
Use an app-shell strategy: precache the minimal shell, runtime cache for API responses and large assets, and control updates so users get new versions at appropriate times.
// sw.ts (Workbox-style; compile with workbox-build or a Vite PWA plugin)
import { precacheAndRoute } from 'workbox-precaching';
import { registerRoute } from 'workbox-routing';
import { StaleWhileRevalidate, CacheFirst } from 'workbox-strategies';
import { ExpirationPlugin } from 'workbox-expiration';

// @ts-ignore: self.__WB_MANIFEST injected by build step
precacheAndRoute(self.__WB_MANIFEST || []);

// Static assets: cache-first with bounded entries and age
registerRoute(
  /\.(?:png|jpg|jpeg|webp|avif|svg)$/i,
  new CacheFirst({
    cacheName: 'images-cache-v1',
    plugins: [
      new ExpirationPlugin({ maxEntries: 100, maxAgeSeconds: 60 * 60 * 24 * 30 }),
    ],
  })
);

// API: stale-while-revalidate to keep UI fast
registerRoute(/\/api\//, new StaleWhileRevalidate({ cacheName: 'api-cache-v1' }));

// Update lifecycle: activate the waiting worker when the page asks for it
self.addEventListener('message', (event) => {
  if (event.data === 'SKIP_WAITING') {
    self.skipWaiting();
  }
});
Important: call skipWaiting only after you’ve notified clients and optionally let users accept the update. Consider an in-app banner: "New version available — reload to apply." For critical fixes, a forced skipWaiting can be used, but use sparingly.
Cache invalidation patterns
- Precache manifests (content-hashed asset names) so the service worker can remove old assets automatically.
- Network-first for APIs that must be fresh, with short cache lifetimes for offline edge cases.
- Stale-while-revalidate for UI data where near-fresh is acceptable and perceived latency matters most.
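One concrete piece of cache hygiene: on activate, delete runtime caches whose versioned names are no longer current. The decision logic is pure and easy to test; the handler shown in comments is a sketch, with cache names as assumptions:

```typescript
// Sketch: given every cache name present, return the stale ones to delete.
export function staleCaches(existing: string[], current: string[]): string[] {
  const keep = new Set(current);
  return existing.filter((name) => !keep.has(name));
}

// Hypothetical activate handler in the service worker:
// self.addEventListener('activate', (evt) => {
//   evt.waitUntil(
//     caches.keys().then((names) =>
//       Promise.all(
//         staleCaches(names, ['images-cache-v1', 'api-cache-v1'])
//           .map((name) => caches.delete(name))
//       )
//     )
//   );
// });
```

Bumping the version suffix (e.g., images-cache-v2) then becomes a one-line invalidation.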
Telemetry: tie it all together
After service workers, measure the impact. Track service-worker-specific events (install, activate, fetch durations), cache hits/misses, and update acceptance. Use OpenTelemetry, a small custom beacon, or a vendor RUM product. For teams operating at the edge or evaluating edge-driven personalization, reviews of affordable edge bundles and platform designs can be helpful context (Affordable Edge Bundles for Indie Devs), and broader discussions about resilient, cloud-native observability are also relevant (Beyond Serverless: Resilient Cloud‑Native Architectures).
// sw-metrics.ts — post from SW to analytics endpoint
self.addEventListener('fetch', (evt) => {
  const start = performance.now();
  evt.respondWith(
    caches.match(evt.request).then((cached) => {
      const took = performance.now() - start;
      // Tell every open client whether this request was a cache hit
      self.clients.matchAll().then((clients) => {
        clients.forEach((client) => {
          client.postMessage({ type: 'SW_FETCH', url: evt.request.url, cache: !!cached, took });
        });
      });
      return cached || fetch(evt.request);
    })
  );
});
In the client, listen for these postMessages and aggregate to your RUM pipeline. This reveals cache utilization and where your SW helped (or hurt) latency. If you automate telemetry and verification steps in CI, pair your monitoring with reproducible IaC verification templates (IaC templates for automated verification).
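The aggregation step can be a pure function, which keeps the message listener trivial and the math testable. A sketch, assuming the SW_FETCH payload shape used in sw-metrics.ts:

```typescript
// Sketch: aggregate SW_FETCH messages client-side before beaconing.
interface SwFetch {
  url: string;
  cache: boolean; // served from cache?
  took: number;   // lookup duration in ms
}

export function summarize(events: SwFetch[]) {
  const total = events.length;
  const hits = events.filter((e) => e.cache).length;
  const avgMs = total ? events.reduce((s, e) => s + e.took, 0) / total : 0;
  return { total, hits, hitRate: total ? hits / total : 0, avgMs };
}

// Hypothetical wiring:
// navigator.serviceWorker.addEventListener('message', (e) => {
//   if (e.data?.type === 'SW_FETCH') buffer.push(e.data);
// });
// …then send summarize(buffer) on pagehide via sendBeacon.
```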
Putting it into CI/CD and workflows
Turn the routine into automation so every release keeps your app feeling fresh.
- CI Lighthouse runs on a baseline profile. Fail if LCP/INP degrade beyond thresholds. Consider integrating Lighthouse CI with reproducible infra checks and verification in CI (see IaC templates linked above).
- Bundle analyzer step — fail PRs that introduce large initial chunks.
- Deploy: bake the precache manifest and ensure hashed asset filenames. Post-deploy, trigger a push notification to clients (via service worker messages) to let them pick up updates safely.
- Monitor RUM metrics post-deploy. Roll back if key metrics regressed beyond your SLOs. Teams that operate across many microfrontends often coordinate releases with tooling and agent workflows; when adopting microfrontends or module federation, consider developer-toolchain risks and when to gate automation (Autonomous agents in the developer toolchain).
Sample GitHub Action snippets
# .github/workflows/perf.yml (snippets)
- name: Run Lighthouse CI
  uses: treosh/lighthouse-ci-action@v11
  with:
    urls: 'https://staging.example.com'

- name: Run bundle analyzer
  run: pnpm analyze:bundle
Common pitfalls and troubleshooting
These are the real issues teams hit when following this routine.
- Over-lazy-loading: splitting every tiny module increases roundtrips and CPU overhead on low-end devices. Aim for a balance—coalesce small modules.
- Hidden parser costs: even small chunks cost parse time. Use source-map-explorer to find large, nested dependency trees (date libraries, lodash subsets).
- Stale service worker: users see old UI after deploy. Implement a clear update UX and consider a short max-age for the HTML entry document so the SW can detect a new version quickly.
- Telemetry blind spots: sampling too little or missing key events (navigation vs. hydration) obscures regressions. Capture at least LCP, INP, TTFB, and client-side errors. If you need to correlate client RUM with business signals in near-real-time (e.g., pricing or inventory feeds), build monitoring workflows that mimic price/alert systems used in commerce analytics (monitoring & alert workflows).
Advanced tactics for 2026
For teams at scale or with strict SLAs, add these techniques.
- Edge-driven personalization: render a minimal shell at the edge (e.g., Edge bundles for indie devs / Edge Functions) to reduce TTFB while delivering a smaller client bundle.
- Module Federation / Microfrontends: share runtime dependencies across federated remotes to reduce duplicate code in multi-team apps. Use with hashed dependency manifests to avoid version drift; be mindful of developer-toolchain automation and when to gate autonomous agents (developer toolchain agent guidance).
- Continuous profiling: capture production CPU profiles (sampled) to detect hot code paths and long tasks automatically.
- Client-side feature flags: gate heavy features to a subset of users during rollout to measure impact before full release.
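Client-side gating only works for performance measurement if the same user stays in the same bucket across visits. A sketch of deterministic bucketing — the hash is illustrative (any stable string hash works), and the user-id source is an assumption:

```typescript
// Sketch: deterministic percentage rollout by hashing a stable user id.
// The simple 31-multiplier hash is illustrative, not cryptographic.
export function inRollout(userId: string, percent: number): boolean {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // keep as unsigned 32-bit
  }
  return hash % 100 < percent;
}

// Hypothetical usage:
// if (inRollout(session.userId, 10)) enableHeavyFeature();
```

Because the bucket is a pure function of the id, RUM dashboards can segment metrics by flag state without storing extra per-user data.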
Actionable checklist (do this now)
- Run DevTools Performance and a Lighthouse report on a representative device. Note top 3 long tasks.
- Identify the biggest bundles with a bundle analyzer. Move one large dependency to a dynamic import.
- Implement route-level lazy loading for the heaviest route; add modulepreload for its immediate neighbor route.
- Audit your service worker: ensure HTML is network-first with a short TTL and assets are content-hashed and precached. Add an "update available" UX.
- Ship a lightweight RUM event that captures LCP and INP for 2% of users. Observe changes after deploys for one week.
"Treat performance like a product feature: measure, prioritize, ship, and observe."
Key takeaways
- Profile first — don’t guess. Lab + RUM = truth.
- Split smart — large initial bundles are the common culprit.
- Lazy load predictively — prefetch what users will likely need next.
- Tune your SW & telemetry — cache hygiene and observable metrics ensure fast, fresh experiences.
Next steps & call to action
Ready to make your PWA feel like new? Start by adding the RUM snippet and running a Lighthouse CI job. If you maintain a TypeScript monorepo or use Turbopack/Vite, try splitting a single heavy route this sprint and measure the difference in LCP and INP.
Share your results or ask for a short review: paste a bundle analyzer snapshot or a Lighthouse report into your team chat and use the checklist above. If you want hands-on help, clone a sample TypeScript PWA template (link below) and run the 4-step routine in a staging environment.
Ship faster, measure continuously, and keep your users delighted — your PWA can feel like new again with a few disciplined steps.
Related Reading
- Free-tier face-off: Cloudflare Workers vs AWS Lambda for EU-sensitive micro-apps
- Beyond Serverless: Designing Resilient Cloud‑Native Architectures for 2026
- Field Review: Affordable Edge Bundles for Indie Devs (2026)
- IaC templates for automated software verification: Terraform/CloudFormation patterns