Implementing real-time traffic and incident reporting in TypeScript: an event-driven approach


2026-01-28
11 min read

Practical guide to building scalable real-time traffic and incident reporting in TypeScript using WebSockets, SSE, WebRTC and typed event contracts.

Ship reliable, typed real-time traffic updates without the guesswork

If you maintain a large mapping application or a fleet-tracking backend, you know the pain: noisy, untyped event streams, clients that silently break on schema changes, and unpredictable scaling when rush-hour traffic spikes arrive. In 2026, teams expect not just low latency but strong type safety, predictable backpressure handling, and observability across distributed services.

This guide gives a practical, event-driven approach to building real-time traffic and incident reporting in TypeScript. It compares WebSockets, Server-Sent Events (SSE), and WebRTC data channels for common traffic use cases, shows how to design typed event contracts using discriminated unions and protobuf, and explains patterns for scalability, versioning, and resiliency used by production systems today.

The fastest path to a robust system is a hybrid strategy: use a typed WebSocket layer for bidirectional signaling and subscription control, fall back to SSE for simple one-way feeds, and use WebRTC data channels for peer-assisted low-latency updates between nearby clients. Persist and fan out events through a message broker (Kafka, NATS, or Redis Streams) and serialize payloads with protobuf for compact, forward-compatible messages.

Key takeaways:

  • Define a single source of truth for event schemas and generate TypeScript types via codegen (ts-proto or protobuf-ts).
  • Start with a snapshot + delta model to reconcile client state after reconnects.
  • Use brokers and sticky routing to scale WebSocket stateful connections horizontally.
  • Implement versioning and schema evolution via protobuf optional fields and discriminants.
  • Monitor broker lag and backpressure, and emit client-level ACKs where needed.

Why the landscape in 2026 matters

By late 2025 and early 2026, WebTransport and wider QUIC adoption unlocked lower-latency transports. Edge platforms added more robust WebSocket and WebTransport support, and TypeScript-first tooling for protobuf and gRPC-web matured. This means teams can realistically deploy high-throughput, geo-distributed traffic feeds with strong typing across the stack.

The patterns below assume you want future-proof designs: protobuf for compact messages and forward/backwards compatibility, brokers for replay and partitioning, and TypeScript codegen so both server and clients share the same contracts.

Designing typed event contracts

The foundation is a clear, versioned event schema. Use a discriminated union for in-memory TypeScript safety and protobuf for wire format.

Event model

A traffic system needs events like snapshot, delta, incident, and heartbeat. Here is a concise TypeScript discriminated union for the contract used at runtime on the client and server.


// event-types.ts
export type RoadId = string

export interface BaseEvent {
  id: string // event id
  ts: number // epoch ms
}

export type TrafficEvent =
  | (BaseEvent & { type: 'snapshot'; roads: Array<{ id: RoadId; speed: number; incident?: string }>; seq: number })
  | (BaseEvent & { type: 'delta'; changes: Array<{ id: RoadId; speed?: number; incidentAdded?: string; incidentCleared?: boolean }>; seq: number })
  | (BaseEvent & { type: 'incident'; roadId: RoadId; severity: 'low' | 'med' | 'high'; description?: string })
  | (BaseEvent & { type: 'heartbeat' })

// Dispatcher helper: the switch narrows each case via the 'type' discriminant
export function handleEvent(e: TrafficEvent) {
  switch (e.type) {
    case 'snapshot':
      // apply snapshot
      break
    case 'delta':
      // apply delta
      break
    case 'incident':
      // show incident
      break
    case 'heartbeat':
      // keepalive
      break
  }
}

Keep message metadata like seq for ordered application, and include event-level ids for idempotency across retries.
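The idempotency half of that advice can be sketched as a small wrapper that remembers recently seen event ids and skips retried deliveries. The class and bound below are illustrative, not a fixed API:

```typescript
// Sketch: idempotent event application. Retried deliveries of the same
// event id become no-ops; the seen-set is bounded to cap memory.
type Ev = { id: string; ts: number }

class IdempotentApplier<E extends Ev> {
  private seen = new Set<string>()
  constructor(private apply: (e: E) => void, private maxRemembered = 10_000) {}

  // Returns true if the event was applied, false if it was a duplicate.
  handle(e: E): boolean {
    if (this.seen.has(e.id)) return false // duplicate delivery, skip
    this.seen.add(e.id)
    // Set iteration order is insertion order, so the first value is the oldest id
    if (this.seen.size > this.maxRemembered) {
      const oldest = this.seen.values().next().value
      if (oldest !== undefined) this.seen.delete(oldest)
    }
    this.apply(e)
    return true
  }
}
```

Wrap your real event handler in this before wiring it to the socket, and retries from the broker or transport become harmless.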

Protobuf for wire format

Protobuf provides compact encoding and forward/backwards compatibility when fields are added. Use ts-proto or protobuf-ts to generate TypeScript types directly from .proto files and keep your runtime types unified.


// traffic.proto
syntax = "proto3";
package traffic;

import "google/protobuf/wrappers.proto";

message RoadState {
  string id = 1;
  double speed = 2;
  string incident = 3; // optional, empty means none
}

message Snapshot {
  string id = 1;
  int64 ts = 2;
  repeated RoadState roads = 3;
  int64 seq = 4;
}

message DeltaChange {
  string id = 1;
  google.protobuf.DoubleValue speed = 2;
  string incidentAdded = 3;
  google.protobuf.BoolValue incidentCleared = 4;
}

message Delta {
  string id = 1;
  int64 ts = 2;
  repeated DeltaChange changes = 3;
  int64 seq = 4;
}

// ... incident, heartbeat

Generate TypeScript types during your CI build to ensure server and client code use the exact same contract. This prevents unexpected runtime errors from schema drift.

Transport choices: WebSockets, SSE, WebRTC, and WebTransport

Each transport has tradeoffs. Choose based on topology, directionality, and latency needs.
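Because the right transport can change over a product's life, it pays to code against a small abstraction from day one. A minimal sketch, with illustrative names (`TrafficTransport`, `InMemoryTransport` are not from any library):

```typescript
// Sketch: a transport-agnostic interface so WebSockets, SSE, or WebTransport
// can be swapped without touching subscription or reconciliation logic.
export interface TrafficTransport {
  send(topic: string, payload: Uint8Array): void
  onMessage(handler: (topic: string, payload: Uint8Array) => void): void
  close(): void
}

// In-memory loopback implementation, handy for unit tests.
export class InMemoryTransport implements TrafficTransport {
  private handlers: Array<(topic: string, payload: Uint8Array) => void> = []
  send(topic: string, payload: Uint8Array) {
    for (const h of this.handlers) h(topic, payload)
  }
  onMessage(handler: (topic: string, payload: Uint8Array) => void) {
    this.handlers.push(handler)
  }
  close() { this.handlers = [] }
}
```

The business logic only ever sees `TrafficTransport`, so moving a feed from SSE to WebTransport later is a wiring change, not a rewrite.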

WebSockets

Best for bidirectional feeds, subscription control, and when clients must send telemetry or ACKs. In 2026, WebSocket support is ubiquitous across browsers and serverless edge platforms.


// minimal typed WebSocket wrapper for Node using ws
import WebSocket, { WebSocketServer } from 'ws'
import { TrafficEvent } from './event-types'

class TypedWS {
  ws: WebSocket
  constructor(ws: WebSocket) { this.ws = ws }
  sendEvent(e: TrafficEvent) { this.ws.send(JSON.stringify(e)) }
}

const wss = new WebSocketServer({ port: 8080 })
wss.on('connection', ws => {
  const t = new TypedWS(ws)
  ws.on('message', msg => {
    // parse commands from client, e.g. subscribe
  })
  // send heartbeats
})

For production you will replace JSON with protobuf binary buffers to reduce payload size and CPU cost.

Server-Sent Events (SSE)

SSE is a simple fallback for one-way streaming when you don't need client messages. It works well for low-complexity dashboards and is easier to scale across CDN/edge platforms.


// simple Express SSE endpoint
import express from 'express'
const app = express()

app.get('/stream', (req, res) => {
  res.set({ 'Content-Type': 'text/event-stream', 'Cache-Control': 'no-cache' })
  res.write('retry: 10000\n\n')
  const send = (data: string) => res.write(`data: ${data}\n\n`)
  const timer = setInterval(() => send(JSON.stringify({ type: 'heartbeat', ts: Date.now() })), 20000)
  req.on('close', () => clearInterval(timer))
})

app.listen(3000)

WebRTC data channels

Use WebRTC data channels to enable peer-assisted updates, e.g. sharing local sensor or probe data between nearby drivers to reduce server load and latency. WebRTC still needs a signaling layer (often a WebSocket).

In dense urban scenarios, WebRTC can help offload update propagation, but requires careful NAT traversal and privacy rules.

WebTransport and QUIC

As QUIC and WebTransport roll out, expect lower tail latency and better loss recovery for mobile networks. Evaluate it for high-throughput, low-latency backchannels; however, browser support and intermediaries are still evolving in 2026. See notes on edge-sync and low-latency workflows for how to prepare your stack.

Putting it together: architecture patterns

Below is a production-grade flow that balances type safety, scalability, and resiliency.

  1. Ingest layer: edge Tier ingests telemetry from probes (mobile apps, vehicles). Validate and map to canonical protobuf events using a lightweight TypeScript service.
  2. Broker: publish to Kafka/NATS/Redis Streams partitioned by road segment or geographic tile. Brokers provide replay and consumer groups for horizontal scaling.
  3. Processor: stream processors (kafka-streams, Flink, or a Node consumer) compute deltas, apply rate limiting, enrich with map data, and emit compact protobuf messages.
  4. Fanout: a WebSocket farm consumes processed topics and pushes to connected clients. Sticky sessions or a global connection manager (e.g. Cloudflare Durable Objects / serverless patterns) ensure ordering and session affinity.
  5. Client: subscribe to tiles or routes, receive snapshots and deltas, reconcile using seq numbers and apply idempotent updates.
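The geographic partitioning in step 2 needs a deterministic partition key. A standard slippy-map tile index works well; the function below is a sketch (zoom 12 is an arbitrary but reasonable default giving roughly neighbourhood-sized partitions):

```typescript
// Sketch: derive a broker partition key from a coordinate using the
// standard slippy-map (Web Mercator) tile formula.
export function tileKey(lat: number, lng: number, zoom = 12): string {
  const n = 2 ** zoom
  const x = Math.floor(((lng + 180) / 360) * n)
  const latRad = (lat * Math.PI) / 180
  const y = Math.floor(
    ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) * n
  )
  return `tile/${zoom}/${x}/${y}`
}
```

Every probe reading and every client subscription hashes to the same key for the same area, so ordering is preserved per tile without global coordination.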

Snapshot + delta reconciliation

When a client connects, send a full snapshot with a base seq number. Then stream deltas with monotonically increasing seq. On reconnect, the client requests any missed deltas; if they fall outside retention, server sends a new snapshot. Planning snapshots and deltas with latency budgeting in mind helps keep recovery fast during spikes.
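The server side of that decision can be sketched as a bounded replay buffer: resume requests inside retention get the missed deltas, anything older forces a fresh snapshot. Names and the retention policy here are illustrative:

```typescript
// Sketch: server-side replay buffer backing snapshot + delta reconciliation.
type Delta = { seq: number; changes: unknown[] }

export class ReplayBuffer {
  private deltas: Delta[] = []
  constructor(private retention = 1000) {}

  push(d: Delta) {
    this.deltas.push(d)
    if (this.deltas.length > this.retention) this.deltas.shift()
  }

  // Returns the missed deltas, or null meaning "send a full snapshot instead".
  resumeFrom(lastSeq: number): Delta[] | null {
    const oldest = this.deltas[0]
    if (!oldest || lastSeq < oldest.seq - 1) return null // aged out of retention
    return this.deltas.filter(d => d.seq > lastSeq)
  }
}
```

On reconnect the client sends its last applied seq; a null result is the server's cue to restart the client from a snapshot.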

Backpressure and acknowledgements

For critical incidents, require client ACKs for guaranteed delivery. For high-volume regular traffic updates, use unacknowledged best-effort streams and rely on sequence reconciliation. Monitor and plan for backpressure as described in latency and load playbooks like latency budgeting.

TypeScript patterns for safety and extensibility

Use generics and mapped types to build a typed dispatch and request-response system so handlers are impossible to call with wrong payloads.


// typed-rpc.ts
export type RequestMap = {
  'subscribe': { area: string }
  'unsubscribe': { area: string }
  'getSnapshot': { area: string }
}

export type ResponseMap = {
  'subscribed': { area: string }
  'snapshot': { area: string; snapshotSeq: number }
}

export type Req<K extends keyof RequestMap = keyof RequestMap> = { op: K; payload: RequestMap[K] }
export type Res<K extends keyof ResponseMap = keyof ResponseMap> = { op: K; payload: ResponseMap[K] }

// ensure handlers implement signatures
type Handler<K extends keyof RequestMap> = (payload: RequestMap[K]) => Promise<void>

const handlers: { [K in keyof RequestMap]?: Handler<K> } = {}

function register<K extends keyof RequestMap>(op: K, h: Handler<K>) { handlers[op] = h }

// usage checks at compile time
register('subscribe', async p => { console.log(p.area) })

Combine these patterns with protobuf-generated types for wire-level safety and developer ergonomics.

Scaling and operational concerns

Below are pragmatic strategies used by teams operating traffic feeds at scale.

  • Partition by geography: partition topics by tile or road region to confine state and reduce cross-talk.
  • Sticky routing: route clients to the same server for a subscription (or use Durable Objects) to maintain seq ordering without global coordination.
  • Use compact encoding: protobuf or CBOR reduces bandwidth and CPU compared to JSON; also consider delta compression for frequent small changes. Infrastructure cost and tiering guidance can be found in discussions of cost-aware tiering.
  • Retain recent history in a fast cache (Redis or in-memory LRU) for snapshots and missed delta replay.
  • Observe broker lag: use metrics to alert when consumer lag grows, which predicts client staleness; pair this with model/processing observability patterns like operationalized observability.
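The fast-cache bullet can be sketched with a small LRU keyed by tile: the latest snapshot per tile is served to reconnecting clients without touching the broker. `SnapshotCache` and its capacity are illustrative:

```typescript
// Sketch: LRU cache of the latest snapshot per tile. JavaScript Map
// preserves insertion order, which gives cheap LRU semantics.
export class SnapshotCache<V> {
  private map = new Map<string, V>()
  constructor(private capacity = 500) {}

  get(tile: string): V | undefined {
    const v = this.map.get(tile)
    if (v !== undefined) {
      // refresh recency: re-insert at the end of the iteration order
      this.map.delete(tile)
      this.map.set(tile, v)
    }
    return v
  }

  set(tile: string, snapshot: V) {
    this.map.delete(tile)
    this.map.set(tile, snapshot)
    if (this.map.size > this.capacity) {
      const oldest = this.map.keys().next().value // least recently used
      if (oldest !== undefined) this.map.delete(oldest)
    }
  }
}
```

In production you would back this with Redis for cross-instance sharing; the eviction logic stays the same.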

Versioning and schema evolution

Always design for schema evolution. With protobuf:

  • Add new fields with new field numbers and default values.
  • Favor optional wrappers for fields that may be removed.
  • Maintain the discriminant 'type' field in union cases for easy routing.

On the application side, write defensive handlers that ignore unknown fields and provide fallback behavior.
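A defensive handler can be sketched as a router with an explicit fallback branch, so an old client receiving a type introduced by a newer server degrades gracefully instead of throwing. The `Known` union and `roadworks-v2` type below are illustrative:

```typescript
// Sketch: route known event types, divert unknown ones to a fallback.
type Known = { type: 'incident'; roadId: string } | { type: 'heartbeat' }

export function routeEvent(
  raw: { type: string } & Record<string, unknown>,
  onKnown: (e: Known) => void,
  onUnknown: (type: string) => void,
) {
  switch (raw.type) {
    case 'incident':
      onKnown({ type: 'incident', roadId: String(raw.roadId ?? '') })
      break
    case 'heartbeat':
      onKnown({ type: 'heartbeat' })
      break
    default:
      onUnknown(raw.type) // e.g. log once and ignore
  }
}
```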

Security and privacy

Traffic data often touches personal data or location information. Follow these rules:

  • Use TLS everywhere and validate origins on WebSocket handshakes.
  • Minimize PII in telemetry; use hashed identifiers or ephemeral IDs.
  • Rate limit and authenticate clients with short-lived JWTs tied to a subscription scope — see identity guidance in Identity is the Center of Zero Trust.
  • Audit who accessed incident details; keep an access log for compliance.
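The rate-limit rule above is commonly implemented as a per-client token bucket, keyed by the authenticated subject from the JWT. Capacity and refill rate here are illustrative:

```typescript
// Sketch: token-bucket rate limiter. Time is passed in explicitly
// (epoch ms) so the logic is deterministic and easy to test.
export class TokenBucket {
  private tokens: number
  private last: number
  constructor(private capacity = 10, private refillPerSec = 5, now = 0) {
    this.tokens = capacity
    this.last = now
  }

  allow(now: number): boolean {
    const elapsedSec = (now - this.last) / 1000
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec)
    this.last = now
    if (this.tokens < 1) return false
    this.tokens -= 1
    return true
  }
}
```

Check `allow(Date.now())` on every inbound client command (subscribe, telemetry, ACK) before processing it.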

Testing contracts and resilience

A few practical testing approaches:

  • Contract tests: auto-run protobuf codegen in CI and verify wire-level compatibility between server and client builds.
  • Chaos testing: inject packet loss, reorder, and disconnections to validate snapshot+delta reconciliation.
  • Load tests: simulate rush-hour spikes; measure consumed client throughput and broker lag. See latency budgeting notes for test design ideas.
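The property that chaos tests with injected reordering should assert is "deltas apply strictly in seq order, and gaps stall until filled". A minimal sketch of that invariant:

```typescript
// Sketch: buffer out-of-order deltas and apply only the contiguous
// run starting at startSeq; anything after a gap waits.
export function applyInOrder(
  deltas: Array<{ seq: number; value: number }>,
  startSeq: number,
): number[] {
  const buffer = new Map(deltas.map(d => [d.seq, d]))
  const applied: number[] = []
  let next = startSeq
  while (buffer.has(next)) {
    applied.push(buffer.get(next)!.value)
    buffer.delete(next)
    next++
  }
  return applied
}
```

A chaos test then shuffles or drops deltas before calling this and asserts the applied sequence is always a prefix in seq order.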

Monitoring and observability

Instrument these metrics: active connections, per-client event rates, broker consumer lag, event processing latency, and dropped messages. Correlate with sampling traces across ingest → processor → fanout. If you're standardizing an observability playbook for edge-assisted workflows, see edge visual & observability playbooks.
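The broker-lag metric reduces to comparing the latest produced offset against the consumer's committed offset. The helper below is a sketch; the offset sources and threshold are assumptions to wire to your broker's admin API:

```typescript
// Sketch: derive a lag figure and a staleness flag from broker offsets.
export function lagStatus(
  producedOffset: number,
  committedOffset: number,
  warnAt = 1000,
): { lag: number; stale: boolean } {
  const lag = Math.max(0, producedOffset - committedOffset)
  return { lag, stale: lag >= warnAt }
}
```

Emit `lag` as a gauge per partition and alert on `stale`; growing lag predicts client staleness before users notice it.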

"In production, most outages come from unexpected backpressure and schema drift, not network glitches." — lessons from operators

Example: end-to-end minimal flow

Here's a compact sequence to implement rapidly in TypeScript for a prototype:

  1. Define protobuf schema and generate TypeScript types.
  2. Use a light Node service to ingest telemetry and publish protobuf messages to Redis Streams.
  3. Run a worker that computes deltas and pushes to a WebSocket farm which broadcasts protobuf binaries to subscribed clients.
  4. Client decodes via protobuf-ts and applies events using discriminated union helpers.

This setup is production-adjacent: compact, typed, and horizontally scalable.

Looking forward, expect a few shifts that change implementation details but not core patterns:

  • WebTransport maturing will offer better performance for lossy mobile links; plan your abstraction so transports can be swapped.
  • Edge compute improvements let you do more preprocessing at the edge, reducing central load for regionally-relevant alerts — see edge-sync & low-latency workflows.
  • Better TypeScript protobuf tooling will further shrink the gap between compile-time types and runtime validation; adopt codegen early to keep drift low. If you're weighing build vs buy decisions for tooling, this framework may help.

Actionable checklist to get started this week

  1. Define your event schema with protobuf and run ts-proto codegen in CI.
  2. Implement snapshot + delta in a small Node service and expose a WebSocket endpoint.
  3. Integrate Redis Streams or Kafka for simple replay and partitioning.
  4. Build a typed client using generated TypeScript and add reconnection logic with seq reconciliation.
  5. Load-test with realistic probe patterns and add monitoring for consumer lag.

Closing thoughts

Real-time traffic and incident reporting demands a balance of low latency, strong typing, and operational resilience. With TypeScript and protobuf-based contracts, you can achieve predictable, maintainable systems in 2026 without sacrificing performance. Start small with a snapshot + delta model, and iterate toward edge-assisted distribution and advanced transports like WebTransport when you need the next level of latency.

Ready to build your first typed real-time pipeline? Try generating your TypeScript types from a small traffic.proto and wire up a minimal WebSocket fanout this week. If you want, use the code snippets above as a scaffold and expand with Raspberry Pi clusters or edge nodes for preprocessing.

Call to action

If you liked this guide, download the sample repo that includes protobuf examples, TypeScript codegen, and a minimal WebSocket fanout implementation to kickstart your project. Join the TypeScript real-time community to share patterns and get feedback on your event schema.


Related Topics

#realtime #architecture #mapping

Contributor

Senior editor and content strategist. Writing about technology, design, and the future of digital media. Follow along for deep dives into the industry's moving parts.
