Replace LocalStack with Kumo: Fast, lightweight AWS emulation for TypeScript integration tests

Alex Morgan
2026-05-03
21 min read

A practical guide to replacing LocalStack with Kumo for fast TypeScript AWS integration tests using Docker, CI, persistence, and S3/DynamoDB.

If your TypeScript test suite uses the AWS SDK v3, you already know the tradeoff: mocking every call is fast but brittle, while hitting real AWS is accurate but slow and expensive. Kumo is the kind of tool that sits in the middle and makes integration tests practical again. It is a lightweight AWS service emulator written in Go, designed for CI/CD testing and local development, with optional data persistence and Docker support. For teams looking for a LocalStack alternative that starts quickly and stays simple, Kumo is worth serious attention.

In this guide, we will walk through a real TypeScript setup using the AWS SDK v3, Docker, and CI pipelines. We will also cover when to enable persistence, how to structure test data, what Kumo does well, and where it differs from heavier emulators like LocalStack. If you are thinking more broadly about developer workflow choices, the same evaluation mindset applies to other infrastructure decisions too, from hybrid workflows for local, cloud, and edge tools to choosing the right level of automation in a test environment.

Why teams replace LocalStack in the first place

Local speed matters more than feature breadth for most test suites

LocalStack is powerful, but many teams do not need a full AWS simulation for every integration test. In practice, most test suites depend on a small subset of services such as S3, DynamoDB, SQS, and maybe EventBridge or SNS. When your suite runs hundreds of tests in parallel, startup time, memory consumption, and container complexity become the bottleneck. Kumo’s value proposition is straightforward: a lightweight AWS emulator with fast startup and minimal resource usage, which makes it better suited to frequent local runs and CI jobs.

This is similar to the way engineers often adopt the least complex tool that satisfies the use case. You see that same pattern in product and infrastructure decisions across software: teams avoid overbuying “AI” they do not need, just as they avoid an oversized emulator when a slim one will do. If you want a broader lens on choosing tools that actually earn their keep, see A Creator’s Guide to Buying Less AI.

CI reliability is often the hidden reason to switch

Many LocalStack frustrations show up only after the test suite is wired into CI/CD. Containers can take longer to boot under constrained runners, ports can collide, and tests can become flaky when the emulator is still warming up. Kumo is marketed as having no authentication required, which is a practical advantage for ephemeral CI environments where you want deterministic test startup and minimal config. That no-auth design also reduces the number of secrets and permissions you need to manage in the pipeline.

For teams that care about secure automation, it is helpful to compare this mindset with more general deployment discipline, such as the ideas in our trust-first deployment checklist for regulated industries. Even though integration-test infrastructure is not production infrastructure, the same principles apply: keep credentials out of the loop unless they are absolutely necessary.

Kumo is a fit for pragmatic, service-level integration tests

Kumo is especially useful when your tests need to verify how your application behaves against AWS-like APIs, but not every edge of AWS behavior. The strongest fit is “service-level integration”: your app writes an object to S3, stores metadata in DynamoDB, publishes a message to SQS, or triggers an event. That is where emulation gives you confidence without the cost of real cloud dependencies. If your architecture leans heavily on simple service interactions, Kumo can keep tests local, fast, and reproducible.

For a broader perspective on balancing fidelity and cost in architecture choices, compare the idea with our guide to low-cost, high-impact cloud architectures. The common theme is the same: use enough infrastructure to prove the workflow, not so much that your test environment becomes its own project.

What Kumo supports and what that means for TypeScript teams

S3 and DynamoDB are the practical center of gravity

Kumo’s published service list is broad, but for TypeScript integration tests the most valuable services are usually S3 and DynamoDB. Those two services cover a huge number of real-world application patterns: uploads, document processing, metadata storage, job tracking, idempotency keys, and event-driven workflows. If your app uses AWS SDK v3 clients for these services, you can usually adapt your code to point to Kumo with a small configuration change.

That matters because AWS SDK v3 is modular and dependency-friendly. You can instantiate only the clients you need, inject the endpoint at runtime, and keep your test code close to production code. That pattern reduces the gap between local tests and real deployments. It also makes it easier to swap in Kumo for integration tests while keeping the same client abstractions you use in production.

The emulator’s breadth can help, but depth is still a consideration

Kumo supports many AWS services, including SQS, SNS, Lambda, EventBridge, API Gateway, CloudWatch, IAM, and more. That makes it attractive for teams with event-driven systems or broader AWS coverage needs. Still, “supported” does not always mean “identical to AWS,” and no emulator should be treated as a perfect substitute. In practice, the more exotic the AWS feature, the more likely you are to encounter gaps in edge behavior, response shape, timing, or validation rules.

That is why a good integration-test strategy starts with the core workflows your product depends on. If you are validating uploads, queueing, and state transitions, Kumo can add a lot of confidence. If you are testing precise IAM policy evaluation, advanced Step Functions behavior, or obscure S3 edge cases, you should expect to validate those separately against AWS itself or via contract tests.
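One lightweight way to keep the same suite runnable against both Kumo and real AWS is a single configuration resolver that reads the environment. Below is a minimal sketch, assuming the `AWS_ENDPOINT_URL` and `AWS_REGION` variables used elsewhere in this article; the function and interface names are illustrative, not a Kumo or AWS SDK API:

```typescript
// Resolve client configuration from the environment. When AWS_ENDPOINT_URL
// is unset, clients fall back to real AWS; when it points at an emulator,
// the same tests run locally. Names here are illustrative.
export interface AwsTestConfig {
  region: string;
  endpoint?: string;
  forcePathStyle?: boolean;
}

export function resolveAwsClientConfig(
  env: Record<string, string | undefined> = process.env
): AwsTestConfig {
  const endpoint = env.AWS_ENDPOINT_URL;
  return {
    region: env.AWS_REGION ?? "us-east-1",
    // Only set emulator-specific options when an endpoint override exists,
    // so the production configuration stays untouched.
    ...(endpoint ? { endpoint, forcePathStyle: true } : {}),
  };
}
```

With this in place, pointing a "real AWS" smoke-test job at production is just a matter of not setting the endpoint variable.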

Persistence changes how you think about stateful tests

Kumo offers optional data persistence via KUMO_DATA_DIR, which is a useful feature when you want state to survive restarts. This is especially handy for local development and debugging, where you may want to inspect test data after a failure. Persistence is not always desirable, though. In most automated tests, persistent state can make suites order-dependent, hide setup mistakes, and create cleanup debt. For that reason, persistence should be treated as an opt-in feature for specific workflows, not the default for all tests.

When teams compare persistence strategies, they often run into the same design question found in other tooling areas: do you want a disposable environment or a long-lived workspace? The answer depends on whether you are optimizing for isolation or debugging. A similar tension appears in real-time notification systems, where speed and reliability must be balanced against the complexity of state handling.

| Dimension | Kumo | LocalStack | Practical takeaway |
| --- | --- | --- | --- |
| Startup time | Very fast | Heavier, often slower | Kumo is better for frequent CI runs and local iteration. |
| Resource usage | Lightweight | More resource-intensive | Lower RAM and CPU usage helps on shared runners. |
| Setup complexity | Simple binary or Docker | Broader configuration surface | Kumo is easier to introduce into small teams. |
| Service breadth | Broad, with core AWS coverage | Very broad and mature | LocalStack may still win for unusual service combinations. |
| Persistence | Optional via data dir | Available in some workflows | Use persistence only when debugging or building fixture workflows. |
| CI suitability | Strong | Strong but heavier | Kumo is a good fit when runner resources are limited. |

Setting up Kumo in a TypeScript project

Install and run Kumo with Docker

The simplest way to get started is to run Kumo in Docker and point your AWS SDK v3 clients at its endpoint. This is usually the cleanest option for cross-platform teams because it avoids installing a local binary on every developer machine. In CI, a container-based approach is also easy to standardize because the service starts the same way every time.

Example Docker run pattern:

docker run --rm -p 3000:3000 sivchari/kumo:latest

Once it is running, configure your app or test harness to override the endpoint for S3, DynamoDB, or any other service you are testing. If you work in environments with many moving parts, the container pattern mirrors other predictable deployment choices like those outlined in trust-first deployment checklist for regulated industries and the operational thinking behind hybrid workflows for cloud, edge, or local tools.

Wire AWS SDK v3 clients to the emulator

In TypeScript, your clients should be created in a way that accepts an endpoint override. That keeps production and test logic aligned while letting the test suite point to Kumo. The common pattern is to read environment variables in test mode and pass them into the client constructor. This works especially well with dependency injection, because you can centralize the configuration and avoid scattering emulator-specific logic throughout your codebase.

import { S3Client } from "@aws-sdk/client-s3";
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";

// In tests, AWS_ENDPOINT_URL points at the Kumo emulator; in production it
// is unset, so the SDK falls back to the real AWS endpoints.
const endpoint = process.env.AWS_ENDPOINT_URL;
const region = process.env.AWS_REGION ?? "us-east-1";

export const s3 = new S3Client({
  region,
  endpoint,
  // Path-style addressing avoids virtual-host bucket URLs, which rarely
  // resolve against a local emulator.
  forcePathStyle: true,
});

export const dynamo = new DynamoDBClient({
  region,
  endpoint,
});

For S3, forcePathStyle: true is often helpful in emulated environments because it avoids virtual-host style assumptions that complicate local testing. You should keep this configuration in a test-only path or environment-specific factory rather than hardcoding it universally. That way, your production configuration stays clean while your tests remain stable.

Build a thin test harness around common setup and teardown

The biggest mistake teams make with emulator-based tests is repeating setup in every file. Instead, create reusable helpers that provision a bucket, create a DynamoDB table, seed a few objects or items, and clean up afterward. Your TypeScript harness can expose functions like createTestBucket(), putFixtureObject(), and seedUserRecord(). The goal is to make every test concise, explicit, and easy to read while still preserving the AWS interaction surface.
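Helper names like createTestBucket() can be wired through a small resource registry so every test tears down exactly what it created, in reverse order. The sketch below assumes the actual create/delete calls are passed in, so the registry itself stays free of AWS client details; the class and method names are illustrative, not a Kumo API:

```typescript
// A tiny per-test resource registry: each helper registers an undo action,
// and dispose() runs them last-in-first-out, even if some of them throw.
type Cleanup = () => Promise<void>;

export class TestResources {
  private cleanups: Cleanup[] = [];

  // Wrap a setup/teardown pair so they always travel together.
  async create<T>(
    setup: () => Promise<T>,
    teardown: (resource: T) => Promise<void>
  ): Promise<T> {
    const resource = await setup();
    this.cleanups.push(() => teardown(resource));
    return resource;
  }

  // Run cleanups in reverse order; collect failures instead of stopping early
  // so one broken teardown cannot leak every later resource.
  async dispose(): Promise<void> {
    const errors: unknown[] = [];
    for (const cleanup of this.cleanups.reverse()) {
      try {
        await cleanup();
      } catch (err) {
        errors.push(err);
      }
    }
    this.cleanups = [];
    if (errors.length > 0) {
      throw new Error(`cleanup failed: ${errors.length} error(s)`);
    }
  }
}
```

A hypothetical createTestBucket(resources, s3) would then call resources.create() with the SDK's CreateBucket and DeleteBucket commands, and the suite's afterEach simply awaits resources.dispose().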

Use test helpers the way you would structure other stateful developer workflows: clear boundaries, reusable primitives, and a predictable cleanup path. If you are also standardizing broader collaboration patterns, there is a useful analogy in our in-house talent playbook, where the value comes from repeatable systems rather than one-off heroics.

Docker and CI/CD setups that actually work

Local development with Docker Compose

For local development, Docker Compose is often the easiest way to run Kumo alongside your application. You can create one service for Kumo and one for your TypeScript app or test runner, then use service DNS names instead of localhost if your tests are running inside containers. This approach is stable, repeatable, and easy to document for new contributors. It also helps eliminate the “works on my machine” problem that usually appears when someone has a different local emulator version or port mapping.

A good Compose setup also gives you a place to mount a persistent data directory when you want to inspect state between restarts. That is especially useful for debugging S3 object flows or DynamoDB mutations. When your team needs to compare stateful and stateless runs, this local pattern becomes a quick diagnostic tool instead of a guess-and-check exercise.
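A Compose file for this setup can stay small. The sketch below reuses the image tag from the docker run example earlier in this article; the in-container data path (/data) and the tests service layout are assumptions you should check against Kumo's documentation and your own project structure:

```yaml
# Sketch of a docker-compose.yml for local development with Kumo.
services:
  kumo:
    image: sivchari/kumo:latest
    ports:
      - "3000:3000"
    # Optional persistence for debugging; remove the environment and
    # volumes blocks for disposable, stateless runs.
    environment:
      KUMO_DATA_DIR: /data
    volumes:
      - ./.kumo-data:/data

  tests:
    build: .
    depends_on:
      - kumo
    environment:
      AWS_REGION: us-east-1
      # Inside the Compose network, use the service name, not localhost.
      AWS_ENDPOINT_URL: http://kumo:3000
    command: npm test
```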

CI pipelines should default to disposable environments

In CI, the default should be a fresh Kumo instance with no persistence. Integration tests are most trustworthy when they create exactly the state they need and then tear it down or discard the container. This prevents test pollution between jobs and keeps reruns deterministic. If you are running parallel test shards, isolation matters even more because shared state can create false positives or nondeterministic failures.

It is often useful to think of CI resources the same way teams think about operational risk in other domains: a clean baseline reduces hidden coupling. For a broader risk-management analogy, the mindset is similar to maintaining an IT project risk register or building safe orchestration patterns for multi-step workflows. The more reproducible the environment, the easier it is to trust failures.

Sample GitHub Actions workflow

A typical CI setup launches Kumo as a service container, waits briefly for readiness, then runs your TypeScript tests. Use environment variables for the endpoint, region, and any test-specific bucket or table names. Avoid hardcoding the emulator URL deep in application code. The application should not know whether it is speaking to AWS or to an emulator; that distinction belongs in infrastructure configuration.

name: test
on: [push, pull_request]
jobs:
  integration:
    runs-on: ubuntu-latest
    services:
      kumo:
        image: sivchari/kumo:latest
        ports:
          - 3000:3000
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
      - run: npm ci
      - run: npm test
        env:
          AWS_REGION: us-east-1
          AWS_ENDPOINT_URL: http://localhost:3000

In some workflows, you may want to add a small health check or readiness wait before starting the suite. That extra step prevents race conditions where the tests begin before the emulator is listening. If your pipelines already use service containers widely, this pattern will feel familiar and low risk.
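That readiness wait can be a few lines of TypeScript. Here is a sketch with the probe injected as a function, so the retry logic is testable without a running emulator; in real use the probe would be a fetch against the Kumo endpoint (whether Kumo exposes a dedicated health route is an assumption to verify):

```typescript
// Poll a readiness probe until it succeeds or the deadline passes.
// The probe is injected so this works for HTTP checks, TCP checks, etc.
export async function waitForReady(
  probe: () => Promise<boolean>,
  { timeoutMs = 30_000, intervalMs = 250 } = {}
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    try {
      if (await probe()) return;
    } catch {
      // Treat connection errors as "not ready yet" and keep polling.
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`service not ready after ${timeoutMs}ms`);
}
```

In the test harness, a usage might look like `await waitForReady(async () => (await fetch("http://localhost:3000")).status < 500);`, adjusted to whatever endpoint your emulator actually answers on.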

When to enable persistence and when to avoid it

Enable persistence for debugging and local exploratory work

Kumo persistence is most valuable when you are investigating a bug and want to see the exact post-test state after a failure. It also helps when you are developing new integration fixtures and do not want to rebuild all seed data from scratch every time. For example, if a workflow touches several buckets and tables, persistence can help you inspect intermediate results and understand where the application deviated from expectations. In that sense, persistence is a productivity feature for humans, not a default setting for automated pipelines.

A practical rule is to enable persistence only for local development or a dedicated debugging profile. When you do, set KUMO_DATA_DIR to a clearly named folder and document what lives there. That keeps persistent state visible and intentional instead of accidental.

Avoid persistence for normal integration tests

Most integration suites should remain stateless across runs. If test state survives restarts, it becomes easy to forget to clean up or to accidentally rely on data from a previous run. That can hide bugs in setup logic and make failures hard to reproduce. Stateless tests are more boring, but they are dramatically better for correctness and parallelism.

This is where disciplined fixture design becomes important. Each test should create its own bucket, table, or object prefix using a unique identifier, run assertions, and remove resources when done. If the emulator is being used as part of a larger product workflow, you might borrow the same design discipline that publishers use in fast-turn systems, such as the approach described in fast-turn editorial briefings: structure the workflow so every run has a clear beginning and end.

Use persistence sparingly in CI artifacts and diagnostics

There are exceptions. If a CI failure is expensive to reproduce, you may choose to preserve Kumo data as an artifact for a failed job. That can be useful in debugging rare data-shape issues or chasing down race conditions. But this should be a targeted diagnostic practice, not the everyday operating mode. In most teams, the default CI posture should be disposable, while debugging workflows can opt into persistence temporarily.

Pro tip: Treat persistence like a debugger breakpoint, not a normal runtime mode. If you find yourself depending on persistent state for tests to pass, your test isolation is probably too weak.

Test data strategies for S3 and DynamoDB

Use namespaced prefixes and per-test identifiers

The easiest way to keep emulator-based tests reliable is to namespace everything. For S3, use a unique prefix per test run, suite, or worker. For DynamoDB, include a test-run identifier in your partition keys or use test-specific table names if your setup speed allows it. This reduces collisions, makes cleanup easier, and lets you inspect a test run without confusing it with another one.
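A small helper keeps this namespacing uniform across S3 buckets, object keys, and DynamoDB partition keys. A sketch under a few assumptions: the function names are illustrative, and GITHUB_RUN_ID is only present on GitHub Actions runners, so a random fallback covers local runs:

```typescript
import { randomUUID } from "node:crypto";

// Build a unique, S3-safe namespace for one test run. Bucket names must be
// lowercase, so inputs are normalized to [a-z0-9-].
export function testNamespace(
  suite: string,
  runId: string = process.env.GITHUB_RUN_ID ?? randomUUID().slice(0, 8)
): string {
  const safe = (s: string) => s.toLowerCase().replace(/[^a-z0-9-]/g, "-");
  return `test-${safe(suite)}-${safe(runId)}`;
}

// S3 object key under the namespace prefix, so cleanup is one prefix delete.
export function objectKey(ns: string, name: string): string {
  return `${ns}/${name}`;
}

// DynamoDB partition key embedding the namespace, so a run's items can be
// found (and deleted) without scanning unrelated data.
export function partitionKey(ns: string, entity: string, id: string): string {
  return `${ns}#${entity}#${id}`;
}
```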

Think of test data like a short-lived production tenant. It should be self-contained, traceable, and easy to delete. That design is similar in spirit to the thinking behind structured sponsored series for niche B2B companies, where the unit of value is the package or container, not an ad hoc pile of assets.

Prefer repeatable fixtures over ad hoc setup

Well-designed fixtures give you predictable input and output. Seed only the data each test needs, and keep fixture definitions close to the test they serve. For S3, that might mean uploading a JSON file, a CSV payload, or a small binary object before invoking your application. For DynamoDB, it might mean creating a user record, a processing job record, or a queue-state row with the exact attributes your code will read.

Repeatable fixtures also make debugging much easier because you can reason about failures without guessing what the environment looked like. If a test depends on a specific shape of data, encode that shape in the fixture. Avoid “mystery state” that only exists because another test happened to set it up first.

Design cleanup as part of the test, not as a separate chore

Cleanup should be automatic, even when the test fails. If your tests create buckets or tables dynamically, make sure teardown happens in afterEach or finally blocks. This matters more with emulators than with mocks because the state is real enough to accumulate and interfere with later runs. A good habit is to make each test responsible for its own namespace and resources so cleanup becomes straightforward and reliable.
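One way to enforce that without depending on a particular test framework is a wrapper that hands the test its resource and guarantees teardown in a finally block. In this sketch, create and destroy are stand-ins for real SDK calls such as CreateBucket and DeleteBucket; the function name is illustrative:

```typescript
// Run a test body against a freshly created resource and always tear it
// down, even when the body throws.
export async function withResource<T>(
  create: () => Promise<T>,
  destroy: (resource: T) => Promise<void>,
  body: (resource: T) => Promise<void>
): Promise<void> {
  const resource = await create();
  try {
    await body(resource);
  } finally {
    // Runs on success and on failure alike, so state never accumulates.
    await destroy(resource);
  }
}
```

A test would then read like `await withResource(() => createTestBucket(s3), (b) => deleteTestBucket(s3, b), async (bucket) => { /* assertions */ });`, keeping setup, assertions, and cleanup in one visible unit.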

There is a broader engineering lesson here: systems are more trustworthy when reset behavior is built in. That idea shows up in other technology domains too, including the way teams design resilient distributed systems and error handling, much like the lessons in error accumulation in distributed systems.

Pitfalls and limitations compared with LocalStack

Do not assume AWS parity on edge cases

The most important pitfall is assuming that because a request succeeds in Kumo, it will behave exactly the same way in AWS. Emulators are incredibly valuable, but they are still approximations. Subtle differences may appear in validation errors, eventual consistency behavior, event timing, metadata handling, or unsupported corner cases. If your application depends on a narrow AWS behavior, validate that path against AWS directly before calling the test suite complete.

That is not a criticism of Kumo; it is the reality of all emulators. The right mental model is “high-confidence development and integration checks,” not “absolute emulation of every AWS nuance.” If you need deeper production confidence, pair emulator tests with a smaller number of real-AWS smoke tests.

Be cautious with advanced workflows and service coupling

Complex AWS workflows often span multiple services, and emulators can differ in how they coordinate those transitions. For example, a workflow that involves S3, SQS, Lambda, and EventBridge is more likely to expose interoperability gaps than a simple single-service CRUD test. This is where the breadth of Kumo is useful, but also where you should expect to validate carefully. The more orchestration you layer on top of AWS primitives, the more you should treat emulator behavior as a first-pass approximation.

If you are building event-driven systems, it can help to test each boundary separately and then compose them in a smaller number of end-to-end scenarios. That lowers the maintenance burden and makes failures easier to localize. It also prevents your test suite from becoming a second distributed system that is harder to reason about than the application itself.

LocalStack may still be better in certain mature AWS-heavy teams

There are cases where LocalStack remains the better choice. If your organization already depends on very broad AWS coverage, has existing scripts and CI patterns around LocalStack, or needs specific fidelity for services Kumo does not yet emulate deeply, switching may not be worth the migration cost. Teams with many developers and large existing test harnesses should evaluate the time saved by Kumo against the cost of rewriting service assumptions and Docker workflows.

That said, many teams do not need to replace everything at once. It is often sensible to pilot Kumo on the S3 and DynamoDB test paths first, then expand if the results are good. This incremental adoption approach mirrors practical upgrade strategies elsewhere, such as the guidance in incremental upgrade planning, where teams prioritize the highest-value changes first.

A practical migration plan from LocalStack to Kumo

Start with one service and one test suite

Do not attempt a full rip-and-replace on day one. Pick the most common integration path, such as S3 uploads or DynamoDB reads and writes, and point that suite to Kumo. Keep the test shape the same while changing only the emulator endpoint and service startup process. This gives you a clean A/B comparison for runtime, stability, and developer experience.

Measure three things: startup time, test duration, and failure rate. If Kumo improves all three, you have a strong case for expanding its use. If it only improves one metric, you may want to keep it as a local developer tool while leaving CI on the current stack until confidence grows.

Document the new workflow for the team

Migration success depends on documentation, not just code. Developers need to know how to start Kumo, how to reset state, how to enable persistence for debugging, and which tests are expected to run against it. Without a clear guide, people will drift back to old habits or create local one-off setups that are hard to support. Put the workflow in your repository README, your CI config comments, or your internal engineering handbook.

For teams that manage content, platform, or developer experience at scale, this is the same reason clear internal playbooks matter. If you are building more durable team systems, our guide on finding in-house talent is a useful reminder that process quality compounds over time.

Keep a small set of real-AWS validation tests

No emulator should eliminate real cloud validation entirely. Instead, keep a small smoke-test layer that runs against AWS with carefully controlled credentials and a limited dataset. That layer should confirm the production account, network, and permission model are still valid. Kumo handles the bulk of your routine integration testing, while AWS smoke tests catch the service-specific differences that emulation cannot perfectly mirror.

This layered model is the same kind of pragmatic balance found in many modern engineering workflows: local speed for iteration, cloud truth for final verification, and clear boundaries between the two. That balance helps teams ship faster without confusing convenience with certainty.

FAQ

Is Kumo a full replacement for LocalStack?

Not always. Kumo is a strong choice when you need a lightweight AWS emulator for core workflows, especially S3 and DynamoDB in TypeScript integration tests. If your stack depends on unusual AWS services, intricate edge behavior, or very mature LocalStack-specific tooling, you should validate those needs before migrating fully.

How do I connect AWS SDK v3 clients to Kumo?

Pass the emulator endpoint into your AWS SDK v3 client constructors, usually through environment variables. For S3, it is also common to set forcePathStyle: true in local tests. Keep the endpoint configuration in a test-only factory so production code does not depend on the emulator.

Should I enable persistence with KUMO_DATA_DIR in CI?

Usually no. CI tests are most reliable when they start from a clean state every time. Enable persistence mainly for local debugging, exploratory work, or special diagnostic workflows where you want to inspect data after a failure.

What test data strategy works best with Kumo?

Use unique prefixes, test-run identifiers, and repeatable fixtures. Keep each test responsible for its own data and clean up after itself. This prevents collisions and makes the suite easier to parallelize.

What are the biggest pitfalls compared with LocalStack?

The biggest pitfall is assuming perfect AWS parity. Emulators can differ in edge cases, orchestration timing, and service interactions. You should always keep a small real-AWS smoke-test layer for the behaviors you care about most.

Can Kumo be run in Docker Compose for local development?

Yes. Docker Compose is a practical way to run Kumo alongside your application and tests. It keeps the environment reproducible, makes onboarding easier, and allows you to add optional persistence for debugging when needed.

Bottom line: use Kumo where speed and simplicity matter most

Kumo is an excellent fit for TypeScript teams that want fast, lightweight AWS emulation without the operational overhead of a heavier stack. It is especially compelling for AWS SDK v3 integration tests that focus on S3, DynamoDB, queues, and event-driven flows. If your current LocalStack setup feels slow, heavy, or overly complex for the tests you actually run, Kumo is worth piloting immediately.

The best migration strategy is incremental: begin with one suite, wire in Docker, keep CI disposable, and use persistence only when it helps you debug. That approach gives you the speed benefits of local emulation while preserving trust in the tests that matter. For further reading on adjacent workflow design patterns, explore our guides on hybrid local/cloud workflows, balancing speed and reliability, and trust-first deployment.


Alex Morgan

Senior TypeScript Editor & SEO Strategist

Senior editor and content strategist. Writing about technology, design, and the future of digital media. Follow along for deep dives into the industry's moving parts.
