From ChatGPT prompt to TypeScript micro app: automating boilerplate generation

2026-01-23 12:00:00
11 min read

Automate typed TypeScript micro apps with LLMs—schema-first generation, repair loops, CI checks, and PR automation for safe, production-ready code.

Stop rewriting boilerplate — teach an LLM to ship a TypeScript micro app

You know the pain: you need a small, typed micro app (a component, an API endpoint, tests, and a PR) but boilerplate eats hours. In 2026, with powerful LLMs like ChatGPT, Claude, and Gemini available via APIs and editor integrations, you can automate that repetitive work—but only if you apply reliable patterns, enforce type safety, and bake review and CI checks into the pipeline.

Why this matters in 2026

LLM tooling is now a first-class part of many developer workflows. Big vendors and open-source tools have converged on structured generation, function-calling, and tighter editor plugins. The result: faster micro app creation and lower entry barriers for non-developers building "vibe-coded" micro apps. But speeding up creation also increases the risk of subtle type bugs, insecure dependencies, and fragile tests. This guide shows practical patterns to use LLMs to generate typed TypeScript code, tests, and commit-ready PRs while reducing risk with runtime validation, CI gates, and human-in-the-loop review prompts.

What you’ll get

  • Repeatable prompt templates for generating typed components, APIs, and tests
  • Practical examples: React component + Zod validation + API route + tests
  • CI and PR automation patterns (GitHub Actions + gh CLI)
  • Safety checks, review prompts, and LLM-based code reviewer recipes

Core pattern: Schema-first, typed-first generation

Always start with a schema. Ask the LLM to emit a precise runtime schema (Zod, io-ts, or TypeBox) and TypeScript types derived from it. That keeps runtime validation and compile-time types in sync and reduces hallucinations.

Why schema-first works

  • Reduces ambiguity: concrete field names and types are explicit.
  • Enables validation: runtime checks catch invalid inputs quickly. See governance best practices for micro apps in Micro Apps at Scale.
  • Makes tests deterministic: you can generate valid/invalid fixtures.

Example micro app: "Where2Eat" style REST endpoint + React card

We’ll generate four artefacts: a Zod schema + TypeScript types, a Next.js API endpoint, a typed React component, and tests (Vitest + @testing-library). Below are the prompt templates and the code you should expect from the LLM.

Prompt: schema-first generation (template)

Generate a Zod schema and matching TypeScript types for a "RestaurantRecommendation" object. Return ONLY JSON with keys: { "schema": "", "types": "" }. Fields:
- id: uuid string
- name: string (max 100 chars)
- cuisine: string
- rating: number (0-5, float)
- priceLevel: 1|2|3|4
- location: { lat: number, lon: number }
- tags?: string[]
Use zod for runtime validation and export a TypeScript type `RestaurantRecommendation`. No explanation.

Example expected response (LLM output):

import { z } from 'zod'

export const RestaurantRecommendationSchema = z.object({
  id: z.string().uuid(),
  name: z.string().max(100),
  cuisine: z.string(),
  rating: z.number().min(0).max(5),
  priceLevel: z.union([z.literal(1), z.literal(2), z.literal(3), z.literal(4)]),
  location: z.object({ lat: z.number(), lon: z.number() }),
  tags: z.array(z.string()).optional(),
})

export type RestaurantRecommendation = z.infer<typeof RestaurantRecommendationSchema>

Generate a typed API route

Next, ask the LLM to generate an API route that uses the schema to validate incoming data and returns typed responses. Request structured output (files keyed by path). Prefer function-calling or JSON output so your tooling can write files automatically.

Prompt: API route + tests

Produce a TypeScript file for a Next.js 14+/Edge API handler POST /api/recommend. It should:
- validate request.body against RestaurantRecommendationSchema
- insert into a mock in-memory store (exported) and return 201 with the created object
- export tests using Vitest that check valid and invalid payloads
Return JSON: { "files": { "src/app/api/recommend/route.ts": "...", "src/lib/store.ts": "...", "tests/api/recommend.test.ts": "..." } }
No extra text.

What to expect in the handler

// src/app/api/recommend/route.ts
import { RestaurantRecommendationSchema } from '../../../schemas/restaurant'
import { NextResponse } from 'next/server'
import { store } from '../../../lib/store'

export async function POST(req: Request) {
  const body = await req.json()
  const result = RestaurantRecommendationSchema.safeParse(body)
  if (!result.success) {
    return NextResponse.json({ error: result.error.format() }, { status: 422 })
  }
  const item = { ...result.data }
  store.add(item)
  return NextResponse.json(item, { status: 201 })
}
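
The handler imports a `store` from `src/lib/store.ts`, which the prompt requests but this article never shows. A minimal sketch of what the LLM should produce, with the shape assumed from the handler's `store.add(item)` call:

```typescript
// src/lib/store.ts — mock in-memory store sketch; the interface is
// assumed from how the handler above uses it.
export interface StoredItem { id: string }

export function createStore<T extends StoredItem = StoredItem>() {
  const items = new Map<string, T>()
  return {
    add(item: T) { items.set(item.id, item) },
    get(id: string) { return items.get(id) },
    all() { return [...items.values()] },
    clear() { items.clear() },
  }
}

export const store = createStore()
```

Keeping the store behind a small factory makes the tests trivial: each test can call `createStore()` for a fresh instance instead of resetting shared state.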

Generate a typed React component

Now ask the model for a presentational component that takes a typed prop and renders a card. Ask for accessibility and PropTypes-equivalent TypeScript props.

Prompt: React component

Generate a React functional component `RecommendationCard` in TypeScript.
- Props: `item: RestaurantRecommendation`
- Use semantic markup and accessible image + alt
- Export a small CSS module example
Return JSON with files: "src/components/RecommendationCard.tsx" and "src/components/RecommendationCard.module.css"

Component example

import React from 'react'
import type { RestaurantRecommendation } from '../schemas/restaurant'
import styles from './RecommendationCard.module.css'

export function RecommendationCard({ item }: { item: RestaurantRecommendation }) {
  return (
    <article className={styles.card} aria-labelledby={"r-" + item.id}>
      <h3 id={'r-' + item.id}>{item.name} <small>({item.cuisine})</small></h3>
      <p>Rating: {item.rating} — Price: {"$".repeat(item.priceLevel)}</p>
      <address>Lat: {item.location.lat}, Lon: {item.location.lon}</address>
    </article>
  )
}

Tests: types + runtime

Ask the LLM to generate both unit tests and type-level tests using tsd or dts-jest. Type-level tests help prevent regressions in exported types. Combine that with runtime tests in Vitest.

Example test checklist

  • Vitest unit tests for API success and validation errors
  • Component snapshot / accessibility test
  • tsd test asserting RestaurantRecommendation contains expected keys/types
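
If you would rather avoid an extra dev dependency like tsd, the same key/type assertions can be written with a small dependency-free `Equals` helper. A sketch (the type is inlined here so the snippet stands alone; in the repo you would import it from the schema module):

```typescript
// Inlined for the sketch; normally imported from src/schemas/restaurant.
type RestaurantRecommendation = {
  id: string; name: string; cuisine: string; rating: number
  priceLevel: 1 | 2 | 3 | 4
  location: { lat: number; lon: number }
  tags?: string[]
}

// Resolves to true only when A and B are identical types.
type Equals<A, B> =
  (<T>() => T extends A ? 1 : 2) extends (<T>() => T extends B ? 1 : 2) ? true : false

// Each line fails to COMPILE if the exported type drifts.
const _priceLevel: Equals<RestaurantRecommendation['priceLevel'], 1 | 2 | 3 | 4> = true
const _tags: Equals<RestaurantRecommendation['tags'], string[] | undefined> = true
console.log('type assertions compiled:', _priceLevel && _tags)
```

Because the assertions are plain constants, `tsc --noEmit` in CI is enough to enforce them; no test runner involvement is needed.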

Automation: Generating commit-ready PRs

Once the LLM produces files, automate branch creation, linting, typecheck, tests, and PR creation. Below is a reliable sequence that works well in 2026 toolchains.

  1. Create a feature branch: feature/llm/where2eat
  2. Write files from LLM output to disk
  3. Run local linters + formatters (prettier, eslint --fix)
  4. Run pnpm typecheck and pnpm test
  5. Commit with a conventional commit message generated by the LLM
  6. Push and create PR using gh or GitHub API; include a checklist and change summary generated by the LLM

Sample CLI glue (node script)

#!/usr/bin/env node
const { execSync } = require('child_process')
const fs = require('fs')
const path = require('path')

// files is a { path: content } object produced by the LLM
function writeFiles(files) {
  for (const [filePath, content] of Object.entries(files)) {
    fs.mkdirSync(path.dirname(filePath), { recursive: true })
    fs.writeFileSync(filePath, content)
  }
}

function run(cmd) { execSync(cmd, { stdio: 'inherit' }) }

run('git checkout -b feature/llm/where2eat')
writeFiles(JSON.parse(process.env.LLM_FILES))
run('pnpm -w install')
run('pnpm -w eslint . --fix')
run('pnpm -w tsc -p tsconfig.json --noEmit')
run('pnpm -w vitest run')
run('git add . && git commit -m "feat: add where2eat recommendation API and components (LLM generated)"')
run('git push -u origin HEAD')
run('gh pr create --fill')
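
One safety refinement worth adding to the glue script: the LLM's file map is untrusted, so the writer should refuse any path that escapes the repo root. A hardened sketch (TypeScript; `root` defaults to the working directory):

```typescript
import { mkdirSync, writeFileSync } from 'node:fs'
import { dirname, resolve, sep } from 'node:path'

// Rejects entries like "../../.ssh/authorized_keys" that a hallucinated
// or malicious file map might contain.
export function writeFilesSafely(files: Record<string, string>, root = process.cwd()) {
  const rootAbs = resolve(root) + sep
  for (const [relPath, content] of Object.entries(files)) {
    const target = resolve(root, relPath)
    if (!target.startsWith(rootAbs)) {
      throw new Error(`refusing to write outside repo root: ${relPath}`)
    }
    mkdirSync(dirname(target), { recursive: true })
    writeFileSync(target, content)
  }
}
```

Resolving every path against the root and comparing prefixes also catches absolute paths, not just `..` traversal.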

CI integration: GitHub Actions template

Include a CI workflow that enforces safety checks produced by the LLM pipeline.

name: LLM Generated Checks
on: [push, pull_request]

jobs:
  lint-test-type:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v2
        with:
          version: 8
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: pnpm
      - run: pnpm install
      - run: pnpm -w eslint . --max-warnings=0
      - run: pnpm -w tsc -p tsconfig.json --noEmit
      - run: pnpm -w vitest run --run
  dependency-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: snyk/actions/setup@master
      - run: snyk test --severity-threshold=high
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}

Safety checklist: avoid the common traps

LLMs can hallucinate APIs, create insecure patterns, or suggest unsafe dependency versions. Integrate these checks in your generator and CI:

  • Runtime validation: require Zod/io-ts checks for all external data. See the governance checklist in Micro Apps at Scale.
  • Type checks: tsc must pass with --noEmit.
  • Tests: unit tests + type-level tests + property-based checks for edge cases.
  • Dependency pinning and scanning: lockfile, Snyk or CodeQL, and reproducible devcontainer. For security deep dives, consult Security & Reliability.
  • Human review: include an LLM-generated reviewer checklist in the PR description.

LLM review prompts: make the model an assistant reviewer

LLMs are great at spotting obvious problems if prompted correctly. Use a two-step review: an automated LLM pass for low-risk issues and a human reviewer for high-risk ones.

Automated LLM reviewer prompt (template)

You are a TypeScript code reviewer. Analyze the following files. For each file, return a JSON with: { file: string, issues: [{ severity: "info|warning|error", message: string, line?: number }], fixes?: [{ patch: string }] }.
Checks:
- Type errors or suspicious any
- Missing runtime validation for external inputs
- Insecure patterns (eval, child_process exec with unsanitized input)
- Missing tests for validation logic
- Heavy or numerous new dependencies (flag if more than two are introduced)
Return only JSON.

Files:
---
<paste files here>
---

Use the reviewer output to annotate PRs automatically or as a checklist for human reviewers.
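
Turning that reviewer output into a merge gate takes only a few lines. A sketch, assuming the JSON shape the prompt above requests:

```typescript
// Parse the reviewer's JSON and fail the check on any "error" issue.
type Severity = 'info' | 'warning' | 'error'
interface ReviewIssue { severity: Severity; message: string; line?: number }
interface FileReview { file: string; issues: ReviewIssue[] }

export function hasBlockingIssues(reviews: FileReview[]): boolean {
  return reviews.some(r => r.issues.some(i => i.severity === 'error'))
}

// Example input as it might come back from the model.
const raw = `[{"file":"src/lib/store.ts","issues":[{"severity":"error","message":"eval on user input","line":12}]}]`
const reviews: FileReview[] = JSON.parse(raw)
console.log(hasBlockingIssues(reviews) ? 'FAIL' : 'PASS')
```

In CI, exit non-zero on `FAIL` to block the merge while letting `info` and `warning` items flow into PR comments for human triage.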

Handling hallucinations

Hallucinations are still the #1 LLM risk for code generation. Mitigate them by:

  • Requesting unit tests and fixtures as part of generation. Tests often expose hallucinated behavior quickly.
  • Using function-calling or structured JSON responses so the LLM can’t silently add unexpected files. Structured outputs and function-calling are covered in pieces like AI annotations and structured outputs.
  • Requiring that each PR include a short section, "Edge cases the LLM might have missed", and having human reviewers confirm it.

Here are strategies that became mainstream by late 2025 and are standard in 2026:

  • Structured outputs everywhere: OpenAI/Anthropic/Gemini function-calling or JSON schemas ensure predictable file outputs. Tools that orchestrate this pattern are explored in AI annotations.
  • LLM toolchains: LangChain-style orchestration that runs tests automatically on generated code and loops back failed test details to the LLM for repair.
  • Editor-integrated generation: VS Code and JetBrains plugins that create a sandbox branch, run local checks, and present a ready PR suggestion in-editor.
  • Type-aware agents: agents that parse your tsconfig and package.json to generate code that respects your monorepo boundaries and path aliases. For edge-first, cost-aware work patterns and microteam guidance, see Edge‑First, Cost‑Aware Strategies for Microteams.

Example: an LLM repair loop

Automate an inner loop: generate → test → fail → repair → repeat. Many teams now run a short workflow where failed tests and type errors are sent back to the LLM with context and a fixed prompt like:

Your previous patch failed these tests:
<paste failing tests and stack traces>
Please propose minimal code changes to fix the failures. Return JSON with modified file paths and patched contents. Include a short explanation of the root cause and how the patch fixes it.
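
The loop itself can be a small driver. A sketch where `runTests`, `generatePatches`, and `applyPatches` are hypothetical stand-ins you wire to your test runner, model client, and file writer:

```typescript
interface Patch { path: string; content: string }
interface TestResult { passed: boolean; failures: string }

// Generate → test → repair loop with a bounded number of attempts, so a
// model that never converges cannot spin forever.
export async function repairLoop(
  runTests: () => Promise<TestResult>,
  generatePatches: (failures: string) => Promise<Patch[]>,
  applyPatches: (patches: Patch[]) => void,
  maxAttempts = 3,
): Promise<boolean> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const result = await runTests()
    if (result.passed) return true
    // Feed the failing output back to the model, apply its minimal patches.
    applyPatches(await generatePatches(result.failures))
  }
  return (await runTests()).passed
}
```

If the loop returns false after the attempt budget, surface the last failure output in the PR rather than merging a half-fixed patch set.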

PR templates: include LLM provenance

Make LLM provenance transparent. Add a PR template with an LLM section so reviewers know what was generated and what was human-reviewed.

## Summary
- What changed

## Generated by
- Model: ChatGPT-4o / Claude-2 / Gemini
- Prompt: <linked prompt file>

## Safety Checklist (auto-generated)
- [ ] runtime validation present
- [ ] typecheck passes
- [ ] tests added
- [ ] dependency scan OK

## Manual review notes
- Areas for human review: security, external API usage, data retention

Case study: micro app for a small team

Context: a small product team in late 2025 used LLM tooling to build a micro-app for internal event matching. They used the above patterns: schema-first generation, Zod validation, type-level tests, and an LLM repair loop. The result:

  • Initial dev time reduced from 2 days to 5 hours
  • 0 runtime type errors in production’s first 2 weeks
  • One near-miss: a transitive dependency upgrade introduced a breaking change; CI blocked the merge thanks to dependency scanning

The team credits success to the safety checklist and automated test loop—they still performed human code reviews for security-sensitive code.

Checklist: What to enforce before merging LLM-generated code

  1. Typecheck passes with --noEmit
  2. Unit tests & type-level tests pass
  3. Runtime validation covers all external boundaries
  4. Dependency scan has no high/critical alerts
  5. PR includes the original prompts and LLM model/version
  6. At least one human review for security-sensitive changes

Prompt bank: quick templates you can use

1) Generate typed files (single call)

Produce a JSON response: { files: { [path]: content } } with a TypeScript Zod schema, React component, API route, and tests for X domain. Use tsconfig paths: { "@/components": "src/components" }. Use minimal dependencies. Explain nothing.

2) Repair loop

Your patch failed tests. Return only JSON: { patches: [{ path, content }] , explanation: string } and keep patches minimal.

3) PR description generator

Given these diffs, produce a PR description with: short summary, testing instructions, rollout plan, risk assessment, and list of files changed. Include an automated checklist for reviewers.

Limitations and best practices

  • Do not fully trust generated cryptography or auth code—human review required.
  • Prefer small, incremental PRs from LLM outputs to reduce blast radius.
  • Lock dependencies and pin Node/TS versions in CI for reproducibility. For observability considerations, see Top Cloud Cost Observability Tools (review).
  • Keep prompts short but specific. Provide code examples when you need a precise shape.

By 2026, LLMs are co-pilots — not autopilots. Treat them like powerful assistants that accelerate the repetitive parts of development, and enforce human-in-the-loop for judgement calls.

Actionable takeaways

  • Always start schema-first: request Zod/io-ts schemas and matching TS types. See governance and scaling notes in Micro Apps at Scale.
  • Enforce CI gates: typecheck, tests, and dependency scans must be required checks. For CI/DevOps playbook patterns, see Advanced DevOps.
  • Use structured outputs: require JSON or function-calling to prevent unexpected content. Function-calling patterns are covered in AI annotations.
  • Automate a repair loop: feed failing tests back to the LLM for minimal patches.
  • Document provenance: include model, prompt, and safety checklist in the PR.

Final notes

LLMs have made micro app creation dramatically faster, as seen in the rise of "vibe-coded" apps and the broader AI-driven tooling ecosystem in 2025–2026. The technical challenge now is not that LLMs can write code, but that teams can safely integrate generated code into production-grade repos. Use schema-first generation, short repair loops, and enforce CI and human review to harness LLM speed without sacrificing reliability.

Call to action

Try this pattern on a small micro app this week: pick a single endpoint, use the schema-first prompt templates above, run the repair loop, and protect merges with type and security checks. If you want a ready-made starter, download an opinionated GitHub Actions + prompt templates repo I maintain (link in the comments) and adapt it to your tsconfig. Share your results and we’ll iterate on the prompt bank together.
