Automated remediation with TypeScript Lambdas: fixing common Security Hub findings
Build safe TypeScript Lambda playbooks to auto-remediate Security Hub findings with audit trails, tests, and rollout guardrails.
Security Hub is most useful when it does more than surface findings—it should help you close the loop. For teams running AWS workloads at scale, that means turning repeated alerts into safe, audited remediation workflows that can act automatically when the risk is well understood. In the AWS Foundational Security Best Practices (FSBP) standard, that often means controls around logging, encryption, key rotation, and IAM hygiene. AWS describes FSBP as a compilation of best practices that continuously evaluates accounts and workloads and gives prescriptive guidance to improve posture, which makes it a strong candidate for automation rather than one-off manual fixes. For broader context on the standard itself, see our guide to AWS Foundational Security Best Practices in Security Hub.
This guide shows how to build TypeScript Lambdas that remediate frequent FSBP findings such as enabling CloudTrail, enforcing encryption, and rotating keys. It focuses on production-grade patterns: safe rollout, dry-run support, audit trails, idempotency, and testability. If you are already thinking about more general automation governance, the same operating model applies to enterprise agentic automation architectures and to explainable action systems like glass-box identity and traceable agent actions. The difference here is that we are dealing with security changes, so reliability and evidence matter as much as speed.
Why automated remediation belongs in your security program
Security findings are operational debt, not just notifications
Most organizations do not struggle because they lack findings; they struggle because findings accumulate faster than humans can respond. Security Hub can continuously evaluate standards and show drift, but a queue of unresolved alerts still leaves the actual risk in place. Automated remediation turns repetitive, deterministic fixes into a machine-operated workflow, which reduces the time between detection and correction. That does not mean every finding should be auto-fixed, but the common, low-ambiguity ones absolutely should be candidates.
A good way to think about it is the same way ops teams think about reliability workflows: some work is high judgment, some is low judgment, and the low-judgment work should be standardized. If you have ever built reliable event delivery systems, the same ideas apply—dedupe, retries, visibility, and exactly-once behavior are difficult, but they are worth solving because they make the system trustworthy. Security automation also benefits from measurement, so it helps to define a baseline, quantify time-to-remediate, and prove that automation reduces mean time exposed. That operational framing is similar to how teams justify change in other domains, like building a business case for replacing manual workflows in paper-workflow modernization.
Which FSBP findings are best for automation
Not every finding belongs in the same playbook. The best candidates are findings with a clear desired end state, limited side effects, and a reversible change. Typical examples include enabling CloudTrail, ensuring EBS or S3 encryption, requiring security logging, and rotating keys older than policy allows. These controls have a predictable success condition, which makes them suitable for Lambda-driven remediation.
By contrast, findings that require business context, app redesign, or migration work should usually go into a human review queue. For example, a public S3 bucket exposing a static website can have legitimate reasons depending on architecture, so the remediation may involve exception handling rather than immediate closure. This is why strong automation programs include policy gates and explicit allowlists. The goal is not to remove humans from security; it is to remove humans from repetitive work that should have a documented, safe default.
What good remediation looks like
Effective remediation is more than “call the API and hope.” It includes intent validation, a dry-run mode, a rollback strategy when possible, and an audit trail that records what happened and why. It should also be idempotent so that reprocessing the same finding does not create duplicate side effects or noisy failures. When you approach remediation this way, you get a durable control plane rather than a fragile script collection.
Pro tip: treat every auto-remediation as a change record. If you cannot answer what changed, who approved it, when it ran, and how to undo it, it is not ready for production automation.
Blueprint: the TypeScript Lambda remediation architecture
Event flow from Security Hub to action
A common architecture is Security Hub finding event → EventBridge rule → Lambda → service API call → audit log → notification. EventBridge filters on the finding type, severity, product name, or control ID so that only eligible findings invoke the function. Lambda then loads a remediation policy, verifies the finding against that policy, and applies the change. Finally, it writes structured audit data to CloudWatch Logs, DynamoDB, or an S3 evidence bucket.
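The filtering step in that flow can be sketched as an EventBridge event pattern expressed as a TypeScript constant. The control IDs and the `ProductFields.ControlId` path are assumptions for illustration; match them against the findings your account actually emits:

```typescript
// EventBridge event pattern that narrows Security Hub findings to the
// controls this playbook handles. Control IDs here are example values.
const remediationEventPattern = {
  source: ["aws.securityhub"],
  "detail-type": ["Security Hub Findings - Imported"],
  detail: {
    findings: {
      Compliance: { Status: ["FAILED"] },
      Severity: { Label: ["HIGH", "CRITICAL"] },
      // Assumed FSBP control IDs eligible for auto-remediation.
      ProductFields: { ControlId: ["CloudTrail.1", "S3.4", "IAM.3"] },
    },
  },
};
```

Attaching this pattern to a rule means the Lambda never sees passing findings or out-of-scope controls, which keeps the decision logic in the function small.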
This pattern works because it separates detection, decision, and execution. You can also extend it with Step Functions when a remediation needs multiple steps, approvals, or long-running retries. Teams already using orchestration patterns will recognize the value of explicit states and durable checkpoints, much like the design tradeoffs discussed in real-time capacity systems and other high-availability workflows. For most simple FSBP fixes, though, a Lambda plus a well-defined state store is enough.
Recommended stack for TypeScript Lambdas
Use TypeScript with AWS SDK v3, a strict tsconfig, and a small runtime footprint. Prefer modular service clients such as @aws-sdk/client-cloudtrail, @aws-sdk/client-iam, and @aws-sdk/client-s3 so that your deployment package stays lean. Add schema validation for incoming events, for example with Zod or similar, so malformed events fail fast and predictably. And use a shared utility layer for logging, idempotency keys, policy lookups, and structured metrics.
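Zod is one option for that validation; as a dependency-free sketch, a hand-rolled guard over the assumed ASFF field paths could look like this:

```typescript
interface FindingEvent {
  awsAccountId: string;
  controlId: string;
  resourceId: string;
  severity: string;
}

// Narrowing guard so malformed events fail fast, before any AWS call.
// The field paths follow the ASFF finding shape; adjust to your events.
function parseFinding(event: unknown): FindingEvent {
  const finding = (event as any)?.detail?.findings?.[0];
  const awsAccountId = finding?.AwsAccountId;
  const controlId = finding?.ProductFields?.ControlId;
  const resourceId = finding?.Resources?.[0]?.Id;
  const severity = finding?.Severity?.Label;
  if (
    typeof awsAccountId !== "string" ||
    typeof controlId !== "string" ||
    typeof resourceId !== "string" ||
    typeof severity !== "string"
  ) {
    throw new Error("Malformed Security Hub finding event");
  }
  return { awsAccountId, controlId, resourceId, severity };
}
```

Failing loudly here is deliberate: a finding the parser cannot understand should never reach the mutation path.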
This is also where infrastructure-as-code matters. Define the Lambda, EventBridge rules, permissions, and evidence resources together so the remediation path is reproducible. If your organization already practices strong release discipline, the operational mindset is similar to automation patterns that replace manual workflows: standardize the process, reduce bespoke steps, and keep every handoff observable. Security automation should feel boring in the best possible way.
Core components and responsibilities
| Component | Responsibility | Why it matters |
|---|---|---|
| Security Hub | Detects FSBP findings | Provides normalized security signals and control IDs |
| EventBridge | Routes eligible findings | Filters for severity, resource type, or specific controls |
| TypeScript Lambda | Executes remediation | Enforces policy, idempotency, and safe side effects |
| DynamoDB or S3 | Stores remediation state and evidence | Supports auditability and deduplication |
| CloudWatch Logs | Records structured traces | Enables troubleshooting and compliance evidence |
Blueprint 1: auto-enable CloudTrail when Security Hub flags missing logging
Why this is a high-value remediation
CloudTrail is foundational because many investigations depend on API history, identity context, and action chronology. If a Security Hub control indicates CloudTrail is not enabled, the risk is not abstract: you may have no authoritative record of who changed what. Automating this remediation is one of the safest and most valuable things you can do because the correct end state is clear and the change is normally reversible. It also creates a strong audit chain for future incidents.
The remediation should distinguish between organization trails and account-level trails, because the correct API calls and IAM permissions differ. It should also verify whether logging is already active before making changes. If you are standardizing on control-based response playbooks, this is the kind of case where automation shines: a single deterministic fix, a clean completion signal, and immediate reduction in exposure.
TypeScript Lambda example for CloudTrail enablement
```typescript
import {
  CloudTrailClient,
  DescribeTrailsCommand,
  StartLoggingCommand,
  CreateTrailCommand,
} from "@aws-sdk/client-cloudtrail";

const client = new CloudTrailClient({});

export async function handler(event: any) {
  const accountId = event.detail?.findings?.[0]?.AwsAccountId;

  // Check for an existing multi-region or organization trail before creating one.
  const trails = await client.send(
    new DescribeTrailsCommand({ includeShadowTrails: false })
  );
  const existing = trails.trailList?.find(
    (t) => t.IsMultiRegionTrail || t.Name === "org-trail"
  );

  if (!existing) {
    await client.send(
      new CreateTrailCommand({
        Name: "org-trail",
        IsMultiRegionTrail: true,
        IsOrganizationTrail: true,
        S3BucketName: process.env.TRAIL_BUCKET!,
      })
    );
  }

  // Idempotent: starting logging on a trail that is already logging is a no-op.
  await client.send(new StartLoggingCommand({ Name: "org-trail" }));
  return { ok: true, accountId };
}
```

In production, this function should not hardcode the bucket or assume the trail exists without verification. It should also write an evidence record that includes the finding ID, action taken, and the final verification result. If you want to model the workflow before production, start with a controlled pilot in a single account and pair it with the principles in rapid incident response playbooks: quick action, clear ownership, and documented outcomes.
Safety checks before enabling logging
Before any CloudTrail change, verify that the target account is in the intended organization boundary and that the Lambda has permission only to create or start the exact trail you expect. Require an explicit allowlist of account IDs, regions, or organizational units. Also ensure the trail destination bucket has the correct bucket policy and encryption settings already in place. A remediation that enables logging but writes into an insecure destination simply moves the problem instead of solving it.
To reduce blast radius, use a two-stage rollout. First, deploy in dry-run mode and log the intended action. Second, enable live execution only for a narrow set of accounts where you can verify side effects quickly. This staged release pattern is common in operational systems and is especially important in security automation, where silent failure can be costly. It resembles the careful rollout logic used when teams replace brittle manual processes with repeatable automation patterns in revenue-critical environments.
Blueprint 2: enforce encryption for storage and data services
The remediation pattern for encryption findings
Encryption findings are among the most common FSBP alerts, and many can be remediated automatically if the resource supports an in-place update. The key idea is to inspect the current resource configuration, determine whether encryption is off, and then apply the correct setting or recreate the resource in a secure form. For example, S3 buckets can have default encryption enabled, EBS volumes can be encrypted by default, and some databases or caches can be updated with encryption-at-rest settings. The exact remediation depends on the service, but the decision model is similar.
Because some encryption changes are disruptive or impossible in place, your playbook should classify findings into safe in-place, safe with restart, and manual migration required. This classification prevents a Lambda from making a well-intentioned but destructive change. It also helps security teams communicate clearly with platform teams about expected impact. The same discipline is visible in systems that balance automation and human oversight, like the practical approaches described in privacy-first personalization architectures.
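Those three classes can be encoded as a discriminated union so the type system forces callers to handle each case. The resource-type strings follow ASFF naming, but the class assigned to each type here is an illustrative assumption for your own policy to review, not settled guidance:

```typescript
type EncryptionRemediation =
  | { kind: "safe-in-place"; action: string }
  | { kind: "safe-with-restart"; action: string }
  | { kind: "manual-migration"; reason: string };

// Illustrative classifier; the mapping below is an assumption to refine.
function classifyEncryptionFinding(resourceType: string): EncryptionRemediation {
  switch (resourceType) {
    case "AwsS3Bucket":
      // Default bucket encryption can be applied without touching objects.
      return { kind: "safe-in-place", action: "put-bucket-encryption" };
    case "AwsRedshiftCluster":
      // Assumed example of a disruptive-but-automatable change.
      return { kind: "safe-with-restart", action: "modify-cluster-encryption" };
    case "AwsRdsDbInstance":
    case "AwsEc2Volume":
      // Encryption cannot be enabled in place; a copy/restore path is needed.
      return { kind: "manual-migration", reason: "requires snapshot copy and restore" };
    default:
      // Unknown resources go to humans rather than risk a destructive default.
      return { kind: "manual-migration", reason: "unclassified resource type" };
  }
}
```

The important design choice is the default branch: anything the catalog does not recognize falls back to human review instead of a guessed action.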
Example: default S3 bucket encryption
```typescript
import {
  S3Client,
  GetBucketEncryptionCommand,
  PutBucketEncryptionCommand,
} from "@aws-sdk/client-s3";

const s3 = new S3Client({});

export async function enforceBucketEncryption(bucket: string, kmsKeyId?: string) {
  try {
    // If a default encryption configuration already exists, change nothing.
    await s3.send(new GetBucketEncryptionCommand({ Bucket: bucket }));
    return { changed: false, reason: "already-encrypted" };
  } catch (err: any) {
    // Only treat a missing configuration as "unencrypted"; rethrow anything
    // else (access denied, throttling) instead of writing blindly.
    if (err?.name !== "ServerSideEncryptionConfigurationNotFoundError") throw err;

    await s3.send(
      new PutBucketEncryptionCommand({
        Bucket: bucket,
        ServerSideEncryptionConfiguration: {
          Rules: [
            {
              ApplyServerSideEncryptionByDefault: {
                SSEAlgorithm: kmsKeyId ? "aws:kms" : "AES256",
                KMSMasterKeyID: kmsKeyId,
              },
            },
          ],
        },
      })
    );
    return { changed: true };
  }
}
```

Notice how this code first tests state and only then changes it. That makes the function idempotent and safe to retry. In a real deployment, you would also verify the bucket owner, preserve existing KMS alias conventions, and ensure the Lambda role can use the selected key. For practical resilience thinking, compare this with systems design work on reliable event architecture: validate, retry safely, and always know your terminal state.
Encryption rollout strategy
Start with default encryption controls that are almost universally safe, then move toward more opinionated controls like KMS-key enforcement. Default encryption is often a low-friction win because it protects data without changing the application contract. KMS enforcement, by contrast, can affect cross-account access patterns, key policies, and application permissions, so it needs a more deliberate rollout. That is why your remediation catalog should include a risk rating per action and a required approval level.
When the fix could break downstream services, use a detect-and-alert mode first. This gives teams time to correct dependencies and prevents surprise outages. Security automation should be progressive, not punitive. In practice, the best programs look more like careful operations engineering than like brute-force compliance enforcement.
Blueprint 3: rotate IAM keys and reduce credential exposure
Why key rotation matters in the remediation catalog
Long-lived access keys are one of the most dangerous forms of technical debt because they can outlive their creators, their original purpose, and the security assumptions of the system. When Security Hub flags old or exposed IAM keys, the remedy should usually be immediate attention and a clear path to rotation. In some environments, the ideal result is full elimination of long-lived keys in favor of roles, federation, or workload identity. Until you get there, automated rotation is an essential control.
A Lambda can safely enforce key rotation policies by detecting keys beyond age thresholds and creating a controlled rotation workflow. That workflow should not simply delete the old key instantly. Instead, it should create a new key, update the consuming system if it is managed, verify the new key works, and only then deactivate or delete the old one. For teams working through identity modernization, this is the same careful thinking that underpins stronger identity and traceability programs, such as the principles in explainable identity actions.
Rotation workflow design
Build rotation as a multi-step state machine rather than a single Lambda call. The steps usually include discover, create, deploy, verify, deactivate, and archive. Each step should write an evidence record so a human can see what changed and where the process might have failed. If a consumer cannot be updated automatically, the workflow should stop and create an actionable ticket rather than forcing a destructive change.
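Those steps can be modeled as an explicit, ordered state machine so a rerun resumes from the last recorded step instead of restarting. The step names simply mirror the list above:

```typescript
// Ordered rotation steps; a durable store would record the last completed one.
const ROTATION_STEPS = [
  "discover",
  "create",
  "deploy",
  "verify",
  "deactivate",
  "archive",
] as const;

type RotationStep = (typeof ROTATION_STEPS)[number];

// Returns the step to run next, or "done" once the old key is archived.
function nextStep(current: RotationStep): RotationStep | "done" {
  const i = ROTATION_STEPS.indexOf(current);
  return i === ROTATION_STEPS.length - 1 ? "done" : ROTATION_STEPS[i + 1];
}
```

With the step order centralized, the evidence writer and the resume logic cannot drift apart, and a stalled rotation is visible as a step that never advances.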
For application keys, a managed secrets distribution path is critical. If you rotate keys without knowing where they are used, you risk outages and hidden fallbacks. This is similar to the risk teams face when they modernize legacy systems without a migration map, which is why structured transition guidance like legacy migration checklists is so valuable. Rotation is a migration problem as much as a security problem.
Prefer role-based access over key-based access
Automation should always push the environment toward roles, OIDC federation, IAM Identity Center, or workload identity wherever possible. A remediation rule that simply rotates a key forever can become a treadmill. A better rule is to classify key-based access as temporary and then progressively reduce the scope of what still depends on it. Your Lambda can even use the finding as a trigger to create a migration task for the owning team, which turns remediation into architectural improvement.
Pro tip: if a key rotation playbook repeatedly fails because a service cannot tolerate credential updates, that is not a rotation bug—it is a signal that the service still depends on an outdated authentication model.
Safe rollout patterns for production remediation
Dry-run, shadow mode, and approval gates
Safe rollout is what separates a useful remediation framework from an incident generator. The first step is dry-run mode, where the Lambda validates the finding and logs the action it would take without changing anything. The second step is shadow mode, where it performs all reads and decisions but writes remediation recommendations instead of executing them. The third step is limited live mode, enabled only for approved accounts or controls. This staged approach protects you from policy mistakes and unexpected resource shapes.
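A minimal gate implementing those three stages might look like the following; the mode value and the account allowlist are assumed to come from configuration rather than being hardcoded:

```typescript
type ExecutionMode = "dry-run" | "shadow" | "live";

interface PlannedAction {
  controlId: string;
  accountId: string;
  description: string;
}

// Decides whether a planned remediation may mutate anything. Dry-run and
// shadow modes never execute; live mode only for allowlisted accounts.
function mayExecute(
  mode: ExecutionMode,
  action: PlannedAction,
  liveAccounts: Set<string>
): boolean {
  if (mode !== "live") return false;
  return liveAccounts.has(action.accountId);
}
```

Routing every mutation through one gate like this means dry-run is enforced structurally, not by each remediation remembering to check a flag.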
For highly sensitive actions, combine automation with a manual approval gate. For example, a CloudTrail enablement might be auto-approved, but a KMS key policy change might require a security or platform engineer to review the plan. A hybrid model is not a weakness; it is a mature control structure. The same discipline is widely used in enterprise operations and risk management, including in frameworks like cyber-resilience risk registers.
Idempotency and deduplication
Security Hub findings can reappear until the control re-evaluates, which means your automation must withstand repeated events. Use finding ARN, control ID, resource ID, and a remediation version as a composite idempotency key. Store that key in DynamoDB with a TTL so duplicate events short-circuit cleanly. This prevents repeated writes, duplicate tickets, and log spam.
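A sketch of that composite key, hashed so raw ARNs never become DynamoDB key material and delimiter collisions are impossible:

```typescript
import { createHash } from "node:crypto";

// Composite idempotency key from finding ARN, control, resource, and
// remediation version. A NUL separator avoids collisions between fields.
function idempotencyKey(
  findingArn: string,
  controlId: string,
  resourceId: string,
  remediationVersion: string
): string {
  const material = [findingArn, controlId, resourceId, remediationVersion].join("\u0000");
  return createHash("sha256").update(material).digest("hex");
}
```

Bumping `remediationVersion` when the playbook logic changes lets an updated remediation re-run against a resource that an older version already touched.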
Also design the workflow so that a partial success can resume safely. For example, if CloudTrail creation succeeds but startup fails due to a permission issue, rerunning the event should continue from the failed step rather than creating a second trail. This is one of the most important implementation details in remediation systems, because retries are not a corner case—they are the normal operating condition in distributed systems.
Blast-radius control and policy boundaries
Restrict the Lambda role to the smallest set of permissions necessary for the supported remediations. A function that enables CloudTrail should not also have broad permissions to edit unrelated resources. Separate functions by control family when possible, or use a policy dispatcher that only grants temporary elevated permissions through a constrained role assumption. You should also limit which Security Hub findings are eligible by account, OU, region, and control ID.
This is the same principle used in disciplined operational scaling: narrow authority, explicit boundaries, and clear escalation paths. Teams that learn this lesson in other technical domains, like capacity planning or large-scale analytics operations, know that broad automation without guardrails creates hidden debt. In security, that debt becomes an audit issue very quickly.
Audit trails, evidence, and compliance alignment
What every remediation event should record
Your audit record should include the finding ID, control name, resource identifier, pre-change state, post-change state, action taken, Lambda version, execution timestamp, and correlation ID. If human approval was involved, record the approver and approval timestamp as well. The audit record should be machine-readable and ideally append-only. That makes it usable both for compliance review and for engineering debugging.
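One way to make those fields hard to forget is a typed record with a small builder; the field names simply mirror the list above, and the version default is an assumption (in a real Lambda you would pass `AWS_LAMBDA_FUNCTION_VERSION`):

```typescript
interface AuditRecord {
  findingId: string;
  controlName: string;
  resourceId: string;
  preChangeState: unknown;
  postChangeState: unknown;
  actionTaken: string;
  lambdaVersion: string;
  executedAt: string; // ISO-8601
  correlationId: string;
  approver?: string;     // present only when a human gate was involved
  approvedAt?: string;
}

// Fills execution metadata centrally so individual remediations cannot omit it.
function buildAuditRecord(
  base: Omit<AuditRecord, "executedAt" | "lambdaVersion">,
  lambdaVersion = "local"
): AuditRecord {
  return { ...base, executedAt: new Date().toISOString(), lambdaVersion };
}
```

Because the compiler rejects a record with a missing required field, the schema itself becomes part of the audit control.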
Store evidence separately from operational logs if possible. Logs are for troubleshooting; evidence is for governance. In mature programs, the evidence store becomes a timeline of security posture improvements that can be queried during audits, internal reviews, or incident retrospectives. This discipline echoes the value of traceability in systems that must explain why a decision was made, not just what happened.
Link remediation to compliance narratives
Security Hub and FSBP findings often map to internal controls and external frameworks. When you automate remediation, you are not just fixing a resource; you are generating proof that your control environment is continuously operating. That proof matters to auditors, risk teams, and leadership. Automated remediation can reduce the lag between detection and correction in a way that is much easier to demonstrate than manual ticket resolution.
This is also where metrics matter. Track percentage of auto-remediated findings, average time to remediation, false-positive rate, and the number of remediations that required human escalation. If you want to understand why disciplined measurement changes the conversation with finance or leadership, the same logic applies in the broader automation world described in automation ROI measurement. A good dashboard turns security work from a cost center into a measurable operational improvement.
Audit-ready logging patterns
Use structured JSON logs with stable field names, and include a remediation decision object. Avoid free-form string logs for anything compliance-sensitive. If you centralize logs in CloudWatch and export evidence to S3, use lifecycle rules and access controls that match your retention policy. Encrypt the evidence store, and make sure retention aligns with your legal and regulatory obligations.
Finally, test the audit trail itself. A remediation system that changes resources but fails to record those changes is not audit-ready. The point of automation in security is not only to act faster; it is to produce reliable evidence faster too.
Testing strategies for TypeScript remediation Lambdas
Unit tests for decision logic
Most remediation bugs live in decision logic, not in the AWS API call itself. Write unit tests for eligibility checks, finding classification, idempotency key generation, and action selection. Mock the AWS SDK and assert that no mutation calls are made in dry-run mode. Also test negative cases: wrong account, unsupported region, missing tags, stale finding, or conflicting policy.
Because TypeScript gives you stronger type guarantees, you can encode many assumptions in interfaces and discriminated unions. That reduces the number of runtime branches you need to test. For example, represent remediation actions as a union type like "enable-cloudtrail" | "enforce-encryption" | "rotate-iam-key" and validate the incoming finding against a schema before dispatch. This helps keep the code readable even as the playbook grows.
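A minimal dispatcher over that union might look like the following; the control-ID strings are illustrative placeholders:

```typescript
type RemediationAction = "enable-cloudtrail" | "enforce-encryption" | "rotate-iam-key";

// Maps a validated control ID onto a supported action, or null when the
// finding is out of scope. Control IDs are example values.
function selectAction(controlId: string): RemediationAction | null {
  const table: Record<string, RemediationAction> = {
    "CloudTrail.1": "enable-cloudtrail",
    "S3.4": "enforce-encryption",
    "IAM.3": "rotate-iam-key",
  };
  return table[controlId] ?? null;
}
```

Unit tests can then enumerate the table and assert that every entry has a registered handler, which catches a playbook added in one place but not the other.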
Integration tests with AWS mocks and sandbox accounts
Use local mocks for fast feedback, but do not stop there. Run integration tests in a dedicated sandbox account with representative IAM, bucket, and trail configurations. The best integration tests verify both success and rollback conditions, and they assert the evidence record as part of the expected output. You want confidence not just that the API call works, but that the whole remediation lifecycle is correct.
For teams building large operational systems, test strategy is often the difference between a trusted automation layer and one everyone fears. If you need a useful mental model, think of how teams validate event-heavy systems in production-like environments, similar to the reliability themes in high-volume capacity systems. The more realistic your test environment, the less surprising production becomes.
Failure injection and rollback drills
Security remediation should be exercised under failure. Simulate access denied, throttling, missing dependencies, and partial completion. Verify that retries do not duplicate work and that failures produce clear operational signals. You should also run periodic rollback drills to confirm that reversible changes can be undone by the same team that made them.
This is especially important for encryption and key rotation remediations, where the most likely operational risk is not the security change itself but its effect on dependent systems. A rollback drill reveals whether your observability, access control, and evidence collection are actually usable under pressure. Without this, you only know your code works in the happy path.
Operational guardrails and escalation design
When to automate and when to page a human
The simplest rule is this: automate deterministic fixes, page humans for ambiguous or potentially destructive ones. If a finding can be remediated with a well-known state transition and minimal business context, it is a good automation candidate. If remediation might affect availability, data access, or legal posture in a way that depends on the application owner, use a human gate. This distinction keeps automation helpful instead of reckless.
One practical method is to assign every control a remediation class: auto, auto-with-approval, or manual-only. Over time, some manual-only controls can move into the auto-with-approval bucket as you improve tooling and confidence. That continuous improvement mindset is similar to how organizations evolve operational playbooks in fast-moving environments, whether in security, infrastructure, or even broader change management.
How to communicate remediation outcomes
Every remediation should emit a concise outcome message that can be consumed by Slack, Jira, email, or a SIEM. The message should state what was changed, which control it addressed, whether verification succeeded, and whether human follow-up is needed. Keep the language specific and avoid vague success statements. “CloudTrail enabled for account 1234; evidence stored in s3://security-evidence/...” is much better than “Remediation complete.”
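A small formatter producing that style of message might look like this; the field names are assumptions chosen for the sketch:

```typescript
interface Outcome {
  accountId: string;
  controlId: string;
  change: string;       // e.g. "CloudTrail enabled"
  verified: boolean;
  evidenceUri: string;
  needsFollowUp: boolean;
}

// Renders one specific, single-line outcome message for Slack/Jira/SIEM.
function formatOutcome(o: Outcome): string {
  const verification = o.verified ? "verification succeeded" : "VERIFICATION FAILED";
  const followUp = o.needsFollowUp ? "; human follow-up required" : "";
  return `${o.change} for account ${o.accountId} (control ${o.controlId}); ${verification}; evidence at ${o.evidenceUri}${followUp}`;
}
```

Keeping the template in one function also makes the message format testable, so downstream parsers in a SIEM do not break silently.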
That style of reporting helps operations, security, and audit teams speak the same language. It also reduces the temptation to manually inspect every change, because the system already tells you what matters. Good automation makes confidence visible.
Metrics that prove the program works
Track at least five metrics: time to remediated finding, percentage auto-remediated, number of retries per control, percentage of remediations requiring human escalation, and number of regressions detected in testing. If you want a sixth metric, add total evidence artifacts generated per month. Those numbers tell you whether the system is becoming more effective and whether policy is narrowing or expanding in healthy ways.
The most persuasive security programs use metrics to demonstrate reduced risk and reduced operational burden. That is how automation earns trust. The same lesson appears in other domains where stakeholders need proof before they support scale, from workflow replacement business cases to other enterprise modernization efforts.
A practical rollout plan for your first 30 days
Week 1: inventory and classify
Start by identifying the top FSBP findings in your environment and grouping them by remediation type. Focus on the highest-volume alerts first, especially those that are repetitive and low risk. Confirm which findings can be safely auto-fixed, which need approval, and which must remain manual. At the same time, define your evidence schema and your logging standard.
Do not begin by writing code. Begin by writing policy. A remediation system without a policy model becomes an arbitrary script, and arbitrary scripts do not age well in security operations. The best programs treat policy as the contract and code as the implementation of that contract.
Week 2: build the first Lambda and dry-run it
Implement one remediation end to end, ideally CloudTrail enablement or S3 encryption defaulting, because both are common and easy to verify. Add dry-run mode, structured logging, and a DynamoDB dedupe record. Connect the Lambda to EventBridge but keep execution off or restricted to a sandbox. Validate the output against the audit schema and confirm that security and platform stakeholders can interpret it.
Week 3: test in sandbox and expand controls
Run success, failure, and retry tests against representative resources. Then add a second remediation, such as IAM key rotation or bucket encryption enforcement. Make sure the two remediations share common utilities but maintain separate policy decisions. This is where a clean TypeScript design pays off: shared infrastructure code, distinct control logic, and clear type boundaries.
Week 4: limited production release
Enable the automation in a small production scope, such as a single account or OU. Review all audit records, confirm no unexpected side effects, and tune alerting for failures and manual escalations. Once you trust the first control family, expand gradually. A controlled rollout is slower than a big bang, but it is dramatically safer and easier to defend.
FAQ
Can every Security Hub finding be auto-remediated?
No. Findings that depend on business context, service design, or potential downtime should not be automatically fixed. The safest candidates are deterministic configuration drifts with a clear desired state and minimal side effects.
Should I use one Lambda for all remediations?
Usually not. A single dispatcher can work for small environments, but separate functions or modules per control family are easier to test, secure, and reason about. Splitting by action also reduces the blast radius of a bad deployment.
How do I prevent repeated remediation of the same finding?
Use idempotency keys derived from finding ID, resource ID, control ID, and remediation version. Store the key in DynamoDB or another durable state store with a TTL so reprocessed events can be ignored safely.
What is the best first remediation to automate?
CloudTrail enablement is often one of the best first choices because the end state is clear, the risk is high, and the fix is easy to verify. Default encryption for S3 or EBS is another strong candidate if your environment supports it.
How do I make remediation audit-friendly?
Record the finding, action, before-and-after state, execution identity, timestamp, and evidence location. Use structured logs and store durable evidence in a restricted bucket or database so auditors can verify what changed and why.
What testing matters most?
Unit tests for decision logic matter most at first, followed by sandbox integration tests. After that, failure injection and rollback drills become essential because they reveal whether retries, permissions, and evidence handling behave correctly under stress.
Conclusion: build security automation like a product
Automated remediation is not just a convenience feature; it is a security control that shortens exposure windows and improves consistency. With TypeScript Lambdas, you can build remediations that are precise, testable, and maintainable, while still preserving the auditability and human oversight that security teams need. Start with the high-confidence controls: enable CloudTrail, enforce encryption, and rotate risky IAM keys. Then add guardrails, evidence, and rollout discipline so the system remains trustworthy as it grows.
The real win is not simply fewer findings. It is an operating model where remediation is fast, explainable, and repeatable. That is the point at which Security Hub stops being a dashboard and becomes part of your enforcement layer. If you want to keep expanding your automation program, a useful next step is to study broader patterns in operational automation architecture and the governance principles behind crawl and bot governance, because the same themes—policy, identity, auditability, and safe action—show up everywhere automation touches production systems.
Related Reading
- Glass-Box AI Meets Identity: Making Agent Actions Explainable and Traceable - A useful model for traceable action logs and human-readable automation intent.
- Designing Reliable Webhook Architectures for Payment Event Delivery - Strong patterns for retries, dedupe, and event delivery correctness.
- From Viral Lie to Boardroom Response: A Rapid Playbook for Deepfake Incidents - Helpful for thinking about rapid, auditable response workflows.
- IT Project Risk Register + Cyber-Resilience Scoring Template in Excel - A practical way to track rollout risk and remediation maturity.
- How to Track AI Automation ROI Before Finance Asks the Hard Questions - A good framework for proving automation value with metrics.
Jordan Ellis
Senior SEO Content Strategist
Senior editor and content strategist. Writing about technology, design, and the future of digital media. Follow along for deep dives into the industry's moving parts.